Will Neuroscience Ever Provide a Theory of Consciousness?
Perhaps the hard problem is an obstacle that science cannot get past.
Several years ago the South African neuroscientist Mark Solms claimed to have located the source of consciousness in the brain and identified its role. Solms’ theory rejects the idea that consciousness is a higher cortical function and argues instead that we should be looking in the brain stem. We already know that lesions in a small area of the brain stem, in particular a small part called the reticular activating system, cause consciousness to cease, while extensive or almost total damage to the cortex apparently does not; these facts are a key part of Solms’ case.
More importantly, Solms also argues that by treating consciousness as a global workspace or a feature of higher cognition we ignore its real role, which he believes is rooted in affective states, or feeling, something in which the brain stem is crucially involved. His argument is that feelings need to be conscious, unlike certain kinds of information processing that can occur unconsciously, and that feelings are a vital means by which an organism maintains and regulates homeostasis. Emotions are valences that relate to a continuous need for equilibrium; they are how we stay on top of entropy.
Solms’ theory is very interesting, and it certainly presents new avenues for exploring the prerequisites for consciousness in the brain, but it comes with quite a few problems. First, the fact that the reticular activating system is required for consciousness doesn’t prove it is key to consciousness’s actual contents or function: my heart is essential for my consciousness to continue functioning, but that doesn’t mean my heart causes or is consciousness¹. Second, the claim that cortical function doesn’t require consciousness because we can seemingly process images or words subliminally is questionable, as is the claim that emotions or feelings are conscious by definition²: they are conscious by definition only so long as they refer to the experience of those feelings, yet you can be generally sad or angry in a way that manifests more as behaviour than as feeling. Third, the theory doesn’t work as a “theory of everything” for consciousness in the brain, since plenty of other forms of thinking and awareness that are not feelings are clearly also conscious.
Then, most significantly, there is the problem shared by many theories of consciousness that emerge proposing radically different accounts of what consciousness is doing in the brain: the theory claims to solve the ‘hard problem,’ and it doesn’t. Saying that affective states need to be conscious, or even are conscious by definition, isn’t a solution. It’s simply the move that many popular consciousness theories make: they identify consciousness with a certain feature (e.g. attention in AST or integrated information in IIT) and then say that because consciousness is that feature, any mechanistic explanation of how that feature might function in the brain somehow explains consciousness.
It is a popular idea in physicalist neuroscience that the hard problem won’t be solved so much as dissolved as we come to understand more about the brain. The neuroscientist Anil Seth compares the hard problem to the belief in vitalism, which was never so much disproved as gradually dissolved by the accumulating understanding of the various processes that compose living organisms. Rather than being debunked or solved, vitalism just went away. Seth claims the same will happen with the hard problem of consciousness.
Of course, we don’t know what we don’t know, but it seems unlikely that this will be the case with consciousness. Solms’ theory, for example, doesn’t offer even the slightest scent of a trail towards understanding how charged ions flowing through neurons produce a qualitative state of experience. If we assume that affective feelings are indeed central both to what consciousness is and to an organism’s survival, the question is: at what stage in the process does the conscious event occur? Let’s say I am hot, and besides systemic responses such as sweating or panting, I experience the feeling of overheating, providing a signal that directs my behaviour towards finding some way of cooling down. Why does the actual conscious experience need to be there? Why not just the signal and the behaviour?
Then there is the question of where the actual subjective state of experience is located: at some point you would have to argue that a physical signal is transformed into a mental one in order to be “experienced” from some purely subjective position before being returned to unconscious behavioural function. Putting aside the fact that this seems like a clumsy addition rather than something necessary, how can subjective experience have causal power unless it is simply a correlate of a physical state? And if it is, why does it need to be conscious? In the end you are left with the confession that an arbitrary assumption remains: some part of the brain’s sequence of processes, however we agree to delineate it, is conscious, and we don’t know how or why.
So must we accept that no theory of neuroscientific process can actually cross this bridge? There is certainly reason to assume we can and will develop much more sophisticated understandings of which parts of the brain are involved in conscious states and of what consciousness, or at least whatever consciousness is correlated with, is doing in the brain. But the problem seems to remain unsolved.
It seems that the category error highlighted by the hard problem of consciousness is already wired into the very language in which we expect scientific explanations to be framed. Deeper descriptions of the brain’s function don’t quite get at the problem of understanding how subjective and objective properties can be descriptions of the same thing. The problem of consciousness does not tell us that our neuroscience is inadequate; it tells us that our ontology is inadequate. And perhaps it’s reasonable to accept that neuroscience itself is not likely to be the place where we find a radical upturning of our understanding of the nature of being. You could even argue that this expectation is itself a hindrance to the progress of neuroscience, since instead of expecting theories simply to broaden our understanding of the brain, the expectation lurks that they have to be framed as solutions to the ‘hard problem.’
Since the hard problem is a philosophical as much as a scientific observation, it may be that solutions will only ever lie in thought experiments or observations that generate more parsimonious perspectives, rather than in proof of any position. The barrier of observational equivalence between positions such as materialism and panpsychism means they may remain as they seem to be now, like that picture of a rabbit that is also a duck: those holding each position see their own perspective as obvious and any other as patently absurd. Given that clear correlations between consciousness and the brain are already obvious, not least the effects of anaesthesia, it seems questionable that getting any more specific would change anyone’s position. There are no doubt many fascinating breakthroughs in neuroscience ahead, but consciousness may always remain an open-ended question.
*
1. I’m obviously simplifying his argument here for brevity; for more detail, his book on the thesis is called “The Hidden Spring”, or you can see him present his idea in a lecture here.
2. Solms cites as evidence a study in which images of two people are flashed subliminally in front of subjects, one with the word “rapist” written underneath. When asked afterwards which of the two people they dislike, subjects pick the “rapist” image with apparent statistical significance; hence, Solms says, the processing of image and language does not require consciousness, but feeling still does. I take issue with this conclusion. Firstly, the subjects can’t actually say why they prefer one image; all they have is, ironically, a “feeling.” And the claim that feelings are by definition conscious seems to me to be actually refuted by these kinds of studies: if you flash subliminal images of people of a different race, for example, studies have shown it produces an unconscious response in the amygdala, which you would call a response of feeling. Again, feelings only need to be conscious so long as you define them as conscious, in which case the claim is circular anyway.
The hard problem tells us that “physicalist” ontology is inadequate. There is no hard problem of consciousness for non-physicalists.
What I find surprising about the entire popular conversation is how many physicalists completely miss the point. They focus on providing a neuroscientific explanation, but that means they’ve already assumed physicalism is true.
They’ve ignored the hard problem, not solved it. The hard problem isn’t a scientific problem at all. It’s a discussion in philosophy of mind.
It’s only because people can’t drop their physicalist assumptions that they think neuroscience is even relevant.
Your title is an accurate rewording of the hard problem, but instead of a question we should state the hard problem of consciousness as: neuroscience can't explain consciousness.
Maybe that would force the physicalists to tell us why we should believe it can, rather than endlessly speculating about different brain functions and correlations as if it were relevant.