I subscribe to a physicalist view of consciousness. That is, I believe that consciousness is fully reducible to natural processes occurring within the material brain. Whilst we don’t yet have a full working theory of consciousness, I consider that such a theory is most likely attainable in principle. However, even if such a complete naturalistic theory can never be formulated, I do not believe that this implies that consciousness requires anything other than matter/energy operating within space/time, and certainly do not accept that anything supernatural is required. In such a case, I think that our inability to explain consciousness (and such related concepts as qualia) by natural means alone would be due to its complexity and to limits of our imagination, ingenuity, language, or knowledge, and would not be because consciousness is dependent upon some non-material substance or additional properties for its existence.
The theories of consciousness that propose the mind to be dependent upon some non-material substance, or upon some additional mental properties, are known as dualist theories. These can be broadly classified as either Substance Dualism, in which the mind is thought to be a separate non-material substance, or Property Dualism, which proposes that when matter is organized in the appropriate way (i.e. in the way that living human bodies are organized), mental properties emerge. Substance Dualism was notably defended by Descartes, but is today associated primarily with theology – where this non-material substance is identified with the soul.
Substance dualism seems to me to be multiply fallacious, and perhaps impossible to verify or falsify. I would contend that it falls foul of Occam’s razor, since it posits some unknown supernatural substance to explain the mind, which we know to be closely associated with the material brain, and leaves more questions unanswered than it actually answers. To the proponent of substance dualism, I would submit the following questions:
1. On your hypothesis, what exactly is this other ‘stuff’ that mind consists of?
2. Since the material brain is intimately tied up with consciousness, what exactly is it that this substance does that the material brain does not do, and how does it do this?
3. How do the material brain and this other stuff interact to produce consciousness?
4. Why is the material brain required to play a part in consciousness at all? Why not just have ‘mind’ that is completely independent of any physical brain?
5. How do you explain the fact that brain damage can result in a (temporary or permanent) loss of consciousness?
6. How do you explain that stimulating the brain in certain ways repeatedly causes certain feelings, memories, and thoughts?
7. How does this immaterial substance cause a physical effect, e.g. the firing of neurons and the release of chemicals?
8. At what point in human evolution did we acquire this ‘mind’, how did that happen, and where did it come from?
9. Do other animals have any of this substance?
10. At what point during the evolution of the universe did this mind stuff come into existence, and how did that happen?
11. Are we born with this mind substance already present, or does it enter later? If the latter, when does it enter, and how?
12. How might we verify or falsify your theory of consciousness?
Property Dualism does not suffer from many of the problems of substance dualism, but it still needs to explain how these proposed emergent properties of the mind are able to cause physical effects within the material brain.
My own particular view is that some non-naïve variety of Identity Theory is the most compelling explanation of consciousness (although I might be convinced by some other materialist theory of mind, such as Functionalism). This is a materialist theory that offers a strongly reductive option by identifying conscious mental properties, states and processes with physical ones, most typically of a neural or neurophysiological nature. In other words, our conscious experience when we have a thought, or when we have some phenomenal experience (such as seeing a colour, smelling a flower, feeling pain etc.), is identical with the specific brain state that accompanies this experience.
One might ask why physical processing within the brain should give rise to the rich inner life that we experience. However, on the Identity Theory, the notion that consciousness somehow ‘arises’ from something is a misconception – and one that has led to much debate about what this extra something is, and how it arises from and relates to the physical brain. I would contend that consciousness is in fact identical to physical brain states, so a thought is nothing more than a particular configuration of firing neurons, chemicals, and other physical phenomena within the brain. Likewise, the experience of seeing a colour, or feeling pain, is just our first-person experience of some brain state or other. These states are not necessarily the same from person to person, or even each time we experience something (token identity).
Now, this seems at first to be counterintuitive. How can a thought be nothing more than a brain state? Surely, our consciousness is more than this? In fact, the most intuitive idea would seem to be that our consciousness is something entirely separate from the physical brain – some other substance entirely. This is one of the appeals of Substance Dualism.
By contrast, Property Dualism, which is more modest in its claims, holds that there exist mental properties (i.e., characteristics or aspects of things) that are neither identical with nor reducible to physical properties. Conscious properties, such as the colour qualia involved in a conscious experience of a visual perception, cannot be explained in purely physical terms and, thus, are not themselves to be identified with any brain state or process.
However, on the Identity Theory, both of these viewpoints are mistaken. There is no other ‘substance’ of which mind is constituted, and there are no mental properties that are not reducible to the physical. Although this may indeed be counterintuitive, perhaps we should instead ask exactly why our first-person experience of brain states would not be what we call thoughts, memories, feelings, qualia etc. How should these brain states manifest themselves, if not in these ways?
The brain states and our conscious experiences are just two sides of the same coin, two ways of seeing the same phenomenon – one from the first-person, and the other from the third-person. I am saying that the correlation between the brain states and our experience is more than just a correlation; it is an identity. Heat and the motion of excited molecules are not merely correlated – they are different ways of looking at the same phenomenon. It is a (masked man) fallacy to say that we recognise heat, but do not recognise moving molecules, therefore heat is not moving molecules. In the same way, it is a fallacy to say that we know what thoughts and sensations are like, but do not know what brain states are like, therefore thoughts and sensations are not brain states.
Interestingly, when the brain is monitored while a subject sees something, and again when the subject is later asked to remember it, the neurons seem to fire in a similar way in both cases. This suggests that memories are just the brain recreating a simplified version of its internal state from when the event first happened.
On the Identity Theory, it is just a ‘brute fact’ that there are such identities, and the appearance of arbitrariness between brain properties and mental properties is just that – an apparent problem leading many to wonder about the alleged explanatory gap. Qualia would then just be identical to physical properties. There is therefore no real explanatory gap on this theory, and science will, in principle, be able to explain consciousness fully from a third-person point of view. Of course, as science explains things from a third-person point of view, it can never describe ‘what it feels like’ to smell a flower, since this is not the type of problem that it addresses, but I don’t think this implies that ‘what it feels like’ is some sort of knowledge that requires the mind to have additional non-physical properties.
This concept of qualia (properties of sensory experiences) is one that is often cited when arguing for dualism. However, I would suggest that the assumed connection between qualia and consciousness is actually a red herring. For example, how can we be so sure that a dog does not experience qualia? Why would it not experience some feeling of what it is like to scratch itself, or eat a bone, or see its owner return? Why would a simple creature like a butterfly not experience some feeling of ‘attractiveness’ in a flower that it is drawn towards? Colour vision is present in some lower animals, so why wouldn’t they have some experience of ‘redness’? Hence, I would say that lower animals likely have experience of qualia, and science cannot describe how this ‘feels’ to them, but that doesn’t imply that they possess some sort of non-physical mental property.
I would suggest that qualia are merely functions of perception, and are thus likely to be present in other animals too. We are conscious of our perception of qualia, because we are conscious, but that does not imply that consciousness is a prerequisite for experiencing qualia, or that qualia in themselves tell us something interesting about consciousness. Our experience of qualia will likely vary from that present in lower animals, as we have an additional layer of self-awareness that they don’t seem to possess. However, in my opinion, the experience of qualia alone cannot be used to justify property dualism.
The so-called Knowledge Argument is sometimes used against such physicalist theories. This is commonly expressed by recourse to two famous thought experiments. In the first, courtesy of Thomas Nagel, we are asked to imagine what it would be like to be a bat. We might know everything about the working of the bat’s brain from a scientific point of view, but we still cannot imagine actually being a bat. In other words, whilst we might know all physical facts about a bat’s brain, there is still some knowledge missing – the bat’s first-person experience of being a bat. Therefore, so the argument goes, materialist theories of consciousness are flawed.
However, this argument does not refute the theory that the bat’s conscious experience is identical with its brain states. The fact that we can never know this experience from a first-person point of view does not mean that it is not a purely naturalistic process. After all, the only way in which we might experience being a bat would be for us to run some kind of ‘bat simulation’ within our own brain. Even if this were ever scientifically feasible, we would still lack the language to be able to express our experience – not to mention that the memories of this utterly alien event would need to be translated into some other form to be stored within our own brain. This would, by its very nature, distort and anthropomorphise the event.
The other well-known thought experiment (created by Frank Jackson) concerns a person called Mary, who is kept in a black and white room from birth. She becomes a brilliant neuroscientist, and an expert on colour perception. She knows all of the physical facts about human colour perception, but has never seen, for example, the colour red for herself. Upon leaving the room, she experiences red for the first time, and apparently learns some new fact that she didn’t know before. Since we presumed that she knew all physical facts about colour perception, this new fact must be non-physical. Therefore, the materialist viewpoint is shown to be fallacious.
However, on the Identity Theory, Mary is not learning a new fact; she is merely learning the same fact, but from a different point of view – first-person rather than third-person. The thought experiment does not refute the identity between the brain state and Mary’s experience of seeing red, so the metaphysics of the physicalist perspective is not challenged.
Actually, I think that the Mary's Room thought experiment is an example of the Masked Man Fallacy.
Here is an example of this fallacy (courtesy of Stephen Law):
1. John Wayne is someone that Michael knows to have appeared in True Grit
2. Marion Morrison is not someone that Michael knows to have appeared in True Grit
3. Therefore, Marion Morrison is not John Wayne
But this conclusion is fallacious, as John Wayne was Marion Morrison's stage name. This illustrates that personal knowledge or belief is not a property of an object that can be used to disprove identity.
The Mary argument can be rephrased in a way that follows the example above, and makes the fallacy much more evident:
1. Mary has all the physical information concerning human colour vision before her release.
2. Upon her release, Mary believes that she comes to know new information about human colour vision.
3. Therefore, not all information is physical information.
Now we can see the problem. Mary might believe that she comes to know new information upon her release but, in reality, she is just seeing the same information from a different perspective. So, she comes to know no new facts about human colour vision, but just sees the same facts in a different way. Therefore, this is an example of the masked man fallacy.
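The contrast can be made precise. For a genuine property of the object itself, the inference from differing properties to non-identity is perfectly valid – this is just Leibniz’s law. A minimal sketch in Lean (the theorem name is my own choice):

```lean
-- Leibniz's law: if a has a genuine property P that b lacks, then a ≠ b.
theorem distinct_of_property {α : Type} (P : α → Prop) (a b : α)
    (ha : P a) (hb : ¬ P b) : a ≠ b :=
  fun hab => hb (hab ▸ ha)  -- assuming a = b, rewrite P a to P b, contradiction
```

The masked man and Mary arguments go wrong because ‘known by Michael to have appeared in True Grit’ and ‘known by Mary before her release’ are not properties P of the object itself; they depend on how the object is presented to the knower, so the valid schema above does not apply to them.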
Paul Churchland (in the essay 'On functionalism and materialism’) makes a similar point when discussing this argument. He suggests that it is making an error of equivocation with the word ‘know’. Knowing in the sense of knowing physical facts is not the same as knowing based upon first-hand experience of the phenomenon. Furthermore, we can allow for a duality or multiplicity of types of knowing, without committing to there being non-physical information. Rather, there are multiple mediums through which we can come to ‘know’ the same facts.
To summarise, I believe that thoughts, memories, sensory experiences etc. are just our first-person experience of brain states. That seems quite alien to us, but the fact that we don't recognise thoughts and sensations as being identical with brain states doesn't mean that they are not. Consciousness is just what you get when you have a material brain consisting of billions of neurons, and a feedback loop that gives it self-awareness (not located in any particular spot – this is a category error). There is no additional non-material substance required, nor are there any emergent mental properties. Science may never be able to describe what it ‘feels’ like to see the colour red, for example, but I think it is an unjustified leap to think that this implies that the brain possesses some non-physical properties. The limitation here is that we are trying to describe first-person experiences in third-person terms – and failing. In my opinion, the more parsimonious explanation is that our first-person experiences just ‘are’ these brain states, and the limitation is just one of description.
For an interesting discussion of qualia, see this. For an interesting paper about the explanatory gap, and why it may be illusory, see this and also the article by Thomas Clark here.