Explaining the nature of consciousness is one of the most important and perplexing areas of philosophy, but the concept is notoriously ambiguous. The abstract noun “consciousness” is not frequently used by itself in the contemporary literature, but is originally derived from the Latin con (with) and scire (to know). Perhaps the most commonly used contemporary notion of a conscious mental state is captured by Thomas Nagel’s famous “what it is like” sense (Nagel 1974). When I am in a conscious mental state, there is something it is like for me to be in that state from the subjective or first-person point of view. But how are we to understand this? For instance, how is the conscious mental state related to the body? Can consciousness be explained in terms of brain activity? What makes a mental state a conscious mental state? The problem of consciousness is arguably the most central issue in current philosophy of mind and is also importantly related to major traditional topics in metaphysics, such as the possibility of immortality and the belief in free will. This article focuses on Western theories and conceptions of consciousness, especially as found in contemporary analytic philosophy of mind.
The two broad, traditional and competing theories of mind are dualism and materialism (or physicalism). While there are many versions of each, the former generally holds that the conscious mind or a conscious mental state is non-physical in some sense, whereas the latter holds that, to put it crudely, the mind is the brain, or is caused by neural activity. It is against this general backdrop that many answers to the above questions are formulated and developed. There are also many familiar objections to both materialism and dualism. For example, it is often said that materialism cannot truly explain just how or why some brain states are conscious, and that there is an important “explanatory gap” between mind and matter. On the other hand, dualism faces the problem of explaining how a non-physical substance or mental state can causally interact with the physical body.
Some philosophers attempt to explain consciousness directly in neurophysiological or physical terms, while others offer cognitive theories of consciousness whereby conscious mental states are reduced to some kind of representational relation between mental states and the world. There are a number of such representational theories of consciousness currently on the market, including higher-order theories which hold that what makes a mental state conscious is that the subject is aware of it in some sense. The relationship between consciousness and science is also central in much current theorizing on this topic: How does the brain “bind together” various sensory inputs to produce a unified subjective experience? What are the neural correlates of consciousness? What can be learned from abnormal psychology which might help us to understand normal consciousness? To what extent are animal minds different from human minds? Could an appropriately programmed machine be conscious?
Table of Contents
- Terminological Matters: Various Concepts of Consciousness
- Some History on the Topic
- The Metaphysics of Consciousness: Materialism vs. Dualism
- Specific Theories of Consciousness
- Consciousness and Science: Key Issues
- Animal and Machine Consciousness
- References and Further Reading
The concept of consciousness is notoriously ambiguous. It is important first to make several distinctions and to define related terms. The abstract noun “consciousness” is not often used in the contemporary literature, though it should be noted that it is originally derived from the Latin con (with) and scire (to know). Thus, “consciousness” has etymological ties to one’s ability to know and perceive, and should not be confused with conscience, which has the much more specific moral connotation of knowing when one has done or is doing something wrong. Through consciousness, one can have knowledge of the external world or one’s own mental states. The primary contemporary interest lies more in the use of the expressions “x is conscious” or “x is conscious of y.” Under the former category, perhaps most important is the distinction between state and creature consciousness (Rosenthal 1993a). We sometimes speak of an individual mental state, such as a pain or perception, as conscious. On the other hand, we also often speak of organisms or creatures as conscious, such as when we say “human beings are conscious” or “dogs are conscious.” Creature consciousness is also simply meant to refer to the fact that an organism is awake, as opposed to sleeping or in a coma. However, some kind of state consciousness is often implied by creature consciousness, that is, the organism is having conscious mental states. Due to the lack of a direct object in the expression “x is conscious,” this is usually referred to as intransitive consciousness, in contrast to transitive consciousness where the locution “x is conscious of y” is used (Rosenthal 1993a, 1997). Most contemporary theories of consciousness are aimed at explaining state consciousness; that is, explaining what makes a mental state a conscious mental state.
It might seem that “consciousness” is synonymous with, say, “awareness,” “experience,” or “attention.” However, it is crucial to recognize that this is not generally accepted today. For example, though perhaps somewhat atypical, one might hold that there are even unconscious experiences, depending of course on how the term “experience” is defined (Carruthers 2000). More common is the belief that we can be aware of external objects in some unconscious sense, for example, during cases of subliminal perception. The expression “conscious awareness” does not therefore seem to be redundant. Finally, it is not clear that consciousness ought to be restricted to attention. It seems plausible to suppose that one is conscious (in some sense) of objects in one’s peripheral visual field even though one is only attending to some narrow (focal) set of objects within that visual field.
Perhaps the most fundamental and commonly used notion of “conscious” is captured by Thomas Nagel’s famous “what it is like” sense (Nagel 1974). When I am in a conscious mental state, there is “something it is like” for me to be in that state from the subjective or first-person point of view. When I am, for example, smelling a rose or having a conscious visual experience, there is something it “seems” or “feels” like from my perspective. An organism, such as a bat, is conscious if it is able to experience the outer world through its (echo-locatory) senses. There is also something it is like to be a conscious creature whereas there is nothing it is like to be, for example, a table or tree. This is primarily the sense of “conscious state” that will be used throughout this entry. There is still, though, a cluster of expressions and terms related to Nagel’s sense, and some authors simply stipulate the way that they use such terms. For example, philosophers sometimes refer to conscious states as phenomenal or qualitative states. More technically, philosophers often view such states as having qualitative properties called “qualia” (pronounced like “kwal’ ee uh”; the singular is quale). There is significant disagreement over the nature, and even the existence, of qualia, but they are perhaps most frequently understood as the felt properties or qualities of conscious states.
Ned Block (1995) makes an often cited distinction between phenomenal consciousness (or “phenomenality”) and access consciousness. The former is very much in line with the Nagelian notion described above. However, Block also defines the quite different notion of access consciousness in terms of a mental state’s relationship with other mental states; for example, a mental state’s “availability for use in reasoning and rationally guiding speech and action” (Block 1995: 227). This would, for example, count a visual perception as (access) conscious not because it has the “what it’s likeness” of phenomenal states, but rather because it carries visual information which is generally available for use by the organism, regardless of whether or not it has any qualitative properties. Access consciousness is therefore more of a functional notion; that is, concerned with what such states do. Although this concept of consciousness is certainly very important in cognitive science and philosophy of mind generally, not everyone agrees that access consciousness deserves to be called “consciousness” in any important sense. Block himself argues that neither sense of consciousness implies the other, while others urge that there is a more intimate connection between the two.
Finally, it is helpful to distinguish between consciousness and self-consciousness, which plausibly involves some kind of awareness or consciousness of one’s own mental states (instead of something out in the world). Self-consciousness arguably comes in degrees of sophistication ranging from minimal bodily self-awareness to the ability to reason and reflect on one’s own mental states, such as one’s beliefs and desires. Some important historical figures have even held that consciousness entails some form of self-consciousness (Kant 1781/1965, Sartre 1956), a view shared by some contemporary philosophers (Gennaro 1996a, Kriegel 2004).
Interest in the nature of conscious experience has no doubt been around for as long as there have been reflective humans. It would be impossible here to survey the entire history, but a few highlights are in order. In the history of Western philosophy, which is the focus of this entry, important writings on human nature and the soul and mind go back to ancient philosophers, such as Plato. More sophisticated work on the nature of consciousness and perception can be found in the work of Plato’s most famous student Aristotle (see Caston 2002), and then throughout the later Medieval period. It is, however, with the work of René Descartes (1596-1650) and his successors in the early modern period of philosophy that consciousness and the relationship between the mind and body took center stage. As we shall see, Descartes argued that the mind is a non-physical substance distinct from the body. He also did not believe in the existence of unconscious mental states, a view certainly not widely held today. Descartes defined “thinking” very broadly to include virtually every kind of mental state and urged that consciousness is essential to thought. Our mental states are, according to Descartes, infallibly transparent to introspection. John Locke (1689/1975) held a similar position regarding the connection between mentality and consciousness, but was far less committed on the exact metaphysical nature of the mind.
Perhaps the most important philosopher of the period explicitly to endorse the existence of unconscious mental states was G.W. Leibniz (1686/1991, 1720/1925). Although Leibniz also believed in the immaterial nature of mental substances (which he called “monads”), he recognized the existence of what he called “petites perceptions,” which are basically unconscious perceptions. He also importantly distinguished between perception and apperception, roughly the difference between outer-directed consciousness and self-consciousness (see Gennaro 1999 for some discussion). The most important detailed theory of mind in the early modern period was developed by Immanuel Kant. His main work Critique of Pure Reason (1781/1965) is as dense as it is important, and cannot easily be summarized in this context. Although he owes a great debt to his immediate predecessors, Kant is arguably the most important philosopher since Plato and Aristotle and is highly relevant today. Kant basically thought that an adequate account of phenomenal consciousness involved far more than any of his predecessors had considered. There are important mental structures which are “presupposed” in conscious experience, and Kant presented an elaborate theory as to what those structures are, which, in turn, had other important implications. He, like Leibniz, also saw the need to postulate the existence of unconscious mental states and mechanisms in order to provide an adequate theory of mind (Kitcher 1990 and Brook 1994 are two excellent books on Kant’s theory of mind).
Over the past one hundred years or so, however, research on consciousness has taken off in many important directions. In psychology, with the notable exception of the virtual banishment of consciousness by behaviorist psychologists (e.g., Skinner 1953), there were also those deeply interested in consciousness and various introspective (or “first-person”) methods of investigating the mind. The writings of such figures as Wilhelm Wundt (1897), William James (1890) and Edward Titchener (1901) are good examples of this approach. Franz Brentano (1874/1973) also had a profound effect on some contemporary theories of consciousness. Similar introspectionist approaches were used by those in the so-called “phenomenological” tradition in philosophy, such as in the writings of Edmund Husserl (1913/1931, 1929/1960) and Martin Heidegger (1927/1962). The work of Sigmund Freud was very important, at minimum, in bringing about the near universal acceptance of the existence of unconscious mental states and processes.
It must, however, be kept in mind that none of the above had very much scientific knowledge about the detailed workings of the brain. The relatively recent development of neurophysiology is, in part, also responsible for the unprecedented interdisciplinary research interest in consciousness, particularly since the 1980s. There are now several important journals devoted entirely to the study of consciousness: Consciousness and Cognition, Journal of Consciousness Studies, and Psyche. There are also major annual conferences sponsored by worldwide professional organizations, such as the Association for the Scientific Study of Consciousness, and an entire book series called “Advances in Consciousness Research” published by John Benjamins. (For a small sample of introductory texts and important anthologies, see Kim 1996, Gennaro 1996b, Block et al. 1997, Seager 1999, Chalmers 2002, Baars et al. 2003, Blackmore 2004, Campbell 2005, Velmans and Schneider 2007, Zelazo et al. 2007, Revonsuo 2010.)
Metaphysics is the branch of philosophy concerned with the ultimate nature of reality. There are two broad traditional and competing metaphysical views concerning the nature of the mind and conscious mental states: dualism and materialism. While there are many versions of each, the former generally holds that the conscious mind or a conscious mental state is non-physical in some sense. On the other hand, materialists hold that the mind is the brain, or, more accurately, that conscious mental activity is identical with neural activity. It is important to recognize that by non-physical, dualists do not merely mean “not visible to the naked eye.” Many physical things fit this description, such as the atoms which make up the air in a typical room. For something to be non-physical, it must literally be outside the realm of physics; that is, not in space at all and undetectable in principle by the instruments of physics. It is equally important to recognize that the category “physical” is broader than the category “material.” Materialists are called such because there is the tendency to view the brain, a material thing, as the most likely physical candidate to identify with the mind. However, something might be physical but not material in this sense, such as an electromagnetic or energy field. One might therefore instead be a “physicalist” in some broader sense and still not a dualist. Thus, to say that the mind is non-physical is to say something much stronger than that it is non-material. Dualists, then, tend to believe that conscious mental states or minds are radically different from anything in the physical world at all.
There are a number of reasons why some version of dualism has been held throughout the centuries. For one thing, especially from the introspective or first-person perspective, our conscious mental states just do not seem like physical things or processes. That is, when we reflect on our conscious perceptions, pains, and desires, they do not seem to be physical in any sense. Consciousness seems to be a unique aspect of the world not to be understood in any physical way. Although materialists will urge that this completely ignores the more scientific third-person perspective on the nature of consciousness and mind, this idea continues to have force for many today. Indeed, it is arguably the crucial underlying intuition behind historically significant “conceivability arguments” against materialism and for dualism. Such arguments typically reason from the premise that one can conceive of one’s conscious states existing without one’s body or, conversely, that one can imagine one’s own physical duplicate without consciousness at all (see section 3b.iv). The metaphysical conclusion ultimately drawn is that consciousness cannot be identical with anything physical, partly because there is no essential conceptual connection between the mental and the physical. Arguments such as these go back to Descartes and continue to be used today in various ways (Kripke 1972, Chalmers 1996), but it is highly controversial as to whether they succeed in showing that materialism is false. Materialists have replied in various ways to such arguments and the relevant literature has grown dramatically in recent years.
Historically, there is also the clear link between dualism and a belief in immortality, and hence a more theistic perspective than one tends to find among materialists. Indeed, belief in dualism is often explicitly theologically motivated. If the conscious mind is not physical, it seems more plausible to believe in the possibility of life after bodily death. On the other hand, if conscious mental activity is identical with brain activity, then it would seem that when all brain activity ceases, so do all conscious experiences and thus no immortality. After all, what do many people believe continues after bodily death? Presumably, one’s own conscious thoughts, memories, experiences, beliefs, and so on. There is perhaps a similar historical connection to a belief in free will, which is of course a major topic in its own right. For our purposes, it suffices to say that, on some definitions of what it is to act freely, such ability seems almost “supernatural” in the sense that one’s conscious decisions can alter the otherwise deterministic sequence of events in nature. To put it another way: If we are entirely physical beings as the materialist holds, then mustn’t all of the brain activity and behavior in question be determined by the laws of nature? Although materialism may not logically rule out immortality or free will, materialists will likely often reply that such traditional, perhaps even outdated or pre-scientific beliefs simply ought to be rejected to the extent that they conflict with materialism. After all, if the weight of the evidence points toward materialism and away from dualism, then so much the worse for those related views.
One might wonder “even if the mind is physical, what about the soul?” Maybe it’s the soul, not the mind, which is non-physical, as one might be told in many religious traditions. While it is true that the term “soul” (or “spirit”) is often used instead of “mind” in such religious contexts, the problem is that it is unclear just how the soul is supposed to differ from the mind. The terms are often even used interchangeably in many historical texts and by many philosophers because it is unclear what else the soul could be other than “the mental substance.” It is difficult to describe the soul in any way that doesn’t make it sound like what we mean by the mind. After all, that’s what many believe goes on after bodily death; namely, conscious mental activity. Granted, the term “soul” carries a more theological connotation, but it doesn’t follow that the words “soul” and “mind” refer to entirely different things. Somewhat related to the issue of immortality, the existence of near-death experiences is also used as some evidence for dualism and immortality. Such patients experience a peaceful moving toward a light through a tunnel-like structure, or are able to see doctors working on their bodies while hovering over them in an emergency room (sometimes akin to what is called an “out of body experience”). In response, materialists will point out that such experiences can be artificially induced in various experimental situations, and that starving the brain of oxygen is known to cause hallucinations.
Various paranormal and psychic phenomena, such as clairvoyance, faith healing, and mind-reading, are sometimes also cited as evidence for dualism. However, materialists (and even many dualists) will first likely wish to be skeptical of the alleged phenomena themselves for numerous reasons. There are many modern day charlatans who should make us seriously question whether there really are such phenomena or mental abilities in the first place. Second, it is not quite clear just how dualism follows from such phenomena even if they are genuine. A materialist, or physicalist at least, might insist that though such phenomena are puzzling and perhaps currently difficult to explain in physical terms, they are nonetheless ultimately physical in nature; for example, having to do with very unusual transfers of energy in the physical world. The dualist advantage is perhaps not as obvious as one might think, and we need not jump to supernatural conclusions so quickly.
Interactionist Dualism or simply “interactionism” is the most common form of “substance dualism” and its name derives from the widely accepted fact that mental states and bodily states causally interact with each other. For example, my desire to drink something cold causes my body to move to the refrigerator and get something to drink and, conversely, kicking me in the shin will cause me to feel a pain and get angry. Due to Descartes’ influence, it is also sometimes referred to as “Cartesian dualism.” Knowing nothing about just where such causal interaction could take place, Descartes speculated that it was through the pineal gland, a now almost humorous conjecture. But a modern day interactionist would certainly wish to treat various areas of the brain as the location of such interactions.
Three serious objections are briefly worth noting here. The first is simply the issue of just how such radically different substances do or could causally interact. How can something non-physical causally interact with something physical, such as the brain? No such explanation is forthcoming or is perhaps even possible, according to materialists. Moreover, if causation involves a transfer of energy from cause to effect, then how is that possible if the mind is really non-physical? Gilbert Ryle (1949) mockingly calls the Cartesian view about the nature of mind a belief in the “ghost in the machine.” Secondly, assuming that some such energy transfer makes any sense at all, it is also then often alleged that interactionism is inconsistent with the scientifically well-established Conservation of Energy principle, which says that the total amount of energy in the universe, or any controlled part of it, remains constant. So any loss of energy in the cause must be passed along as a corresponding gain of energy in the effect, as in standard billiard ball examples. But if interactionism is true, then when mental events cause physical events, energy would literally come into the physical world. On the other hand, when bodily events cause mental events, energy would literally go out of the physical world. At the least, there is a very peculiar and unique notion of energy involved, unless one wished, even more radically, to deny the conservation principle itself. Third, some materialists might also use the well-known fact that brain damage (even to very specific areas of the brain) causes mental defects as a serious objection to interactionism (and thus as support for materialism). This has of course been known for many centuries, but the level of detailed knowledge has increased dramatically in recent years.
Now a dualist might reply that such phenomena do not absolutely refute her metaphysical position, since damage to the brain might simply cause corresponding damage to the mind. However, this raises a host of other questions: Why not opt for the simpler explanation, i.e., that brain damage causes mental damage because mental processes simply are brain processes? If the non-physical mind is damaged when brain damage occurs, how does that leave one’s mind according to the dualist’s conception of an afterlife? Will the severe amnesic at the end of life on Earth retain such a deficit in the afterlife? If proper mental functioning still depends on proper brain functioning, then is dualism really in any better position to offer hope for immortality?
It should be noted that there is also another less popular form of substance dualism called parallelism, which denies the causal interaction between the non-physical mental and physical bodily realms. It seems fair to say that it encounters even more serious objections than interactionism.
While a detailed survey of all varieties of dualism is beyond the scope of this entry, it is at least important to note here that the main and most popular form of dualism today is called property dualism. Substance dualism has largely fallen out of favor at least in most philosophical circles, though there are important exceptions (e.g., Swinburne 1986, Foster 1996) and it often continues to be tied to various theological positions. Property dualism, on the other hand, is a more modest version of dualism and it holds that there are mental properties (that is, characteristics or aspects of things) that are neither identical with nor reducible to physical properties. There are actually several different kinds of property dualism, but what they have in common is the idea that conscious properties, such as the color qualia involved in a conscious experience of a visual perception, cannot be explained in purely physical terms and, thus, are not themselves to be identified with any brain state or process.
Two other views worth mentioning are epiphenomenalism and panpsychism. The latter is the somewhat eccentric view that all things in physical reality, even down to micro-particles, have some mental properties. All substances have a mental aspect, though it is not always clear exactly how to characterize or test such a claim. Epiphenomenalism holds that mental events are caused by brain events but those mental events are mere “epiphenomena” which do not, in turn, cause anything physical at all, despite appearances to the contrary (for a recent defense, see Robinson 2004).
Finally, although not a form of dualism, idealism holds that there are only immaterial mental substances, a view more common in the Eastern tradition. The most prominent Western proponent of idealism was the 18th-century empiricist George Berkeley. The idealist agrees with the substance dualist that minds are non-physical, but then denies the existence of mind-independent physical substances altogether. Such a view faces a number of serious objections, and it also requires a belief in the existence of God.
Some form of materialism is probably much more widely held today than in centuries past. No doubt part of the reason for this has to do with the explosion in scientific knowledge about the workings of the brain and its intimate connection with consciousness, including the close connection between brain damage and various states of consciousness. Brain death is now the main criterion for when someone dies. Stimulation to specific areas of the brain results in modality specific conscious experiences. Indeed, materialism often seems to be a working assumption in neurophysiology. Imagine saying to a neuroscientist “you are not really studying the conscious mind itself” when she is examining the workings of the brain during an fMRI. The idea is that science is showing us that conscious mental states, such as visual perceptions, are simply identical with certain neuro-chemical brain processes; much like the science of chemistry taught us that water just is H2O.
There are also theoretical factors on the side of materialism, such as adherence to the so-called “principle of simplicity” which says that if two theories can equally explain a given phenomenon, then we should accept the one which posits fewer objects or forces. In this case, even if dualism could equally explain consciousness (which would of course be disputed by materialists), materialism is clearly the simpler theory in so far as it does not posit any objects or processes over and above physical ones. Materialists will wonder why there is a need to believe in the existence of such mysterious non-physical entities. Moreover, in the aftermath of the Darwinian revolution, it would seem that materialism is on even stronger ground provided that one accepts basic evolutionary theory and the notion that most animals are conscious. Given the similarities between the more primitive parts of the human brain and the brains of other animals, it seems most natural to conclude that, through evolution, increasing layers of brain areas correspond to increased mental abilities. For example, having a well-developed prefrontal cortex allows humans to reason and plan in ways not available to dogs and cats. It also seems fairly uncontroversial to hold that we should be materialists about the minds of animals. If so, then it would be odd indeed to hold that non-physical conscious states suddenly appear on the scene with humans.
There are still, however, a number of much discussed and important objections to materialism, most of which question the notion that materialism can adequately explain conscious experience.
Joseph Levine (1983) coined the expression “the explanatory gap” to express a difficulty for any materialistic attempt to explain consciousness. Although not concerned to reject the metaphysics of materialism, Levine gives eloquent expression to the idea that there is a key gap in our ability to explain the connection between phenomenal properties and brain properties (see also Levine 1993, 2001). The basic problem is that it is, at least at present, very difficult for us to understand the relationship between brain properties and phenomenal properties in any explanatorily satisfying way, especially given the fact that it seems possible for one to be present without the other. There is an odd kind of arbitrariness involved: Why or how does some particular brain process produce that particular taste or visual sensation? It is difficult to see any real explanatory connection between specific conscious states and brain states in a way that explains just how or why the former are identical with the latter. There is therefore an explanatory gap between the physical and mental. Levine argues that this difficulty in explaining consciousness is unique; that is, we do not have similar worries about other scientific identities, such as that “water is H2O” or that “heat is mean molecular kinetic energy.” There is “an important sense in which we can’t really understand how [materialism] could be true” (2001: 68).
David Chalmers (1995) has articulated a similar worry by using the catchy phrase “the hard problem of consciousness,” which basically refers to the difficulty of explaining just how physical processes in the brain give rise to subjective conscious experiences. The “really hard problem is the problem of experience…How can we explain why there is something it is like to entertain a mental image, or to experience an emotion?” (1995: 201) Others have made similar points, as Chalmers acknowledges, but reference to the phrase “the hard problem” has now become commonplace in the literature. Unlike Levine, however, Chalmers is much more inclined to draw anti-materialist metaphysical conclusions from these and other considerations. Chalmers usefully distinguishes the hard problem of consciousness from what he calls the (relatively) “easy problems” of consciousness, such as the ability to discriminate and categorize stimuli, the ability of a cognitive system to access its own internal states, and the difference between wakefulness and sleep. The easy problems generally have more to do with the functions of consciousness, but Chalmers urges that solving them does not touch the hard problem of phenomenal consciousness. Most philosophers, according to Chalmers, are really only addressing the easy problems, perhaps merely with something like Block’s “access consciousness” in mind. Their theories ignore phenomenal consciousness.
There are many responses by materialists to the above charges, but it is worth emphasizing that Levine, at least, does not reject the metaphysics of materialism. Instead, he sees the “explanatory gap [as] primarily an epistemological problem” (2001: 10). That is, it is primarily a problem having to do with knowledge or understanding. This concession is still important at least to the extent that one is concerned with the larger related metaphysical issues discussed in section 3a, such as the possibility of immortality.
Perhaps most important for the materialist, however, is recognition of the fact that different concepts can pick out the same property or object in the world (Loar 1990, 1997). Out in the world there is only the one “stuff,” which we can conceptualize either as “water” or as “H2O.” The traditional distinction, made most notably by Gottlob Frege in the late 19th century, between “meaning” (or “sense”) and “reference” is also relevant here. Two or more concepts, which can have different meanings, can refer to the same property or object, much like “Venus” and “The Morning Star.” Materialists, then, explain that it is essential to distinguish between mental properties and our concepts of those properties. By analogy, there are so-called “phenomenal concepts” which use a phenomenal or “first-person” property to refer to some conscious mental state, such as a sensation of red (Alter and Walter 2007). In contrast, we can also use various concepts couched in physical or neurophysiological terms to refer to that same mental state from the third-person point of view. There is thus but one conscious mental state which can be conceptualized in two different ways: either by employing first-person experiential phenomenal concepts or by employing third-person neurophysiological concepts. It may then just be a “brute fact” about the world that there are such identities, and the appearance of arbitrariness between brain properties and mental properties is just that – an apparent problem leading many to wonder about the alleged explanatory gap. Qualia would then still be identical to physical properties. Moreover, this response provides a diagnosis for why there even seems to be such a gap; namely, that we use very different concepts to pick out the same property.
Science will be able, in principle, to close the gap and solve the hard problem of consciousness in a way analogous to how we now have a very good understanding of why “water is H2O” or “heat is mean molecular kinetic energy,” an understanding that was lacking centuries ago. Maybe the hard problem isn’t so hard after all – it will just take some more time. After all, the science of chemistry didn’t develop overnight, and we are relatively early in the history of neurophysiology and our understanding of phenomenal consciousness. (See Shear 1997 for many more specific responses to the hard problem, but also for Chalmers’ counter-replies.)
There is a pair of very widely discussed, and arguably related, objections to materialism which come from the seminal writings of Thomas Nagel (1974) and Frank Jackson (1982, 1986). These arguments, especially Jackson’s, have come to be known as examples of the “knowledge argument” against materialism, due to their clear emphasis on the epistemological (that is, knowledge related) limitations of materialism. Like Levine, Nagel does not reject the metaphysics of materialism. Jackson had originally intended for his argument to yield a dualistic conclusion, but he no longer holds that view. The general pattern of each argument is to assume that all the physical facts are known about some conscious mind or conscious experience. Yet, the argument goes, not all is known about the mind or experience. It is then inferred that the missing knowledge is non-physical in some sense, which is surely an anti-materialist conclusion in some sense.
Nagel imagines a future where we know everything physical there is to know about some other conscious creature’s mind, such as a bat. However, it seems clear that we would still not know something crucial; namely, “what it is like to be a bat.” It will not do to imagine what it is like for us to be a bat. We would still not know what it is like to be a bat from the bat’s subjective or first-person point of view. The idea, then, is that if we accept the hypothesis that we know all of the physical facts about bat minds, and yet some knowledge about bat minds is left out, then materialism is inherently flawed when it comes to explaining consciousness. Even in an ideal future in which everything physical is known by us, something would still be left out. Jackson’s somewhat similar, but no less influential, argument begins by asking us to imagine a future where a person, Mary, is kept in a black and white room from birth during which time she becomes a brilliant neuroscientist and an expert on color perception. Mary never sees red for example, but she learns all of the physical facts and everything neurophysiologically about human color vision. Eventually she is released from the room and sees red for the first time. Jackson argues that it is clear that Mary comes to learn something new; namely, to use Nagel’s famous phrase, what it is like to experience red. This is a new piece of knowledge and hence she must have come to know some non-physical fact (since, by hypothesis, she already knew all of the physical facts). Thus, not all knowledge about the conscious mind is physical knowledge.
The influence and the quantity of work that these ideas have generated cannot be exaggerated. Numerous materialist responses to Nagel’s argument have been presented (such as Van Gulick 1985), and there is now a very useful anthology devoted entirely to Jackson’s knowledge argument (Ludlow et al. 2004). Some materialists have wondered if we should concede up front that Mary wouldn’t be able to imagine the color red even before leaving the room, so that maybe she wouldn’t even be surprised upon seeing red for the first time. Various suspicions about the nature and effectiveness of such thought experiments also usually accompany this response. More commonly, however, materialists reply by arguing that Mary does not learn a new fact when seeing red for the first time, but rather learns the same fact in a different way. Recalling the distinction made in section 3b.i between concepts and objects or properties, the materialist will urge that there is only the one physical fact about color vision, but there are two ways to come to know it: either by employing neurophysiological concepts or by actually undergoing the relevant experience and so by employing phenomenal concepts. We might say that Mary, upon leaving the black and white room, becomes acquainted with the same neural property as before, but only now from the first-person point of view. The property itself isn’t new; only the perspective, or what philosophers sometimes call the “mode of presentation,” is different. In short, coming to learn or know something new does not entail learning some new fact about the world. Analogies are again given in other less controversial areas; for example, one can come to know about some historical fact or event by reading a (reliable) third-person historical account or by having observed that event oneself. But there is still only the one objective fact under two different descriptions.
Finally, it is crucial to remember that, according to most, the metaphysics of materialism remains unaffected. Drawing a metaphysical conclusion from such purely epistemological premises is always a questionable practice. Nagel’s argument doesn’t show that bat mental states are not identical with bat brain states. Indeed, a materialist might even expect the conclusion that Nagel draws; after all, given that our brains are so different from bat brains, it almost seems natural for there to be certain aspects of bat experience that we could never fully comprehend. Only the bat actually undergoes the relevant brain processes. Similarly, Jackson’s argument doesn’t show that Mary’s color experience is distinct from her brain processes.
Despite the plethora of materialist responses, vigorous debate continues as there are those who still think that something profound must always be missing from any materialist attempt to explain consciousness; namely, that understanding subjective phenomenal consciousness is an inherently first-person activity which cannot be captured by any objective third-person scientific means, no matter how much scientific knowledge is accumulated. Some knowledge about consciousness is essentially limited to first-person knowledge. Such a sense, no doubt, continues to fuel the related anti-materialist intuitions raised in the previous section. Perhaps consciousness is simply a fundamental or irreducible part of nature in some sense (Chalmers 1996). (For more see Van Gulick 1993.)
Finally, some go so far as to argue that we are simply not capable of solving the problem of consciousness (McGinn 1989, 1991, 1995). In short, “mysterians” believe that the hard problem can never be solved because of human cognitive limitations; the explanatory gap can never be filled. Once again, however, McGinn does not reject the metaphysics of materialism, but rather argues that we are “cognitively closed” with respect to this problem much like a rat or dog is cognitively incapable of solving, or even understanding, calculus problems. More specifically, McGinn claims that we are cognitively closed as to how the brain produces conscious awareness. McGinn concedes that some brain property produces conscious experience, but we cannot understand how this is so or even know what that brain property is. Our concept forming mechanisms simply will not allow us to grasp the physical and causal basis of consciousness. We are not conceptually suited to be able to do so.
McGinn does not entirely rest his argument on past failed attempts at explaining consciousness in materialist terms; instead, he presents another argument for his admittedly pessimistic conclusion. McGinn observes that we do not have a mental faculty that can access both consciousness and the brain. We access consciousness through introspection or the first-person perspective, but our access to the brain is through the use of outer spatial senses (e.g., vision) or a more third-person perspective. Thus we have no way to access both the brain and consciousness together, and therefore any explanatory link between them is forever beyond our reach.
Materialist responses are numerous. First, one might wonder why we can’t combine the two perspectives within certain experimental contexts. Both first-person and third-person scientific data about the brain and consciousness can be acquired and used to solve the hard problem. Even if a single person cannot grasp consciousness from both perspectives at the same time, why can’t a plausible physicalist theory emerge from such a combined approach? Presumably, McGinn would say that we are not capable of putting such a theory together in any appropriate way. Second, despite McGinn’s protests to the contrary, many will view the problem of explaining consciousness as a merely temporary limit of our theorizing, and not something which is unsolvable in principle (Dennett 1991). Third, it may be that McGinn expects too much; namely, grasping some causal link between the brain and consciousness. After all, if conscious mental states are simply identical to brain states, then there may simply be a “brute fact” that really does not need any further explaining. Indeed, this is sometimes also said in response to the explanatory gap and the hard problem, as we saw earlier. It may even be that some form of dualism is presupposed in McGinn’s argument, to the extent that brain states are said to “cause” or “give rise to” consciousness, instead of using the language of identity. Fourth, McGinn’s analogy to lower animals and mathematics is not quite accurate. Rats, for example, have no concept whatsoever of calculus. It is not as if they can grasp it to some extent but just haven’t figured out the answer to some particular problem within mathematics. Rats are just completely oblivious to calculus problems. On the other hand, we humans obviously do have some grasp on consciousness and on the workings of the brain -- just see the references at the end of this entry! 
It is not clear, then, why we should accept the extremely pessimistic and universally negative conclusion that we can never discover the answer to the problem of consciousness, or, more specifically, why we could never understand the link between consciousness and the brain.
Unlike many of the above objections to materialism, the appeal to the possibility of zombies is often taken as both a problem for materialism and as a more positive argument for some form of dualism, such as property dualism. The philosophical notion of a “zombie” basically refers to conceivable creatures which are physically indistinguishable from us but lack consciousness entirely (Chalmers 1996). It certainly seems logically possible for there to be such creatures: “the conceivability of zombies seems…obvious to me…While this possibility is probably empirically impossible, it certainly seems that a coherent situation is described; I can discern no contradiction in the description” (Chalmers 1996: 96). Philosophers often contrast what is logically possible (in the sense of “that which is not self-contradictory”) with what is empirically possible given the actual laws of nature. Thus, it is logically possible for me to jump fifty feet in the air, but not empirically possible. Philosophers often use the notion of “possible worlds,” i.e., different ways that the world might have been, in describing such non-actual situations or possibilities. The objection, then, typically proceeds from such a possibility to the conclusion that materialism is false because materialism would seem to rule out that possibility. It has been fairly widely accepted (since Kripke 1972) that true identity statements are necessarily true (that is, true in all possible worlds), and the same should therefore go for mind-brain identity claims. Since the possibility of zombies shows that mind-brain identity claims are not necessarily true, we should conclude that they are not true at all, and hence that materialism is false. (See Identity Theory.)
It is impossible to do justice to all of the subtleties here. The literature in response to zombie, and related “conceivability,” arguments is enormous (see, for example, Hill 1997, Hill and McLaughlin 1999, Papineau 1998, 2002, Balog 1999, Block and Stalnaker 1999, Loar 1999, Yablo 1999, Perry 2001, Botterell 2001, Kirk 2005). A few lines of reply are as follows: First, it is sometimes objected that the conceivability of something does not really entail its possibility. Perhaps we can also conceive of water not being H2O, since there seems to be no logical contradiction in doing so, but, according to received wisdom from Kripke, that is really impossible. Perhaps, then, some things just seem possible but really aren’t. Much of the debate centers on various alleged similarities or dissimilarities between the mind-brain and water-H2O cases (or other such scientific identities). Indeed, the entire issue of the exact relationship between “conceivability” and “possibility” is the subject of an important recently published anthology (Gendler and Hawthorne 2002). Second, even if zombies are conceivable in the sense of logically possible, how can we draw a substantial metaphysical conclusion about the actual world? There is often suspicion on the part of materialists about what, if anything, such philosophers’ “thought experiments” can teach us about the nature of our minds. It seems that one could take virtually any philosophical or scientific theory about almost anything, conceive that it is possibly false, and then conclude that it is actually false. Something, perhaps, is generally wrong with this way of reasoning. Third, as we saw earlier (3b.i), there may be a very good reason why such zombie scenarios seem possible; namely, that we do not (at least, not yet) see what the necessary connection is between neural events and conscious mental events. 
On the one side, we are dealing with scientific third-person concepts and, on the other, we are employing phenomenal concepts. We are, perhaps, simply currently not in a position to understand completely such a necessary connection.
Debate and discussion on all four objections remains very active.
Despite the apparent simplicity of materialism, say, in terms of the identity between mental states and neural states, the fact is that there are many different forms of materialism. While a detailed survey of all varieties is beyond the scope of this entry, it is at least important to acknowledge the commonly drawn distinction between two kinds of “identity theory”: token-token and type-type materialism. Type-type identity theory is the stronger thesis and says that mental properties, such as “having a desire to drink some water” or “being in pain,” are literally identical with a brain property of some kind. Such identities were originally meant to be understood as on a par with, for example, the scientific identity between “being water” and “being composed of H2O” (Place 1956, Smart 1959). However, this view historically came under serious assault due to the fact that it seems to rule out the so-called “multiple realizability” of conscious mental states. The idea is simply that it seems perfectly possible for there to be other conscious beings (e.g., aliens, radically different animals) who can have those same mental states but who also are radically different from us physiologically (Fodor 1974). It seems that commitment to type-type identity theory led to the undesirable result that only organisms with brains like ours can have conscious states. Somewhat more technically, most materialists wish to leave room for the possibility that mental properties can be “instantiated” in different kinds of organisms. (But for more recent defenses of type-type identity theory see Hill and McLaughlin 1999, Papineau 1994, 1995, 1998, Polger 2004.) As a consequence, a more modest “token-token” identity theory has come to be preferred by many materialists. This view simply holds that each particular conscious mental event in some organism is identical with some particular brain process or event in that organism.
This seems to preserve much of what the materialist wants but yet allows for the multiple realizability of conscious states, because both the human and the alien can still have a conscious desire for something to drink while each mental event is identical with a (different) physical state in each organism.
Taking the notion of multiple realizability very seriously has also led many to embrace functionalism, which is the view that conscious mental states should really only be identified with the functional role they play within an organism. For example, conscious pains are defined more in terms of input and output, such as causing bodily damage and avoidance behavior, as well as in terms of their relationship to other mental states. It is normally viewed as a form of materialism since virtually all functionalists also believe, like the token-token theorist, that something physical ultimately realizes that functional state in the organism, but functionalism does not, by itself, entail that materialism is true. Critics of functionalism, however, have long argued that such purely functional accounts cannot adequately explain the essential “feel” of conscious states, or that it seems possible to have two functionally equivalent creatures, one of whom lacks qualia entirely (Block 1980a, 1980b, Chalmers 1996; see also Shoemaker 1975, 1981).
Some materialists even deny the very existence of mind and mental states altogether, at least in the sense that the very concept of consciousness is muddled (Wilkes 1984, 1988) or that the mentalistic notions found in folk psychology, such as desires and beliefs, will eventually be eliminated and replaced by physicalistic terms as neurophysiology matures into the future (Churchland 1983). This is meant as analogous to past similar eliminations based on deeper scientific understanding, for example, we no longer need to speak of “ether” or “phlogiston.” Other eliminativists, more modestly, argue that there is no such thing as qualia when they are defined in certain problematic ways (Dennett 1988).
Finally, it should also be noted that not all materialists believe that conscious mentality can be explained in terms of the physical, at least in the sense that the former cannot be “reduced” to the latter. Materialism is true as an ontological or metaphysical doctrine, but facts about the mind cannot be deduced from facts about the physical world (Boyd 1980, Van Gulick 1992). In some ways, this might be viewed as a relatively harmless variation on materialist themes, but others object to the very coherence of this form of materialism (Kim 1987, 1998). Indeed, the line between such “non-reductive materialism” and property dualism is not always so easy to draw; partly because the entire notion of “reduction” is ambiguous and a very complex topic in its own right. On a related front, some materialists are happy enough to talk about a somewhat weaker “supervenience” relation between mind and matter. Although “supervenience” is a highly technical notion with many variations, the idea is basically one of dependence (instead of identity); for example, that the mental depends on the physical in the sense that any mental change must be accompanied by some physical change (see Kim 1993).
Most specific theories of consciousness tend to be reductionist in some sense. The classic notion at work is that consciousness or individual conscious mental states can be explained in terms of something else or in some other terms. This section will focus on several prominent contemporary reductionist theories. We should, however, distinguish between those who attempt such a reduction directly in physicalistic, such as neurophysiological, terms and those who do so in mentalistic terms, such as by using unconscious mental states or other cognitive notions.
The more direct reductionist approach can be seen in various, more specific, neural theories of consciousness. Perhaps best known is the theory offered by Francis Crick and Christof Koch 1990 (see also Crick 1994, Koch 2004). The basic idea is that mental states become conscious when large numbers of neurons fire in synchrony and all have oscillations within the 35-75 hertz range (that is, 35-75 cycles per second). However, many philosophers and scientists have put forth other candidates for what, specifically, to identify in the brain with consciousness. This vast enterprise has come to be known as the search for the “neural correlates of consciousness” or NCCs (see section 5b below for more). The overall idea is to show how one or more specific kinds of neuro-chemical activity can underlie and explain conscious mental activity (Metzinger 2000). Of course, mere “correlation” is not enough for a fully adequate neural theory and explaining just what counts as a NCC turns out to be more difficult than one might think (Chalmers 2000). Even Crick and Koch have acknowledged that they, at best, provide a necessary condition for consciousness, and that such firing patterns are not automatically sufficient for having conscious experience.
Many current theories attempt to reduce consciousness in mentalistic terms. One broadly popular approach along these lines is to reduce consciousness to “mental representations” of some kind. The notion of a “representation” is of course very general and can be applied to photographs, signs, and various natural objects, such as the rings inside a tree. Much of what goes on in the brain, however, might also be understood in a representational way; for example, as mental events representing outer objects partly because they are caused by such objects in, say, cases of veridical visual perception. More specifically, philosophers will often call such representational mental states “intentional states” which have representational content; that is, mental states which are “about something” or “directed at something” as when one has a thought about the house or a perception of the tree. Although intentional states are sometimes contrasted with phenomenal states, such as pains and color experiences, it is clear that many conscious states, such as visual perceptions, have both phenomenal and intentional properties. It should be noted that the relation between intentionality and consciousness is itself a major ongoing area of dispute, with some arguing that genuine intentionality actually presupposes consciousness in some way (Searle 1992, Siewert 1998, Horgan and Tienson 2002) while most representationalists insist that intentionality is prior to consciousness (Gennaro 2012, chapter two).
The general view that we can explain conscious mental states in terms of representational or intentional states is called “representationalism.” Although not automatically reductionist in spirit, most versions of representationalism do indeed attempt such a reduction. Most representationalists, then, believe that there is room for a kind of “second-step” reduction to be filled in later by neuroscience. The other related motivation for representational theories of consciousness is that many believe that an account of representation or intentionality can more easily be given in naturalistic terms, such as causal theories whereby mental states are understood as representing outer objects in virtue of some reliable causal connection. The idea, then, is that if consciousness can be explained in representational terms and representation can be understood in purely physical terms, then there is the promise of a reductionist and naturalistic theory of consciousness. Most generally, however, we can say that a representationalist will typically hold that the phenomenal properties of experience (that is, the “qualia” or “what it is like of experience” or “phenomenal character”) can be explained in terms of the experiences’ representational properties. Alternatively, conscious mental states have no mental properties other than their representational properties. Two conscious states with all the same representational properties will not differ phenomenally. For example, when I look at the blue sky, what it is like for me to have a conscious experience of the sky is simply identical with my experience’s representation of the blue sky.
A First-order representational (FOR) theory of consciousness is a theory that attempts to explain conscious experience primarily in terms of world-directed (or first-order) intentional states. Probably the two most cited FOR theories of consciousness are those of Fred Dretske (1995) and Michael Tye (1995, 2000), though there are many others as well (e.g., Harman 1990, Kirk 1994, Byrne 2001, Thau 2002, Droege 2003). Tye’s theory is more fully worked out and so will be the focus of this section. Like other FOR theorists, Tye holds that the representational content of my conscious experience (that is, what my experience is about or directed at) is identical with the phenomenal properties of experience. Aside from reductionistic motivations, Tye and other FOR representationalists often use the somewhat technical notion of the “transparency of experience” as support for their view (Harman 1990). This is an argument based on the phenomenological first-person observation, which goes back to Moore (1903), that when one turns one’s attention away from, say, the blue sky and onto one’s experience itself, one is still only aware of the blueness of the sky. The experience itself is not blue; rather, one “sees right through” one’s experience to its representational properties, and there is nothing else to one’s experience over and above such properties.
Whatever the merits and exact nature of the argument from transparency (see Kind 2003), it is clear, of course, that not all mental representations are conscious, so the key question eventually becomes: What exactly distinguishes conscious from unconscious mental states (or representations)? What makes a mental state a conscious mental state? Here Tye defends what he calls “PANIC theory.” The acronym “PANIC” stands for poised, abstract, non-conceptual, intentional content. Without probing into every aspect of PANIC theory, Tye holds that at least some of the representational content in question is non-conceptual (N), which is to say that the subject can lack the concept for the properties represented by the experience in question, such as an experience of a certain shade of red that one has never seen before. Actually, the exact nature or even existence of non-conceptual content of experience is itself a highly debated and difficult issue in philosophy of mind (Gunther 2003). Gennaro (2012), for example, defends conceptualism and connects it in various ways to the higher-order thought theory of consciousness (see section 4b.ii). Conscious states clearly must also have “intentional content” (IC) for any representationalist. Tye also asserts that such content is “abstract” (A) and not necessarily about particular concrete objects. This condition is needed to handle cases of hallucinations, where there are no concrete objects at all or cases where different objects look phenomenally alike. Perhaps most important for mental states to be conscious, however, is that such content must be “poised” (P), which is an importantly functional notion. The “key idea is that experiences and feelings...stand ready and available to make a direct impact on beliefs and/or desires. 
For example…feeling hungry… has an immediate cognitive effect, namely, the desire to eat….States with nonconceptual content that are not so poised lack phenomenal character [because]…they arise too early, as it were, in the information processing” (Tye 2000: 62).
One objection to Tye’s theory is that it does not really address the hard problem of phenomenal consciousness (see section 3b.i). This is partly because what really seems to be doing most of the work on Tye’s PANIC account is the very functional sounding “poised” notion, which is perhaps closer to Block’s access consciousness (see section 1) and is therefore not necessarily able to explain phenomenal consciousness (see Kriegel 2002). In short, it is difficult to see just how Tye’s PANIC account might not equally apply to unconscious representations and thus how it really explains phenomenal consciousness.
Other standard objections to Tye’s theory as well as to other FOR accounts include the concern that it does not cover all kinds of conscious states. Some conscious states seem not to be “about” anything, such as pains, anxiety, or after-images, and so would be non-representational conscious states. If so, then conscious experience cannot generally be explained in terms of representational properties (Block 1996). Tye responds that pains, itches, and the like do represent, in the sense that they represent parts of the body. And after-images, hallucinations, and the like either misrepresent (which is still a kind of representation) or the conscious subject still takes them to have representational properties from the first-person point of view. Indeed, Tye (2000) admirably goes to great lengths and argues convincingly in response to a whole host of alleged counter-examples to representationalism. Among them, historically, are various hypothetical cases of inverted qualia (see Shoemaker 1982), the mere possibility of which is sometimes taken as devastating to representationalism. These are cases where behaviorally indistinguishable individuals have inverted color perceptions of objects, such that person A visually experiences a lemon the way that person B experiences a ripe tomato with respect to their color, and so on for all yellow and red objects. Isn’t it possible that there are two individuals whose color experiences are inverted with respect to the objects of perception? (For more on the importance of color in philosophy, see Hardin 1986.)
A somewhat different twist on the inverted spectrum is famously put forth in Block’s (1990) Inverted Earth case. On Inverted Earth every object has the complementary color to the one it has here, but we are asked to imagine that a person is equipped with color-inverting lenses and then sent to Inverted Earth completely ignorant of those facts. Since the color inversions cancel out, the phenomenal experiences remain the same, yet there certainly seem to be different representational properties of objects involved. The strategy on the part of critics, in short, is to think of counter-examples (either actual or hypothetical) whereby there is a difference between the phenomenal properties in experience and the relevant representational properties in the world. Such objections can, perhaps, be answered by Tye and others in various ways, but significant debate continues (Macpherson 2005). Intuitions also dramatically differ as to the very plausibility and value of such thought experiments. (For more, see Seager 1999, chapters 6 and 7. See also Chalmers 2004 for an excellent discussion of the dizzying array of possible representationalist positions.)
As we have seen, one question that should be answered by any theory of consciousness is: What makes a mental state a conscious mental state? There is a long tradition that has attempted to understand consciousness in terms of some kind of higher-order awareness. For example, John Locke (1689/1975) once said that “consciousness is the perception of what passes in a man’s own mind.” This intuition has been revived by a number of philosophers (Rosenthal, 1986, 1993b, 1997, 2000, 2004, 2005; Gennaro 1996a, 2012; Armstrong, 1968, 1981; Lycan, 1996, 2001). In general, the idea is that what makes a mental state conscious is that it is the object of some kind of higher-order representation (HOR). A mental state M becomes conscious when there is a HOR of M. A HOR is a “meta-psychological” state, i.e., a mental state directed at another mental state. So, for example, my desire to write a good encyclopedia entry becomes conscious when I am (non-inferentially) “aware” of the desire. Intuitively, it seems that conscious states, as opposed to unconscious ones, are mental states that I am “aware of” in some sense. This is sometimes referred to as the Transitivity Principle. Any theory which attempts to explain consciousness in terms of higher-order states is known as a higher-order (HO) theory of consciousness. It is best initially to use the more neutral term “representation” because there are a number of different kinds of higher-order theory, depending upon how one characterizes the HOR in question. HO theories, thus, attempt to explain consciousness in mentalistic terms, that is, by reference to such notions as “thoughts” and “awareness.” Conscious mental states arise when two unconscious mental states are related in a certain specific way; namely, that one of them (the HOR) is directed at the other (M). 
HO theorists are united in the belief that their approach can better explain consciousness than any purely FOR theory, which has significant difficulty in explaining the difference between unconscious and conscious mental states.
There are various kinds of HO theory with the most common division between higher-order thought (HOT) theories and higher-order perception (HOP) theories. HOT theorists, such as David M. Rosenthal, think it is better to understand the HOR as a thought of some kind. HOTs are treated as cognitive states involving some kind of conceptual component. HOP theorists urge that the HOR is a perceptual or experiential state of some kind (Lycan 1996) which does not require the kind of conceptual content invoked by HOT theorists. Partly due to Kant (1781/1965), HOP theory is sometimes referred to as “inner sense theory” as a way of emphasizing its sensory or perceptual aspect. Although HOT and HOP theorists agree on the need for a HOR theory of consciousness, they do sometimes argue for the superiority of their respective positions (such as in Rosenthal 2004, Lycan 2004, and Gennaro 2012). Some philosophers, however, have argued that the difference between these theories is perhaps not as important or as clear as some think it is (Güzeldere 1995, Gennaro 1996a, Van Gulick 2000).
A common initial objection to HOR theories is that they are circular and lead to an infinite regress. It might seem that the HOT theory results in circularity by defining consciousness in terms of HOTs. It also might seem that an infinite regress results because a conscious mental state must be accompanied by a HOT, which, in turn, must be accompanied by another HOT ad infinitum. However, the standard reply is that when a conscious mental state is a first-order world-directed state the higher-order thought (HOT) is not itself conscious; otherwise, circularity and an infinite regress would follow. When the HOT is itself conscious, there is a yet higher-order (or third-order) thought directed at the second-order state. In this case, we have introspection which involves a conscious HOT directed at an inner mental state. When one introspects, one's attention is directed back into one's mind. For example, what makes my desire to write a good entry a conscious first-order desire is that there is a (non-conscious) HOT directed at the desire. In this case, my conscious focus is directed at the entry and my computer screen, so I am not consciously aware of having the HOT from the first-person point of view. When I introspect that desire, however, I then have a conscious HOT (accompanied by a yet higher, third-order, HOT) directed at the desire itself (see Rosenthal 1986).
Peter Carruthers (2000) has proposed another possibility within HO theory; namely, that it is better for various reasons to think of the HOTs as dispositional states instead of the standard view that the HOTs are actual, though he also understands his “dispositional HOT theory” to be a form of HOP theory (Carruthers 2004). The basic idea is that the conscious status of an experience is due to its availability to higher-order thought. So “conscious experience occurs when perceptual contents are fed into a special short-term buffer memory store, whose function is to make those contents available to cause HOTs about themselves.” (Carruthers 2000: 228). Some first-order perceptual contents are available to a higher-order “theory of mind mechanism,” which transforms those representational contents into conscious contents. Thus, no actual HOT occurs. Instead, according to Carruthers, some perceptual states acquire a dual intentional content; for example, a conscious experience of red not only has a first-order content of “red,” but also has the higher-order content “seems red” or “experience of red.” Carruthers also makes interesting use of so-called “consumer semantics” in order to fill out his theory of phenomenal consciousness. The content of a mental state depends, in part, on the powers of the organisms which “consume” that state, e.g., the kinds of inferences which the organism can make when it is in that state. Daniel Dennett (1991) is sometimes credited with an earlier version of a dispositional account (see Carruthers 2000, chapter ten). Carruthers’ dispositional theory is often criticized by those who, among other things, do not see how the mere disposition toward a mental state can render it conscious (Rosenthal 2004; see also Gennaro 2004, 2012; for more, see Consciousness, Higher Order Theories of.)
It is worth briefly noting a few typical objections to HO theories (many of which can be found in Byrne 1997): First, and perhaps most common, is that various animals (and even infants) are not likely to have the conceptual sophistication required for HOTs, and so that would render animal (and infant) consciousness very unlikely (Dretske 1995, Seager 2004). Are cats and dogs capable of having complex higher-order thoughts such as “I am in mental state M”? Although most who bring forth this objection are not HO theorists, Peter Carruthers (1989) is one HO theorist who actually embraces the conclusion that (most) animals do not have phenomenal consciousness. Gennaro (1993, 1996) has replied to Carruthers on this point; for example, it is argued that the HOTs need not be as sophisticated as it might initially appear and there is ample comparative neurophysiological evidence supporting the conclusion that animals have conscious mental states. Most HO theorists do not wish to accept the absence of animal or infant consciousness as a consequence of holding the theory. The debate continues, however, in Carruthers (2000, 2005, 2008) and Gennaro (2004, 2009, 2012, chapters seven and eight).
A second objection has been referred to as the “problem of the rock” (Stubenberg 1998) and the “generality problem” (Van Gulick 2000, 2004), but it is originally due to Alvin Goldman (Goldman 1993). When I have a thought about a rock, it is certainly not true that the rock becomes conscious. So why should I suppose that a mental state becomes conscious when I think about it? This is puzzling to many and the objection forces HO theorists to explain just how adding the HO state changes an unconscious state into a conscious one. There have been, however, a number of responses to this kind of objection (Rosenthal 1997, Lycan, 1996, Van Gulick 2000, 2004, Gennaro 2005, 2012, chapter four). A common theme is that there is a principled difference in the objects of the HO states in question. Rocks and the like are not mental states in the first place, and so HO theorists are first and foremost trying to explain how a mental state becomes conscious. The objects of the HO states must be “in the head.”
Third, the above leads somewhat naturally to an objection related to Chalmers’ hard problem (section 3b.i). It might be asked just how exactly any HO theory really explains the subjective or phenomenal aspect of conscious experience. How or why does a mental state come to have a first-person qualitative “what it is like” aspect by virtue of the presence of a HOR directed at it? It is probably fair to say that HO theorists have been slow to address this problem, though a number of overlapping responses have emerged (see also Gennaro 2005, 2012, chapter four, for more extensive treatment). Some argue that this objection misconstrues the main and more modest purpose of (at least, their) HO theories. The claim is that HO theories are theories of consciousness only in the sense that they are attempting to explain what differentiates conscious from unconscious states, i.e., in terms of a higher-order awareness of some kind. A full account of “qualitative properties” or “sensory qualities” (which can themselves be non-conscious) can be found elsewhere in their work, but is independent of their theory of consciousness (Rosenthal 1991, Lycan 1996, 2001). Thus, a full explanation of phenomenal consciousness does require more than a HO theory, but that is no objection to HO theories as such. Another response is that proponents of the hard problem unjustly raise the bar as to what would count as a viable explanation of consciousness so that any such reductivist attempt would inevitably fall short (Carruthers 2000, Gennaro 2012). Part of the problem, then, is a lack of clarity about what would even count as an explanation of consciousness (Van Gulick 1995; see also section 3b). Once this is clarified, however, the hard problem can indeed be solved. 
Moreover, anyone familiar with the literature knows that there are significant terminological difficulties in the use of various crucial terms which sometimes inhibits genuine progress (but see Byrne 2004 for some helpful clarification).
A fourth important objection to HO approaches is the question of how such theories can explain cases where the HO state might misrepresent the lower-order (LO) mental state (Byrne 1997, Neander 1998, Levine 2001, Block 2011). After all, if we have a representational relation between two states, it seems possible for misrepresentation or malfunction to occur. If it does, then what explanation can be offered by the HO theorist? If my LO state registers a red percept and my HO state registers a thought about something green due, say, to some neural misfiring, then what happens? It seems that problems loom for any answer given by a HO theorist and the cause of the problem has to do with the very nature of the HO theorist’s belief that there is a representational relation between the LO and HO states. For example, if the HO theorist takes the option that the resulting conscious experience is reddish, then it seems that the HO state plays no role in determining the qualitative character of the experience. On the other hand, if the resulting experience is greenish, then the LO state seems irrelevant. Rosenthal and Weisberg hold that the HO state determines the qualitative properties even in cases when there is no LO state at all (Rosenthal 2005, 2011, Weisberg 2008, 2011a, 2011b). Gennaro (2012) argues that no conscious experience results in such cases and wonders, for example, how a sole (unconscious) HOT can result in a conscious state at all. He argues that there must be a match, complete or partial, between the LO and HO state in order for a conscious state to exist in the first place. This important objection forces HO theorists to be clearer about just how to view the relationship between the LO and HO states. Debate is ongoing and significant both on varieties of HO theory and in terms of the above objections (see Gennaro 2004a). There is also interdisciplinary interest in how various HO theories might be realized in the brain (Gennaro 2012, chapter nine).
A related and increasingly popular version of representational theory holds that the meta-psychological state in question should be understood as intrinsic to (or part of) an overall complex conscious state. This stands in contrast to the standard view that the HO state is extrinsic to (that is, entirely distinct from) its target mental state. The assumption, made by Rosenthal for example, about the extrinsic nature of the meta-thought has increasingly come under attack, and thus various hybrid representational theories can be found in the literature. One motivation for this movement is growing dissatisfaction with standard HO theory’s ability to handle some of the objections addressed in the previous section. Another reason is renewed interest in a view somewhat closer to the one held by Franz Brentano (1874/1973) and various other followers, normally associated with the phenomenological tradition (Husserl 1913/1931, 1929/1960; Sartre 1956; see also Smith 1986, 2004). To varying degrees, these views have in common the idea that conscious mental states, in some sense, represent themselves, which then still involves having a thought about a mental state, just not a distinct or separate state. Thus, when one has a conscious desire for a cold glass of water, one is also aware that one is in that very state. The conscious desire both represents the glass of water and itself. It is this “self-representing” which makes the state conscious.
These theories can go by various names, which sometimes seem in conflict, and have added significantly in recent years to the acronyms which abound in the literature. For example, Gennaro (1996a, 2002, 2004, 2006, 2012) has argued that, when one has a first-order conscious state, the HOT is better viewed as intrinsic to the target state, so that we have a complex conscious state with parts. Gennaro calls this the “wide intrinsicality view” (WIV) and he also argues that Jean-Paul Sartre’s theory of consciousness can be understood in this way (Gennaro 2002). Gennaro holds that conscious mental states should be understood (as Kant might have today) as global brain states which are combinations of passively received perceptual input and presupposed higher-order conceptual activity directed at that input. Higher-order concepts in the meta-psychological thoughts are presupposed in having first-order conscious states. Robert Van Gulick (2000, 2004, 2006) has also explored the alternative that the HO state is part of an overall global conscious state. He calls such states “HOGS” (Higher-Order Global States) whereby a lower-order unconscious state is “recruited” into a larger state, which becomes conscious partly due to the implicit self-awareness that one is in the lower-order state. Both Gennaro and Van Gulick have suggested that conscious states can be understood materialistically as global states of the brain, and it would be better to treat the first-order state as part of the larger complex brain state. This general approach is also forcefully advocated by Uriah Kriegel (Kriegel 2003a, 2003b, 2005, 2006, 2009) and is even the subject of an entire anthology debating its merits (Kriegel and Williford 2006). 
Kriegel has used several different names for his “neo-Brentanian theory,” such as the SOMT (Same-Order Monitoring Theory) and, more recently, the “self-representational theory of consciousness.” To be sure, the notion of a mental state representing itself or a mental state with one part representing another part is in need of further development and is perhaps somewhat mysterious. Nonetheless, there is agreement among these authors that conscious mental states are, in some important sense, reflexive or self-directed. And, once again, there is keen interest in developing this model in a way that coheres with the latest neurophysiological research on consciousness. A point of emphasis is on the concept of global meta-representation within a complex brain state, and attempts are underway to identify just how such an account can be realized in the brain.
It is worth mentioning that this idea was also briefly explored by Thomas Metzinger who focused on the fact that consciousness “is something that unifies or synthesizes experience” (Metzinger 1995: 454). Metzinger calls this the process of “higher-order binding” and thus uses the acronym HOB. Others who hold some form of the self-representational view include Kobes (1995), Caston (2002), Williford (2006), Brook and Raymont (2006), and even Carruthers’ (2000) theory can be viewed in this light since he contends that conscious states have two representational contents. Thomas Natsoulas also has a series of papers defending a similar view, beginning with Natsoulas 1996. Some authors (such as Gennaro 2012) view this hybrid position to be a modified version of HOT theory; indeed, Rosenthal (2004) has called it “intrinsic higher-order theory.” Van Gulick also clearly wishes to preserve the HO in his HOGS. Others, such as Kriegel, are not inclined to call their views “higher-order” at all and call it, for example, the “same-order monitoring” or “self-representational” theory of consciousness. To some extent, this is a terminological dispute, but, despite important similarities, there are also key subtle differences between these hybrid alternatives. Like HO theorists, however, those who advocate this general approach all take very seriously the notion that a conscious mental state M is a state that subject S is (non-inferentially) aware that S is in. By contrast, one is obviously not aware of one’s unconscious mental states. Thus, there are various attempts to make sense of and elaborate upon this key intuition in a way that is, as it were, “in-between” standard FO and HO theory. (See also Lurz 2003 and 2004 for yet another interesting hybrid account.)
Aside from the explicitly representational approaches discussed above, there are also related attempts to explain consciousness in other cognitive terms. The two most prominent such theories are worth describing here:
Daniel Dennett (1991, 2005) has put forth what he calls the Multiple Drafts Model (MDM) of consciousness. Although similar in some ways to representationalism, Dennett is most concerned that materialists avoid falling prey to what he calls the “myth of the Cartesian theater,” the notion that there is some privileged place in the brain where everything comes together to produce conscious experience. Instead, the MDM holds that all kinds of mental activity occur in the brain by parallel processes of interpretation, all of which are under frequent revision. The MDM rejects the idea of some “self” as an inner observer; rather, the self is the product or construction of a narrative which emerges over time. Dennett is also well known for rejecting the very assumption that there is a clear line to be drawn between conscious and unconscious mental states in terms of the problematic notion of “qualia.” He influentially rejects strong emphasis on any phenomenological or first-person approach to investigating consciousness, advocating instead what he calls “heterophenomenology” according to which we should follow a more neutral path “leading from objective physical science and its insistence on the third person point of view, to a method of phenomenological description that can (in principle) do justice to the most private and ineffable subjective experiences.” (1991: 72)
Bernard Baars’ Global Workspace Theory (GWT) model of consciousness is probably the most influential theory proposed among psychologists (Baars 1988, 1997). The basic idea and metaphor is that we should think of the entire cognitive system as built on a “blackboard architecture” which is a kind of global workspace. According to GWT, unconscious processes and mental states compete for the spotlight of attention, from which information is “broadcast globally” throughout the system. Consciousness consists in such global broadcasting and is therefore also, according to Baars, an important functional and biological adaptation. We might say that consciousness is thus created by a kind of global access to select bits of information in the brain and nervous system. Despite Baars’ frequent use of “theater” and “spotlight” metaphors, he argues that his view does not entail the presence of the material Cartesian theater that Dennett is so concerned to avoid. It is, in any case, an empirical matter just how the brain performs the functions he describes, such as detecting mechanisms of attention.
Objections to these cognitive theories include the charge that they do not really address the hard problem of consciousness (as described in section 3b.i), but only the “easy” problems. Dennett is also often accused of explaining away consciousness rather than really explaining it. It is also interesting to think about Baars’ GWT in light of Block’s distinction between access and phenomenal consciousness (see section 1). Does Baars’ theory only address access consciousness instead of the more difficult to explain phenomenal consciousness? (Two other psychological cognitive theories worth noting are the ones proposed by George Mandler 1975 and Tim Shallice 1988.)
Finally, there are those who look deep beneath the neural level to the field of quantum mechanics, basically the study of sub-atomic particles, to find the key to unlocking the mysteries of consciousness. The bizarre world of quantum physics is quite different from the deterministic world of classical physics, and a major area of research in its own right. Such authors place the locus of consciousness at a very fundamental physical level. This somewhat radical, though exciting, option is explored most notably by physicist Roger Penrose (1989, 1994) and anesthesiologist Stuart Hameroff (1998). The basic idea is that consciousness arises through quantum effects which occur in subcellular neural structures known as microtubules, which are protein structures in the cell’s cytoskeleton. There are also other quantum approaches which aim to explain the coherence of consciousness (Marshall and Zohar 1990) or use the “holistic” nature of quantum mechanics to explain consciousness (Silberstein 1998, 2001). It is difficult to assess these somewhat exotic approaches at present. Given the puzzling and often very counterintuitive nature of quantum physics, it is unclear whether such approaches will prove genuinely scientifically valuable methods in explaining consciousness. One concern is simply that these authors are trying to explain one puzzling phenomenon (consciousness) in terms of another mysterious natural phenomenon (quantum effects). Thus, the thinking seems to go, perhaps the two are essentially related somehow and other physicalistic accounts are looking in the wrong place, such as at the neuro-chemical level. Although many attempts to explain consciousness often rely on conjecture or speculation, quantum approaches may indeed lead the field along these lines. Of course, this doesn’t mean that some such theory isn’t correct.
One exciting aspect of this approach is the resulting interdisciplinary interest it has generated among physicists and other scientists in the problem of consciousness.
Over the past two decades there has been an explosion of interdisciplinary work in the science of consciousness. Some of the credit must go to the groundbreaking 1986 book by Patricia Churchland entitled Neurophilosophy. In this section, three of the most important such areas are addressed.
Conscious experience seems to be “unified” in an important sense; this crucial feature of consciousness played an important role in the philosophy of Kant who argued that unified conscious experience must be the product of the (presupposed) synthesizing work of the mind. Getting clear about exactly what is meant by the “unity of consciousness” and explaining how the brain achieves such unity has become a central topic in the study of consciousness. There are many different senses of “unity” (see Tye 2003; Bayne and Chalmers 2003, Dainton 2000, 2008, Bayne 2010), but perhaps most common is the notion that, from the first-person point of view, we experience the world in an integrated way and as a single phenomenal field of experience. (For an important anthology on the subject, see Cleeremans 2003.) However, when one looks at how the brain processes information, one only sees discrete regions of the cortex processing separate aspects of perceptual objects. Even different aspects of the same object, such as its color and shape, are processed in different parts of the brain. Given that there is no “Cartesian theater” in the brain where all this information comes together, the problem arises as to just how the resulting conscious experience is unified. What mechanisms allow us to experience the world in such a unified way? What happens when this unity breaks down, as in various pathological cases? The “problem of integrating the information processed by different regions of the brain is known as the binding problem” (Cleeremans 2003: 1). Thus, the so-called “binding problem” is inextricably linked to explaining the unity of consciousness. As was seen earlier with neural theories (section 4a) and as will be seen below on the neural correlates of consciousness (5b), some attempts to solve the binding problem have to do with trying to isolate the precise brain mechanisms responsible for consciousness. 
For example, Crick and Koch’s (1990) idea that synchronous neural firings are (at least) necessary for consciousness can also be viewed as an attempt to explain how disparate neural networks bind together separate pieces of information to produce unified subjective conscious experience. Perhaps the binding problem and the hard problem of consciousness (section 3b.i) are very closely connected. If the binding problem can be solved, then we arguably have identified the elusive neural correlate of consciousness and have, therefore, perhaps even solved the hard problem. In addition, perhaps the explanatory gap between third-person scientific knowledge and first-person unified conscious experience can also be bridged. Thus, this exciting area of inquiry is central to some of the deepest questions in the philosophical and scientific exploration of consciousness.
As was seen earlier in discussing neural theories of consciousness (section 4a), the search for the so-called “neural correlates of consciousness” (NCCs) is a major preoccupation of philosophers and scientists alike (Metzinger 2000). Narrowing down the precise brain property responsible for consciousness is a different and far more difficult enterprise than merely holding a generic belief in some form of materialism. One leading candidate is offered by Francis Crick and Christof Koch 1990 (see also Crick 1994, Koch 2004). The basic idea is that mental states become conscious when large numbers of neurons all fire in synchrony with one another (oscillations in the 35-75 hertz range, that is, 35-75 cycles per second). Currently, one method used is simply to study some aspect of neural functioning with sophisticated detecting equipment (such as MRIs and PET scans) and then correlate it with first-person reports of conscious experience. Another method is to study the difference in brain activity between those under anesthesia and those not under any such influence. A detailed survey would be impossible to give here, but a number of other candidates for the NCC have emerged over the past two decades, including reentrant cortical feedback loops in the neural circuitry throughout the brain (Edelman 1989, Edelman and Tononi 2000), NMDA-mediated transient neural assemblies (Flohr 1995), and emotive somatosensory homeostatic processes in the frontal lobe (Damasio 1999). To elaborate briefly on Flohr’s theory, the idea is that anesthetics destroy conscious mental activity because they interfere with the functioning of NMDA synapses between neurons, which are those that are dependent on N-methyl-D-aspartate receptors. These and other NCCs are explored at length in Metzinger (2000). Ongoing investigation along these lines remains an important aspect of current scientific research in the field.
One problem with some of the above candidates is determining exactly how they are related to consciousness. For example, although a case can be made that some of them are necessary for conscious mentality, it is unclear that they are sufficient. That is, some of the above seem to occur unconsciously as well. And pinning down a narrow enough necessary condition is not as easy as it might seem. Another general worry is with the very use of the term “correlate.” As any philosopher, scientist, and even undergraduate student should know, saying that “A is correlated with B” is rather weak (though it is an important first step), especially if one wishes to establish the stronger identity claim between consciousness and neural activity. Even if such a correlation can be established, we cannot automatically conclude that there is an identity relation. Perhaps A causes B or B causes A, and that’s why we find the correlation. Even most dualists can accept such interpretations. Maybe there is some other neural process C which causes both A and B. “Correlation” is not even the same as “cause,” let alone enough to establish “identity.” Finally, some NCCs are not even necessarily put forth as candidates for all conscious states, but rather for certain specific kinds of consciousness (e.g., visual).
Philosophers have long been intrigued by disorders of the mind and consciousness. Part of the interest is presumably that if we can understand how consciousness goes wrong, then that can help us to theorize about the normal functioning mind. Going back at least as far as John Locke (1689/1975), there has been some discussion about the philosophical implications of multiple personality disorder (MPD) which is now called “dissociative identity disorder” (DID). Questions abound: Could there be two centers of consciousness in one body? What makes a person the same person over time? What makes a person a person at any given time? These questions are closely linked to the traditional philosophical problem of personal identity, which is also importantly related to some aspects of consciousness research. Much the same can be said for memory disorders, such as various forms of amnesia (see Gennaro 1996a, chapter 9). Does consciousness require some kind of autobiographical memory or psychological continuity? On a related front, there is significant interest in experimental results from patients who have undergone a commissurotomy, which is usually performed to relieve symptoms of severe epilepsy when all else fails. During this procedure, the nerve fibers connecting the two brain hemispheres are cut, resulting in so-called “split-brain” patients (Bayne 2010).
Philosophical interest is so high that there is now a book series called Philosophical Psychopathology published by MIT Press. Another rich source of information comes from the provocative and accessible writings of neurologists on a whole host of psychopathologies, most notably Oliver Sacks (starting with his 1987 book) and, more recently, V. S. Ramachandran (2004; see also Ramachandran and Blakeslee 1998). Another launching point came from the discovery of the phenomenon known as "blindsight" (Weiskrantz 1986), which is very frequently discussed in the philosophical literature regarding its implications for consciousness. Blindsight patients are blind in a well-defined part of the visual field (due to cortical damage), and yet, when forced, can guess, with a higher than expected degree of accuracy, the location or orientation of an object in the blind field.
There is also philosophical interest in many other disorders, such as phantom limb pain (where one feels pain in a missing or amputated limb), various agnosias (such as visual agnosia where one is not capable of visually recognizing everyday objects), and anosognosia (which is denial of illness, such as when one claims that a paralyzed limb is still functioning, or when one denies that one is blind). These phenomena raise a number of important philosophical questions and have forced philosophers to rethink some very basic assumptions about the nature of mind and consciousness. Much has also recently been learned about autism and various forms of schizophrenia. A common view is that these disorders involve some kind of deficit in self-consciousness or in one’s ability to use certain self-concepts. (For a nice review article, see Graham 2002.) Synesthesia is also a fascinating abnormal phenomenon, although not really a “pathological” condition as such (Cytowic 2003). Those with synesthesia literally have taste sensations when seeing certain shapes or have color sensations when hearing certain sounds. It is thus an often bizarre mixing of incoming sensory input via different modalities.
One of the exciting results of this relatively new sub-field is the important interdisciplinary interest that it has generated among philosophers, psychologists, and scientists (such as in Graham 2010, Hirstein 2005, and Radden 2004).
Two final areas of interest involve animal and machine consciousness. In the former case it is clear that we have come a long way from the Cartesian view that animals are mere "automata" and that they do not even have conscious experience (perhaps partly because they do not have immortal souls). In addition to the obviously significant behavioral similarities between humans and many animals, much more is known today about other physiological similarities, such as brain and DNA structures. To be sure, there are important differences as well, and there are, no doubt, some genuinely difficult "grey areas" where one might have legitimate doubts about whether certain animals or organisms are conscious, such as small rodents, some birds and fish, and especially various insects. Nonetheless, it seems fair to say that most philosophers today readily accept that a significant portion of the animal kingdom is capable of having conscious mental states, though there are still notable exceptions to that rule (Carruthers 2000, 2005). Of course, this is not to say that various animals can have all of the same kinds of sophisticated conscious states enjoyed by human beings, such as reflecting on philosophical and mathematical problems, enjoying artworks, thinking about the vast universe or the distant past, and so on. However, it still seems reasonable to believe that animals can have at least some conscious states, from rudimentary pains to various perceptual states and perhaps even to some level of self-consciousness. A number of key areas are under continuing investigation. For example, to what extent can animals recognize themselves, such as in a mirror, in order to demonstrate some level of self-awareness? To what extent can animals deceive or empathize with other animals, either of which would indicate awareness of the minds of others? These and other important questions are at the center of much current theorizing about animal cognition. (See Keenan et al. 2003 and Bekoff et al. 2002.) In some ways, the problem of knowing about animal minds is an interesting sub-area of the traditional epistemological "problem of other minds": How do we even know that other humans have conscious minds? What justifies such a belief?
The possibility of machine (or robot) consciousness has intrigued philosophers and non-philosophers alike for decades. Could a machine really think or be conscious? Could a robot really subjectively experience the smelling of a rose or the feeling of pain? One important early launching point was a well-known paper by the mathematician Alan Turing (1950), which proposed what has come to be known as the "Turing test" for machine intelligence and thought (and perhaps consciousness as well). The basic idea is that if a machine could fool an interrogator (who could not see the machine) into thinking that it was human, then we should say it thinks or, at least, has intelligence. However, Turing was probably overly optimistic about whether anything even today can pass the Turing test, as most programs are specialized and have very narrow uses. One cannot ask the machine about virtually anything, as Turing had envisioned. Moreover, even if a machine or robot could pass the Turing test, many remain very skeptical as to whether this demonstrates genuine machine thinking, let alone consciousness. For one thing, many philosophers would not take such purely behavioral (e.g., linguistic) evidence to support the conclusion that machines are capable of having phenomenal first-person experiences. Merely using words like "red" does not ensure that there is the corresponding sensation of red or any real grasp of the meaning of "red." Turing himself considered numerous objections and offered his own replies, many of which are still debated today.
Another much discussed argument is John Searle's (1980) famous Chinese Room Argument, which has spawned an enormous amount of literature since its original publication (see also Searle 1984; Preston and Bishop 2002). Searle is concerned to reject what he calls "strong AI," the view that suitably programmed computers literally have a mind; that is, they really understand language and actually have other mental capacities similar to humans. This is contrasted with "weak AI," the view that computers are merely useful tools for studying the mind. Searle supports his argument against strong AI with a thought experiment in which he imagines himself in a room, following English instructions for manipulating Chinese symbols in order to produce appropriate answers to questions in Chinese. Searle argues that, despite the appearance of understanding Chinese (say, from outside the room), he does not understand Chinese at all; he is merely manipulating symbols on the basis of syntax alone. Since this is what computers do, no computer, merely by running a program, genuinely understands anything, and so, by implication, strong AI is false (and real thought or consciousness is absent as well). Searle replies to numerous possible criticisms in his original paper (which also comes with extensive peer commentary), but suffice it to say that not everyone is satisfied with his responses. For example, it might be argued that, if we follow Searle's analogy and thought experiment, the entire room or "system" understands Chinese: no individual part of the room understands Chinese (including Searle himself), but the entire system does, which includes the instructions and so on.
Searle’s larger argument, however, is that one cannot get semantics (meaning) from syntax (formal symbol manipulation).
Despite heavy criticism of the argument, two central issues raised by Searle continue to be of deep interest. First, how and when does one distinguish mere "simulation" of some mental activity from genuine "duplication"? Searle's view is that computers are, at best, merely simulating understanding and thought, not really duplicating it. Much as we might say that a computerized hurricane simulation does not duplicate a real hurricane, Searle insists the same goes for any alleged computer "mental" activity. We do, after all, distinguish between real diamonds or leather and mere simulations, which are just not the real thing. Second, and perhaps even more important, when considering just why computers really can't think or be conscious, Searle interestingly falls back on a biologically based argument. In essence, he says that computers or robots are just not made of the right stuff with the right kind of "causal powers" to produce genuine thought or consciousness. After all, even a materialist does not have to allow that any kind of physical stuff can produce consciousness, any more than any type of physical substance can, say, conduct electricity. Of course, this raises a whole host of other questions which go to the heart of the metaphysics of consciousness. To what extent must an organism or system be physiologically like us in order to be conscious? Why is having a certain biological or chemical makeup necessary for consciousness? Why exactly couldn't an appropriately built robot be capable of having conscious mental states? How could we even know either way? However one answers these questions, it seems that building a truly conscious Commander Data is, at best, still just science fiction.
In any case, the growing areas of cognitive science and artificial intelligence are major fields within philosophy of mind and can importantly bear on philosophical questions of consciousness. Much of current research focuses on how to program a computer to model the workings of the human brain, such as with so-called “neural (or connectionist) networks.”
- Alter, T. and S.Walter, eds. Phenomenal Concepts and Phenomenal Knowledge: New Essays on Consciousness and Physicalism. New York: Oxford University Press, 2007.
- Armstrong, D. A Materialist Theory of Mind. London: Routledge and Kegan Paul, 1968.
- Armstrong, D. "What is Consciousness?" In The Nature of Mind. Ithaca, NY: Cornell University Press, 1981.
- Baars, B. A Cognitive Theory of Consciousness. Cambridge: Cambridge University Press, 1988.
- Baars, B. In The Theater of Consciousness. New York: Oxford University Press, 1997.
- Baars, B., Banks, W., and Newman, J. eds. Essential Sources in the Scientific Study of Consciousness. Cambridge, MA: MIT Press, 2003.
- Balog, K. "Conceivability, Possibility, and the Mind-Body Problem." In Philosophical Review 108: 497-528, 1999.
- Bayne, T. & Chalmers, D. “What is the Unity of Consciousness?” In Cleeremans, 2003.
- Bayne, T. The Unity of Consciousness. New York: Oxford University Press, 2010.
- Bekoff, M., Allen, C., and Burghardt, G. The Cognitive Animal: Empirical and Theoretical Perspectives on Animal Cognition. Cambridge, MA: MIT Press, 2002.
- Blackmore, S. Consciousness: An Introduction. Oxford: Oxford University Press, 2004.
- Block, N. "Troubles with Functionalism.” In Readings in the Philosophy of Psychology, Volume 1, Ned Block, ed., Cambridge, MA: Harvard University Press, 1980a.
- Block, N. "Are Absent Qualia Impossible?" Philosophical Review 89: 257-74, 1980b.
- Block, N. "Inverted Earth." In Philosophical Perspectives, 4, J. Tomberlin, ed., Atascadero, CA: Ridgeview Publishing Company, 1990.
- Block, N. "On a Confusion about the Function of Consciousness." In Behavioral and Brain Sciences 18: 227-47, 1995.
- Block, N. "Mental Paint and Mental Latex." In E. Villanueva, ed. Perception. Atascadero, CA: Ridgeview, 1996.
- Block, N. "The higher order approach to consciousness is defunct.” Analysis 71: 419-431, 2011.
- Block, N, Flanagan, O. & Guzeledere, G. eds. The Nature of Consciousness. Cambridge, MA: MIT Press, 1997.
- Block, N. & Stalnaker, R. "Conceptual Analysis, Dualism, and the Explanatory Gap." Philosophical Review 108: 1-46, 1999.
- Botterell, A. “Conceiving what is not there.” In Journal of Consciousness Studies 8 (8): 21-42, 2001.
- Boyd, R. "Materialism without Reductionism: What Physicalism does not entail." In N. Block, ed. Readings in the Philosophy of Psychology, Vol.1. Cambridge, MA: Harvard University Press, 1980.
- Brentano, F. Psychology from an Empirical Standpoint. New York: Humanities, 1874/1973.
- Brook, A. Kant and the Mind. New York: Cambridge University Press, 1994.
- Brook, A. & Raymont, P. A Unified Theory of Consciousness. Forthcoming, 2006.
- Byrne, A. "Some like it HOT: Consciousness and Higher-Order Thoughts." In Philosophical Studies 86:103-29, 1997.
- Byrne, A. "Intentionalism Defended." In Philosophical Review 110: 199-240, 2001.
- Byrne, A. “What Phenomenal Consciousness is like.” In Gennaro 2004a.
- Campbell, N. A Brief Introduction to the Philosophy of Mind. Ontario: Broadview, 2004.
- Carruthers, P. “Brute Experience.” In Journal of Philosophy 86: 258-269, 1989.
- Carruthers, P. Phenomenal Consciousness. Cambridge, MA: Cambridge University Press, 2000.
- Carruthers, P. “HOP over FOR, HOT Theory.” In Gennaro 2004a.
- Carruthers, P. Consciousness: Essays from a Higher-Order Perspective. New York: Oxford University Press, 2005.
- Carruthers, P. “Meta-cognition in animals: A skeptical look.” Mind and Language 23: 58-89, 2008.
- Caston, V. “Aristotle on Consciousness.” Mind 111: 751-815, 2002.
- Chalmers, D.J. "Facing up to the Problem of Consciousness." In Journal of Consciousness Studies 2:200-19, 1995.
- Chalmers, D.J. The Conscious Mind. Oxford: Oxford University Press, 1996.
- Chalmers, D.J. “What is a Neural Correlate of Consciousness?” In Metzinger 2000.
- Chalmers, D.J. Philosophy of Mind: Classical and Contemporary Readings. New York: Oxford University Press, 2002.
- Chalmers, D.J. “The Representational Character of Experience.” In B. Leiter ed. The Future for Philosophy. Oxford: Oxford University Press, 2004.
- Churchland, P. S. "Consciousness: the Transmutation of a Concept." In Pacific Philosophical Quarterly 64: 80-95, 1983.
- Churchland, P. S. Neurophilosophy. Cambridge, MA: MIT Press, 1986.
- Cleeremans, A. The Unity of Consciousness: Binding, Integration and Dissociation. Oxford: Oxford University Press, 2003.
- Crick, F. and Koch, C. "Toward a Neurobiological Theory of Consciousness." In Seminars in Neuroscience 2: 263-75, 1990.
- Crick, F. H. The Astonishing Hypothesis: The Scientific Search for the Soul. New York: Scribners, 1994.
- Cytowic, R. The Man Who Tasted Shapes. Cambridge, MA: MIT Press, 2003.
- Dainton, B. Stream of Consciousness. New York: Routledge, 2000.
- Dainton, B. The Phenomenal Self. Oxford: Oxford University Press, 2008.
- Damasio, A. The Feeling of What Happens: Body and Emotion in the Making of Consciousness. New York: Harcourt, 1999.
- Dennett, D. C. "Quining Qualia." In A. Marcel & E. Bisiach eds. Consciousness in Contemporary Science. New York: Oxford University Press, 1988.
- Dennett, D.C. Consciousness Explained. Boston: Little, Brown, and Co, 1991.
- Dennett, D. C. Sweet Dreams. Cambridge, MA: MIT Press, 2005.
- Dretske, F. Naturalizing the Mind. Cambridge, MA: MIT Press, 1995.
- Droege, P. Caging the Beast. Philadelphia & Amsterdam: John Benjamins Publishers, 2003.
- Edelman, G. The Remembered Present: A Biological Theory of Consciousness. New York: Basic Books, 1989.
- Edelman, G. & Tononi, G. “Reentry and the Dynamic Core: Neural Correlates of Conscious Experience.” In Metzinger 2000.
- Flohr, H. "An Information Processing Theory of Anesthesia." In Neuropsychologia 33: 9, 1169-80, 1995.
- Fodor, J. "Special Sciences.” In Synthese 28, 77-115, 1974.
- Foster, J. The Immaterial Self: A Defence of the Cartesian Dualist Conception of Mind. London: Routledge, 1996.
- Gendler, T. & Hawthorne, J. eds. Conceivability and Possibility. Oxford: Oxford University Press, 2002.
- Gennaro, R.J. “Brute Experience and the Higher-Order Thought Theory of Consciousness.” In Philosophical Papers 22: 51-69, 1993.
- Gennaro, R.J. Consciousness and Self-consciousness: A Defense of the Higher-Order Thought Theory of Consciousness. Amsterdam & Philadelphia: John Benjamins, 1996a.
- Gennaro, R.J. Mind and Brain: A Dialogue on the Mind-Body Problem. Indianapolis: Hackett Publishing Company, 1996b.
- Gennaro, R.J. “Leibniz on Consciousness and Self Consciousness.” In R. Gennaro & C. Huenemann, eds. New Essays on the Rationalists. New York: Oxford University Press, 1999.
- Gennaro, R.J. “Jean-Paul Sartre and the HOT Theory of Consciousness.” In Canadian Journal of Philosophy 32: 293-330, 2002.
- Gennaro, R.J. “Higher-Order Thoughts, Animal Consciousness, and Misrepresentation: A Reply to Carruthers and Levine,” 2004. In Gennaro 2004a.
- Gennaro, R.J., ed. Higher-Order Theories of Consciousness: An Anthology. Amsterdam and Philadelphia: John Benjamins, 2004a.
- Gennaro, R.J. “The HOT Theory of Consciousness: Between a Rock and a Hard Place?” In Journal of Consciousness Studies 12 (2): 3-21, 2005.
- Gennaro, R.J. “Between Pure Self-referentialism and the (extrinsic) HOT Theory of Consciousness.” In Kriegel and Williford 2006.
- Gennaro, R.J. “Animals, consciousness, and I-thoughts.” In R. Lurz ed. Philosophy of Animal Minds. New York: Cambridge University Press, 2009.
- Gennaro, R.J. The Consciousness Paradox: Consciousness, Concepts, and Higher-Order Thoughts. Cambridge, MA: MIT Press, 2012.
- Goldman, A. “Consciousness, Folk Psychology and Cognitive Science.” In Consciousness and Cognition 2: 264-82, 1993.
- Graham, G. “Recent Work in Philosophical Psychopathology.” In American Philosophical Quarterly 39: 109-134, 2002.
- Graham, G. The Disordered Mind. New York: Routledge, 2010.
- Gunther, Y. ed. Essays on Nonconceptual Content. Cambridge, MA: MIT Press, 2003.
- Guzeldere, G. “Is Consciousness the Perception of what passes in one’s own Mind?” In Metzinger 1995.
- Hameroff, S. "Quantum Computation in Brain Microtubules? The Penrose-Hameroff 'Orch OR' Model of Consciousness." In Philosophical Transactions Royal Society London A 356: 1869-96, 1998.
- Hardin, C. Color for Philosophers. Indianapolis: Hackett, 1986.
- Harman, G. "The Intrinsic Quality of Experience." In J. Tomberlin, ed. Philosophical Perspectives, 4. Atascadero, CA: Ridgeview Publishing, 1990.
- Heidegger, M. Being and Time (Sein und Zeit). Translated by J. Macquarrie and E. Robinson. New York: Harper and Row, 1927/1962.
- Hill, C. S. "Imaginability, Conceivability, Possibility, and the Mind-Body Problem." In Philosophical Studies 87: 61-85, 1997.
- Hill, C. and McLaughlin, B. "There are fewer things in Reality than are dreamt of in Chalmers' Philosophy." In Philosophy and Phenomenological Research 59: 445-54, 1998.
- Hirstein, W. Brain Fiction. Cambridge, MA: MIT Press, 2005.
- Horgan, T. and Tienson, J. "The Intentionality of Phenomenology and the Phenomenology of Intentionality." In Chalmers 2002.
- Husserl, E. Ideas: General Introduction to Pure Phenomenology (Ideen zu einer reinen Phänomenologie und phänomenologischen Philosophie). Translated by W. Boyce Gibson. New York: MacMillan, 1913/1931.
- Husserl, E. Cartesian Meditations: An Introduction to Phenomenology. Translated by Dorian Cairns. The Hague: M. Nijhoff, 1929/1960.
- Jackson, F. "Epiphenomenal Qualia." In Philosophical Quarterly 32: 127-136, 1982.
- Jackson, F. "What Mary didn't Know." In Journal of Philosophy 83: 291-5, 1986.
- James, W. The Principles of Psychology. New York: Henry Holt & Company, 1890.
- Kant, I. Critique of Pure Reason. Translated by N. Kemp Smith. New York: MacMillan, 1965.
- Keenan, J., Gallup, G., and Falk, D. The Face in the Mirror. New York: HarperCollins, 2003.
- Kim, J. "The Myth of Non-Reductive Physicalism." In Proceedings and Addresses of the American Philosophical Association, 1987.
- Kim, J. Supervenience and Mind. Cambridge, MA: Cambridge University Press, 1993.
- Kim, J. Mind in a Physical World. Cambridge, MA: MIT Press, 1998.
- Kind, A. “What’s so Transparent about Transparency?” In Philosophical Studies 115: 225-244, 2003.
- Kirk, R. Raw Feeling. New York: Oxford University Press, 1994.
- Kirk, R. Zombies and Consciousness. New York: Oxford University Press, 2005.
- Kitcher, P. Kant’s Transcendental Psychology. New York: Oxford University Press, 1990.
- Kobes, B. “Telic Higher-Order Thoughts and Moore’s Paradox.” In Philosophical Perspectives 9: 291-312, 1995.
- Koch, C. The Quest for Consciousness: A Neurobiological Approach. Englewood, CO: Roberts and Company, 2004.
- Kriegel, U. “PANIC Theory and the Prospects for a Representational Theory of Phenomenal Consciousness.” In Philosophical Psychology 15: 55-64, 2002.
- Kriegel, U. “Consciousness, Higher-Order Content, and the Individuation of Vehicles.” In Synthese 134: 477-504, 2003a.
- Kriegel, U. “Consciousness as Intransitive Self-Consciousness: Two Views and an Argument.” In Canadian Journal of Philosophy 33: 103-132, 2003b.
- Kriegel, U. “Consciousness and Self-Consciousness.” In The Monist 87: 182-205, 2004.
- Kriegel, U. “Naturalizing Subjective Character.” In Philosophy and Phenomenological Research, forthcoming.
- Kriegel, U. “The Same Order Monitoring Theory of Consciousness.” In Kriegel and Williford 2006.
- Kriegel, U. Subjective Consciousness. New York: Oxford University Press, 2009.
- Kriegel, U. & Williford, K. Self-Representational Approaches to Consciousness. Cambridge, MA: MIT Press, 2006.
- Kripke, S. Naming and Necessity. Cambridge, MA: Harvard University Press, 1972.
- Leibniz, G. W. Discourse on Metaphysics. Translated by D. Garber and R. Ariew. Indianapolis: Hackett, 1686/1991.
- Leibniz, G. W. The Monadology. Translated by R. Latta. London: Oxford University Press, 1720/1925.
- Levine, J. "Materialism and Qualia: The Explanatory Gap." In Pacific Philosophical Quarterly 64: 354-361, 1983.
- Levine, J. "On Leaving out what it's like." In M. Davies and G. Humphreys, eds. Consciousness: Psychological and Philosophical Essays. Oxford: Blackwell, 1993.
- Levine, J. Purple Haze: The Puzzle of Conscious Experience. Cambridge, MA: MIT Press, 2003.
- Loar, B. "Phenomenal States." In Philosophical Perspectives 4, 81-108, 1990.
- Loar, B. "Phenomenal States". In N. Block, O. Flanagan, and G. Guzeldere eds. The Nature of Consciousness. Cambridge, MA: MIT Press, 1997.
- Loar, B. “David Chalmers’s The Conscious Mind.” Philosophy and Phenomenological Research 59: 465-72, 1999.
- Locke, J. An Essay Concerning Human Understanding. Ed. P. Nidditch. Oxford: Clarendon, 1689/1975.
- Ludlow, P., Nagasawa, Y, & Stoljar, D. eds. There’s Something about Mary. Cambridge, MA: MIT Press, 2004.
- Lurz, R. “Neither HOT nor COLD: An Alternative Account of Consciousness.” In Psyche 9, 2003.
- Lurz, R. “Either FOR or HOR: A False Dichotomy.” In Gennaro 2004a.
- Lycan, W.G. Consciousness and Experience. Cambridge, MA: MIT Press, 1996.
- Lycan, W.G. “A Simple Argument for a Higher-Order Representation Theory of Consciousness.” Analysis 61: 3-4, 2001.
- Lycan, W.G. "The Superiority of HOP to HOT." In Gennaro 2004a.
- Macpherson, F. “Colour Inversion Problems for Representationalism.” In Philosophy and Phenomenological Research 70: 127-52, 2005.
- Mandler, G. Mind and Emotion. New York: Wiley, 1975.
- Marshall, J. and Zohar, D. The Quantum Self: Human Nature and Consciousness Defined by the New Physics. New York: Morrow, 1990.
- McGinn, C. "Can we solve the Mind-Body Problem?" In Mind 98:349-66, 1989.
- McGinn, C. The Problem of Consciousness. Oxford: Blackwell, 1991.
- McGinn, C. "Consciousness and Space.” In Metzinger 1995.
- Metzinger, T. ed. Conscious Experience. Paderborn: Ferdinand Schöningh, 1995.
- Metzinger, T. ed. Neural Correlates of Consciousness: Empirical and Conceptual Questions. Cambridge, MA: MIT Press, 2000.
- Moore, G. E. "The Refutation of Idealism." In G. E. Moore Philosophical Studies. Totowa, NJ: Littlefield, Adams, and Company, 1903.
- Nagel, T. "What is it like to be a Bat?" In Philosophical Review 83: 435-456, 1974.
- Natsoulas, T. “The Case for Intrinsic Theory I. An Introduction.” In The Journal of Mind and Behavior 17: 267-286, 1996.
- Neander, K. “The Division of Phenomenal Labor: A Problem for Representational Theories of Consciousness.” In Philosophical Perspectives 12: 411-434, 1998.
- Papineau, D. Philosophical Naturalism. Oxford: Blackwell, 1994.
- Papineau, D. "The Antipathetic Fallacy and the Boundaries of Consciousness." In Metzinger 1995.
- Papineau, D. “Mind the Gap.” In J. Tomberlin, ed. Philosophical Perspectives 12. Atascadero, CA: Ridgeview Publishing Company, 1998.
- Papineau, D. Thinking about Consciousness. Oxford: Oxford University Press, 2002.
- Perry, J. Knowledge, Possibility, and Consciousness. Cambridge, MA: MIT Press, 2001.
- Penrose, R. The Emperor's New Mind: Computers, Minds and the Laws of Physics. Oxford: Oxford University Press, 1989.
- Penrose, R. Shadows of the Mind. Oxford: Oxford University Press, 1994.
- Place, U. T. "Is Consciousness a Brain Process?" In British Journal of Psychology 47: 44-50, 1956.
- Polger, T. Natural Minds. Cambridge, MA: MIT Press, 2004.
- Preston, J. and Bishop, M. eds. Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. New York: Oxford University Press, 2002.
- Radden, J. Ed. The Philosophy of Psychiatry. New York: Oxford University Press, 2004.
- Ramachandran, V.S. A Brief Tour of Human Consciousness. New York: PI Press, 2004.
- Ramachandran, V.S. and Blakeslee, S. Phantoms in the Brain. New York: Harper Collins, 1998.
- Revonsuo, A. Consciousness: The Science of Subjectivity. New York: Psychology Press, 2010.
- Robinson, W.S. Understanding Phenomenal Consciousness. New York: Cambridge University Press, 2004.
- Rosenthal, D. M. “Two Concepts of Consciousness." In Philosophical Studies 49:329-59, 1986.
- Rosenthal, D. M. "The Independence of Consciousness and Sensory Quality." In E. Villanueva, ed. Consciousness. Atascadero, CA: Ridgeview Publishing, 1991.
- Rosenthal, D.M. “State Consciousness and Transitive Consciousness.” In Consciousness and Cognition 2: 355-63, 1993a.
- Rosenthal, D. M. "Thinking that one thinks." In M. Davies and G. Humphreys, eds. Consciousness: Psychological and Philosophical Essays. Oxford: Blackwell, 1993b.
- Rosenthal, D. M. "A Theory of Consciousness." In N. Block, O. Flanagan, and G. Guzeldere, eds. The Nature of Consciousness. Cambridge, MA: MIT Press, 1997.
- Rosenthal, D. M. “Introspection and Self-Interpretation.” In Philosophical Topics 28: 201-33, 2000.
- Rosenthal, D. M. “Varieties of Higher-Order Theory.” In Gennaro 2004a.
- Rosenthal, D.M. Consciousness and Mind. New York: Oxford University Press, 2005.
- Rosenthal, D.M. “Exaggerated reports: reply to Block.” Analysis 71: 431-437, 2011.
- Ryle, G. The Concept of Mind. London: Hutchinson and Company, 1949.
- Sacks, O. The Man Who Mistook His Wife for a Hat and Other Essays. New York: Harper and Row, 1987.
- Sartre, J.P. Being and Nothingness. Trans. Hazel Barnes. New York: Philosophical Library, 1956.
- Seager, W. Theories of Consciousness. London: Routledge, 1999.
- Seager, W. “A Cold Look at HOT Theory.” In Gennaro 2004a.
- Searle, J. “Minds, Brains, and Programs.” In Behavioral and Brain Sciences 3: 417-57, 1980.
- Searle, J. Minds, Brains and Science. Cambridge, MA: Harvard University Press, 1984.
- Searle, J. The Rediscovery of the Mind. Cambridge. MA: MIT Press, 1992.
- Siewert, C. The Significance of Consciousness. Princeton, NJ: Princeton University Press, 1998.
- Shallice, T. From Neuropsychology to Mental Structure. Cambridge: Cambridge University Press, 1988.
- Shear, J. Explaining Consciousness: The Hard Problem. Cambridge, MA: MIT Press, 1997.
- Shoemaker, S. "Functionalism and Qualia." In Philosophical Studies, 27, 291-315, 1975.
- Shoemaker, S. "Absent Qualia are Impossible." In Philosophical Review 90, 581-99, 1981.
- Shoemaker, S. "The Inverted Spectrum." In Journal of Philosophy, 79, 357-381, 1982.
- Silberstein, M. "Emergence and the Mind-Body Problem." In Journal of Consciousness Studies 5: 464-82, 1998.
- Silberstein, M. "Converging on Emergence: Consciousness, Causation and Explanation." In Journal of Consciousness Studies 8: 61-98, 2001.
- Skinner, B. F. Science and Human Behavior. New York: MacMillan, 1953.
- Smart, J.J.C. "Sensations and Brain Processes." In Philosophical Review 68: 141-56, 1959.
- Smith, D.W. “The Structure of (self-)consciousness.” In Topoi 5: 149-56, 1986.
- Smith, D.W. Mind World: Essays in Phenomenology and Ontology. Cambridge, MA: Cambridge University Press, 2004.
- Stubenberg, L. Consciousness and Qualia. Philadelphia & Amsterdam: John Benjamins Publishers, 1998.
- Swinburne, R. The Evolution of the Soul. Oxford: Oxford University Press, 1986.
- Thau, M. Consciousness and Cognition. Oxford: Oxford University Press, 2002.
- Titchener, E. An Outline of Psychology. New York: Macmillan, 1901.
- Turing, A. “Computing Machinery and Intelligence.” In Mind 59: 433-60, 1950.
- Tye, M. Ten Problems of Consciousness. Cambridge, MA: MIT Press, 1995.
- Tye, M. Consciousness, Color, and Content. Cambridge, MA: MIT Press, 2000.
- Tye, M. Consciousness and Persons. Cambridge, MA: MIT Press, 2003.
- Van Gulick, R. "Physicalism and the Subjectivity of the Mental." In Philosophical Topics 13, 51-70, 1985.
- Van Gulick, R. "Nonreductive Materialism and Intertheoretical Constraint." In A. Beckermann, H. Flohr, J. Kim, eds. Emergence and Reduction. Berlin and New York: De Gruyter, 1992.
- Van Gulick, R. "Understanding the Phenomenal Mind: Are we all just armadillos?" In M. Davies and G. Humphreys, eds., Consciousness: Psychological and Philosophical Essays. Oxford: Blackwell, 1993.
- Van Gulick, R. "What would count as Explaining Consciousness?" In Metzinger 1995.
- Van Gulick, R. "Inward and Upward: Reflection, Introspection and Self-Awareness." In Philosophical Topics 28: 275-305, 2000.
- Van Gulick, R. "Higher-Order Global States HOGS: An Alternative Higher-Order Model of Consciousness." In Gennaro 2004a.
- Van Gulick, R. “Mirror Mirror – is that all?” In Kriegel and Williford 2006.
- Velmans, M. and S. Schneider eds. The Blackwell Companion to Consciousness. Malden, MA: Blackwell, 2007.
- Weisberg, J. “Same Old, Same Old: The Same-Order Representation Theory of Consciousness and the Division of Phenomenal Labor.” Synthese 160: 161-181, 2008.
- Weisberg, J. “Misrepresenting consciousness.” Philosophical Studies 154: 409-433, 2011a.
- Weisberg, J. “Abusing the Notion of What-it’s-like-ness: A Response to Block.” Analysis 71: 438-443, 2011b.
- Weiskrantz, L. Blindsight. Oxford: Clarendon, 1986.
- Wilkes, K. V. "Is Consciousness Important?" In British Journal for the Philosophy of Science 35: 223-43, 1984.
- Wilkes, K. V. "Yishi, Duo, Us and Consciousness." In A. Marcel & E. Bisiach, eds., Consciousness in Contemporary Science. Oxford: Oxford University Press, 1988.
- Williford, K. “The Self-Representational Structure of Consciousness.” In Kriegel and Williford 2006.
- Wundt, W. Outlines of Psychology. Leipzig: W. Engelmann, 1897.
- Yablo, S. "Concepts and Consciousness." In Philosophy and Phenomenological Research 59: 455-63, 1999.
- Zelazo, P, M. Moscovitch, and E. Thompson. Eds. The Cambridge Handbook of Consciousness. Cambridge: Cambridge University Press, 2007.
Rocco J. Gennaro
University of Southern Indiana
U. S. A.