
Philosophy of Mental Illness

The Philosophy of Mental Illness is an interdisciplinary field of study that combines views and methods from the philosophy of mind, psychology, neuroscience, and moral philosophy in order to analyze the nature of mental illness. Philosophers of mental illness are concerned with examining the ontological, epistemological, and normative issues arising from varying conceptions of mental illness.

Central questions within the philosophy of mental illness include: whether the concept of a mental illness can be given a scientifically adequate, value-free specification; whether mental illnesses should be understood as a form of distinctly mental dysfunction; and whether mental illnesses are best identified as discrete mental entities with clear inclusion/exclusion criteria or as points along a continuum between the normal and the ill. Philosophers critical of the concept of mental illness argue that it is not possible to give a value-neutral specification of mental illnesses. They argue that our concept of mental illness is often used to disguise the ways in which mental illness categories enforce pre-existing norms and power relations. Questions remain about the relationship between the role that values play within the concept of mental illness and how those values relate to concepts of illness more generally. Philosophers who consider themselves a part of the neurodiversity movement claim that our concept of mental illness should be revised to reflect the diverse forms of cognition that humans are capable of without stigmatizing individuals who are statistically non-normal.

There are also epistemological issues concerning the relationship between mental illness and diagnosis. Historically, the central issue has been how nosologies (or classification-schemas) of mental illness, especially the Diagnostic and Statistical Manual of Mental Disorders (the DSM), relate mental dysfunctions to observable symptoms. Mental dysfunction, on the DSM system, is identified via the presence or absence of a set of symptoms from a checklist. Those critical of the use of behavioral symptoms to diagnose mental disorders argue that symptoms are useless without a theoretically adequate conception of what it means for a mental mechanism to function poorly. A minimal constraint on a diagnostic system is that it must be able to distinguish a person with a genuine mental illness from a person suffering from a problem in living. Critics argue that the DSM, as currently constituted, cannot do this.

Lastly, there are a host of questions surrounding the relationship between mental illness and normativity. If mental illness undermines rational agency, then there are questions about the degree to which the mentally ill are capable of autonomous decision-making. This bears on the degree of moral and legal responsibility that the mentally ill can be assigned. Further questions about agency arise in bioethics concerning the standing of the demands that the mentally ill make on healthcare professionals. For example, individuals with Body Integrity Identity Disorder (BIID) request that surgeons amputate their healthy limbs in order to restore a balance between their internal self-representation and their external body. Bioethicists are divided over whether the requests of patients with BIID are genuinely autonomous and deserving of assent.

Table of Contents

  1. Conceptions of Mental Illness
    1. Alienism and Freud
    2. DSM I – II
    3. The Bio-psycho-social Model: DSM III – 5
  2. Criticisms of the Bio-psycho-social Model
    1. Mental Illness as Dysfunction
    2. Neurobiological Eliminativism
    3. The Role of Value
    4. Szasz's Myth of Mental Illness
  3. Neurodiversity
    1. Motivation
    2. Autism, Psychopathy
  4. Responsibility and Autonomy
    1. Psychopathy
    2. Body Integrity Identity Disorder and Gender Dysphoria
  5. References and Further Reading

1. Conceptions of Mental Illness

a. Alienism and Freud

Although there are many conceptions of madness found throughout the ancient world (demon possession, divine revelation or punishment, and so forth), the conception of a distinctly mental form of illness did not fully begin to crystallize, at least in the West, until the latter half of the nineteenth century with the creation and rise of mental asylums. Individuals who were housed in asylums were thought to be psychotic or insane. Psychotic inmates were seen as distinctly different from the non-psychotic population, and this justified the creation of special-purpose institutions for their containment. Psychotics were construed as suffering from distinct and localizable organic brain disorders and were treated by medical professionals known as alienists (Elliott 2004, 471). Writing at the time, the German psychiatrist Emil Kraepelin divided psychoses into one of two types: mood disorders and dementia praecox (Kraepelin 1896a, 1896b). All other forms of distress were thought to fall outside the province of the asylum and of medical treatment.

Non-psychotic individuals who were unhappy with their lives, who felt intense anxiety, or who might vacillate between periods of high and low motivation were not thought to have a psychotic problem. These individuals were not treated or seen by alienists but instead sought help from their family, friends, or clergy (Horwitz 2001, 40). Non-psychotic dysphoria (unhappiness) was, in this context, understood not as a distinctly medical problem but instead in a variety of other ways: as a social problem in living, a character flaw, or simply a different way of life. The solution for the unhappiness that many individuals suffered was not found within the asylum but instead in the family, God, or other social institutions. There was, at this time, a clear distinction between medical problems resulting in psychosis and social problems that caused suffering.

Sigmund Freud trained in the alienist tradition and received his medical degree in 1881. Freud's theory of the mind and of mental illness would revolutionize Western understanding of psychology and would become the dominant paradigm in the psychological sciences until the middle of the twentieth century. Where the alienists saw mental illnesses as manifestations of rather discrete brain dysfunctions, Freud would come to understand the distinction between normal persons and the mentally ill as arising from conflicts among psychological mechanisms that are part of the normal human repertoire (Freud 1915/1977; Ghaemi 2003, 4). Where the alienist understood non-psychotic unhappiness as a problem to be solved by individuals and their support networks, Freud understood problems in living as the domain of the psychotherapist. Paul Roazen famously quotes Freud as claiming that “[t]he optimum conditions for (psychoanalysis) exist where it is not needed—that is, among the healthy” (Roazen 1992, 160).

Crucial to Freud's reorientation of mental disorder was his view of the relationship between observable behavioral symptoms and underlying psychological disorder. Unlike Kraepelin, who understood psychotic behavioral symptoms as closely tied to specific underlying brain dysfunctions, Freud did not believe that behavioral symptoms could be tied to unique disorders. The underlying source of human psychological suffering, as Freud understood it, stemmed from universal childhood experiences that, if poorly resolved or understood, could manifest in adulthood as neurosis. Freud saw repression, for example, as a normal part of development from child to adult. An individual could fail to properly apply repressive techniques, and poorly repressed trauma could then manifest itself in myriad ways: obsessive cleaning, chronic gambling, melancholia, and so forth (Freud 1915/1989; Horwitz 2001, 43). Simply noting melancholia in a patient would not be enough for a psychoanalyst to understand the source of repressive dysfunction.

Because a client troubled by chronic gambling and another client troubled by hysteria could, in principle, be suffering from the same underlying repressive dysfunction, any diagnostic manual based on Freud's conception of mental disorders would not hold symptoms as fundamentally important to the diagnostic process. Instead, Freud claimed that the only way to truly understand a patient's underlying psychological dysfunction is to acquire detailed information about a person, including his or her dreams, in order to uncover repressed sexual urges (Freud 1905/1997).

The first two editions of the DSM were largely based on Freud's underlying theory of repression and mental disorder. This nosology would dominate western thinking about the mentally ill until the 1960s.

b. DSM I – II

When the first edition of the Diagnostic and Statistical Manual of Mental Disorders was published in 1952, psychodynamic theorists dominated the clinical and academic landscape. Nearly two-thirds of the chairs of psychology departments in American universities were held by psychoanalysts, and the emerging DSM strongly reflected their theoretical assumptions (Strand 2011, 277). By this point, psychiatry was seen as an extension of medical practice. This required the creation of a nosology, a catalogue of disorders for clinical practice (Graham 2010, 5).

The first edition of the DSM represented a revolutionary change in the conception and treatment of mental illness. Given the expansive notion of mental illness proposed by Freud and his students, the first two editions of the DSM concluded that many individuals who had not previously been seen as mentally ill would benefit from therapy. Because symptoms were only weakly correlated with underlying illness on the psychodynamic view, only repeated, and intensive, conversations with a qualified analyst could help a person get to the root cause of his or her problems (Horwitz 2002, 45; Grob 1991, 425). The first edition of the DSM devotes a significant proportion of its 145 pages to a classification of mental illness concepts and terms (American Psychiatric Association 1952, 73-119). Unlike future editions of the manual, illnesses are not identified in terms of a series of symptoms but instead in terms of the underlying psychological conflict responsible. For example, the manual defines Psychoneurotic Disorder as:

[T]hose disturbances in which “anxiety” is a chief characteristic, directly felt or expressed, or automatically controlled by such defenses as depression, conversion, dissociation, displacement, phobia formation, or repetitive thoughts and acts…a psychoneurotic reaction may be defined as one in which the personality, in its struggle for adjustment to internal and external stresses, utilizes the mechanisms listed above to handle the anxiety created (American Psychiatric Association 1952, 12-13).

Yet the presence of anxiety is not sufficient to diagnose psychoneurotic disorder. Anxiety must result from an underlying conflict between the personality and other stressors. It is the role of the analyst, in this context, to discover whether this underlying conflict is present. This cannot be done by merely observing symptoms; only psychodynamic therapy can discover the true cause of a patient’s anxiety (Grob 1991, 423).

Dissent against this system of classification and diagnosis arose from many groups both external and internal to the psychiatric discipline; these criticisms solidified in the 1960s. The emerging “anti-psychiatry” movement would come to challenge the assumptions that had grounded psychiatric practice in the first half of the 20th century. Conceptions of mental illness, the underlying assumptions behind the process of diagnosis, and even the status of psychiatry as a science were all subject to sustained critiques. Several of the most vocal critics of psychiatry were themselves clinicians: R.D. Laing, David Rosenhan, and Thomas Szasz. The latter’s critique of psychiatric practice and the conception of mental illness is outlined in detail below in section 2(d).

Rosenhan conducted a pair of famous studies that would radically undermine the scientific legitimacy of clinical diagnosis, especially in the eyes of the public. In his initial study, Rosenhan, along with seven other volunteers, attempted to gain admission to several mental health institutions (Rosenhan 1973, 179-180). Rosenhan instructed his collaborators to claim that they heard a voice which said the words “empty,” “hollow,” and “thud.” For all other questions, Rosenhan instructed his subjects to answer honestly. These words were chosen specifically because they did not correspond to a known pattern of neurosis in the DSM II. Rosenhan and all of his confederates were admitted to mental institutions; all but one of Rosenhan’s subjects were admitted under a diagnosis of schizophrenia (Rosenhan 1973, 180). Once admitted, subjects were held for as long as 52 days before they were released, despite the fact that they did not play-act any symptoms of mental illness. Rosenhan noted that once he and his confederates had been admitted, everyday behavior began to be interpreted as a sign of their underlying mental illness. Subjects who were taking notes for later use, for example, were described as engaging in unusual “writing behavior”; subjects speaking with a psychiatrist about their childhood and family were construed as having telltale neurotic early-childhood issues (Rosenhan 1973, 183). Since these subjects were not otherwise in distress, Rosenhan concluded that the diagnostic process was not tracking an underlying ‘mental illness’ in any of the pseudopatients and that it was, instead, unscientific and unfalsifiable.

Once Rosenhan publicized the results of his initial study, several institutions challenged his results by re-asserting the validity of the diagnostic process. They claimed that their institutions would not have fallen for Rosenhan’s ruse and challenged him to send pseudopatients to them for analysis. Rosenhan agreed and, despite the fact that no pseudopatients were actually sent, these institutions suspected at least 41 of their new patients (more than 20% of new patients over a three-month period) of being pseudopatients sent by Rosenhan (Rosenhan 1973, 181). Again it seemed as if the diagnostic process was incapable of accurately separating the mentally ill from the healthy. In part as a result of critiques of the diagnostic process like Rosenhan’s studies, the diagnostic model of psychiatry would be radically altered. Beginning as early as 1974, the American Psychiatric Association assigned a taskforce to prepare for the publication of the next edition of the DSM. The DSM III that resulted from this process, published in 1980, represented a rejection of the psychodynamic assumptions built into the previous versions of the manual and provided a framework for all future editions of the DSM.

c. The Bio-psycho-social Model: DSM III – 5

The most recent edition of the Diagnostic and Statistical Manual of Mental Disorders, the DSM 5, was published in 2013. This edition does not substantially modify the conception of mental disorder that has been offered by the manual since its third edition, first published in 1980. In comparison with the first edition of the DSM, the DSM 5 includes diagnostic criteria for more than 400 individual disorders. The conception of mental disorders used in the DSM 5 presents them as biological, psychological, or social dysfunctions in an individual; this model has, unsurprisingly, come to be called the bio-psycho-social model. It represents the current consensus view of mental disorder among psychological researchers and clinical practitioners. Psychologists disagree about whether to understand this definition conjunctively or disjunctively (Ghaemi 2007, 8). The bio-psycho-social model states:

A mental disorder is a syndrome characterized by clinically significant disturbance in an individual’s cognition, emotion regulation, or behavior that reflects a dysfunction in the psychological, biological, or developmental processes underlying mental functioning. Mental disorders are usually associated with significant distress or disability in social, occupational, or other important activities. An expectable or culturally approved response to a common stressor or loss, such as the death of a loved one, is not a mental disorder. Socially deviant behavior (e.g., political, religious, or sexual) and conflicts that are primarily between the individual and society are not mental disorders unless the deviance or conflict results from a dysfunction in the individual, as described above (American Psychiatric Association 2013, 20).

From this characterization we can extract five criteria that serve to distinguish a genuine mental disorder from other sorts of issues (problems in living, character flaws, and so forth). In order for a disturbance to be classified as a mental disorder it must:

  1. Be a clinically significant disturbance in cognition, emotion regulation, or behavior
  2. Reflect a dysfunction in biological, psychological, or developmental processes
  3. Usually cause distress or disability
  4. Not reflect a culturally approved response to a situation or event
  5. Not result purely from a problem between an individual and her society

All of the criteria, with the exception of the 'distress' criterion, are individually necessary and jointly sufficient for the classification of a patient's symptoms as stemming from a mental disorder. Prior to the seventh printing of the DSM II, homosexuality had been included as a mental disorder. The revisions to the text that took place between the DSM II and the DSM III were meant to make clear that homosexuality (“an interest in sexual relations or contact with members of the same sex”) does not satisfy the criteria for a mental disorder so long as it is not accompanied by clinically significant dysphoria (American Psychiatric Association 1973, 2). However, an individual who feels dysphoria as a result of their homosexuality can be diagnosed with an Unspecified Sexual Dysfunction in the DSM 5 (American Psychiatric Association 2013, 450).

The third, 'distress,' criterion is neither necessary nor sufficient to qualify a mental disturbance as a disorder. This can be seen by examining the process for the diagnosis of the 'cluster B' personality disorders (histrionic, anti-social, borderline, and narcissistic personality disorders). Subjects with cluster B disorders often do not suffer as a result of their condition. Indeed, those with Antisocial Personality Disorder, for example, may not see themselves as disordered and may even approve of their condition. This has led some individuals with personality disorders to align with the emerging neurodiversity movement (see section 3 below). The patterns of behavior manifested by those with cluster B personality disorders are, nonetheless, understood as reflecting clinically significant disturbances in cognition, emotion regulation, and behavior. They form a distinct class of mental disorders in the DSM (American Psychiatric Association 2013, 645-684). Some philosophers have argued that the cluster B personality disorders should not be understood as mental disorders but are instead better understood as distinctly moral disorders. Louis Charland argues for this conclusion. He claims that, unlike the cluster A and C personality disorders, the only treatment for the cluster B disorders is distinctly moral improvement. Because this fact about their treatment uniquely distinguishes them from all other mental disorders in the DSM, Charland concludes that they reflect moral (as opposed to value-neutral) dysfunction (Charland 2004a, 67).

Since the publication of the DSM III, mental disorders have been defined as being caused by a clinically significant dysfunction of a mental mechanism. Because the definition of mental illness invokes the concept of dysfunction, it is often subject to critique (see the following section). Although the general definition of mental disorder used by the DSM invokes the concept of dysfunction, the diagnostic criteria for particular mental illnesses do not. It is instructive to provide an example of how particular disorders are defined within the manual. Anorexia Nervosa, for example, is defined by the presence of three clusters of behavioral symptoms (American Psychiatric Association 2013, 338-339):

A: Restriction of energy intake relative to requirements, leading to a significantly low body weight in the context of age, sex, developmental trajectory, and physical health.

B: Intense fear of gaining weight or of becoming fat, or persistent behavior that interferes with weight gain, even though at a significantly low weight.

C: Disturbance in the way in which one's body weight or shape is experienced, undue influence of body weight or shape on self-evaluation, or persistent lack of recognition of the seriousness of the current low body weight.

Importantly, this characterization presents Anorexia Nervosa as a distinct, specifiable condition present in the person, one whose underlying dysfunction is uniquely picked out by the presence of the behavioral symptoms identified as A and C; “B” symptoms are seen as common but not essential to diagnosis (American Psychiatric Association 2013, 340). Given the underlying conception of mental disorder offered by the authors of the DSM, Anorexia Nervosa cannot simply be the result of a conflict between the individual and society. Nor can it result from an individual accurately adopting social norms about beauty, appearance, or diet. It must instead result from a combination of biological, psychological, and/or social dysfunctions. However, the diagnostic criteria do not indicate what this underlying dysfunction consists in, nor do they offer any evidence that the symptoms associated with the disorder are caused by the same underlying dysfunction.

In part for reasons of this sort, both the general bio-psycho-social model of mental disorder and the uses of the model to characterize particular disorders, like Anorexia Nervosa, have been subject to repeated criticism by philosophers.

2. Criticisms of the Bio-psycho-social Model

The definition of mental disorder that stems from the bio-psycho-social model has been subject to several criticisms. Philosophical critiques of the definition of disorder have ranged from calling for revision and specification of the concept of disorder to abandonment of the concept altogether. Many of the 400+ disorders that appear in the DSM have also been criticized. In some cases, these critiques are internal: the disorders do not appear to match the criteria of mental disorder offered in the DSM itself; in other cases, as with some critics of schizophrenia, the aim is to undermine both the existence of the disorder and the conception of mental disorder that results in its inclusion (Bentall 1990).

Many members of the antipsychiatry movement described in section 1b were responsible for setting the stage for the criticisms of the bio-psycho-social model. Although in part political, this movement saw the rise of several alternative conceptualizations of human function and dysfunction that have come to challenge the DSM’s conception of a mental disorder. Chief among these were Thomas Szasz’s influential arguments that mental illness is a ‘myth’ and the rise of ‘positive psychology’ as a viable alternative psychological ideology.

a. Mental Illness as Dysfunction

Nassir Ghaemi has criticized the current conception of mental disorder as resting on an unscientific political compromise between factions of clinical and research psychologists, adopted in part to stave off the looming threat of neurobiological eliminativism (see section 2b). Ghaemi argues that many psychologists read the bio-psycho-social conception of mental illness disjunctively, focusing predominantly on whichever method for understanding a disorder best fits their own assumptions about dysfunction (Ghaemi 2003, 10). Although this compromise presents the appearance of consensus, Ghaemi argues that it is an illusion. He advocates for a form of integrationism about mental disorder that has become popular in some circles (Ghaemi 2003, 291; Kandel 1998, 458). A true integration of biology and psychology requires solving the currently unresolved question of consciousness and how consciousness is realized by the brain. Because this question does not appear to be resolvable in the near term, integrationists of Ghaemi’s stripe have offered a placeholder for a replacement of the bio-psycho-social model rather than a true alternative to current models.

Philosophers have also criticized the DSM conception of mental disorder for its lack of a unified theory of dysfunction. The current DSM requires that mental disorders reflect a dysfunction of biological, psychological, or social mechanisms, though the text itself is silent on what it would mean for a mechanism to be dysfunctional and does not provide any evidence that the symptoms used for clinical diagnosis of a disorder are caused by a single underlying dysfunction.

Philosophers have appealed to at least three distinct senses of dysfunction to craft a unified theory of mental disorder: etiological, propensity, and normative dysfunction. Etiological function (and dysfunction) is construed in evolutionary terms. A mechanism is functioning, in the etiological sense, if it evolved to serve a specific purpose and if it is currently serving its evolved purpose. In order to discover the function of a mental mechanism, it is necessary to discover its evolved function. Dysfunction can then be construed relative to this purpose (Wakefield 1999, 374; Boorse 1997, 12). A mechanism is dysfunctional if it is not fulfilling its evolutionary purpose. Depression, for example, may, in some cases, represent a dysfunction of a mechanism evolved for affective regulation. However, evolutionary psychological theories of mental function are still in their early stages. Furthermore, some philosophers want to allow for the possibility that many of our mental mechanisms may not have evolved to serve the functions to which we currently put them.

A propensity function is not constrained by past selective pressures but instead defines function and dysfunction based upon current and future selective success. Male aggression, for example, may have been adaptive in our ancestral environment and hence may represent a case of proper functioning on the etiological theory. On the propensity view, however, male aggression may not be adaptive for life in modern societies even if it was fitness-enhancing in our ancestral environments. Male aggression might therefore, on a propensity account of function and dysfunction, represent a dysfunctional mechanism and hence a mental disorder (Woolfolk 1999, 663). As with the evolutionary view, propensity conceptions of mental dysfunction have the advantage of appealing to descriptive evidence in order to determine whether or not a specific pattern of behavior is fitness-enhancing in its current context (Boorse 1975, 52). However, crafting a theory of function and dysfunction in terms of present-day fitness appears to allow some conditions to count as mental disorders that we may be averse to labeling mental illnesses. One major issue with appealing to propensity function is that it appears to resurrect defunct mental illnesses. Drapetomania, the mental illness attributed to runaway slaves in the nineteenth century, would appear to satisfy the definition of a propensity dysfunction. Dysphoria caused by the conditions of slavery and a strong desire to abandon one’s current condition are arguably not fitness-enhancing, in a strictly evolutionary sense, and therefore appear to satisfy the criteria for a propensity dysfunction (Woolfolk 1999, 664).

Purely normative accounts of dysfunction have not garnered much favor within the psychological or philosophical disciplines. On a purely normative account of dysfunction, a person is said to be mentally ill based upon whether or not his or her behavior fits within the context of a larger normative network. Whether we choose to call a person mentally ill or merely ‘bad’ may depend on whether or not we believe agents like this should be held morally responsible, and the concept of responsibility may not be reducible to non-normative elements (Edwards 2009, 78). On such conceptions, it is impossible to avoid invoking evaluative concepts when describing what a mental illness is or why a particular set of behaviors is best understood as an illness (Fulford 2001, 81).

George Graham argues for what he calls an unmediated defense of realism about mental illness; Graham's defense is unmediated in the sense that he does not believe that it must be shown that mental illnesses are natural kinds or result from brain disorders in order to qualify as legitimate classification-independent kinds (Graham 2014, 133-134). Instead, he argues that “the very idea of a mental disorder or illness is the notion of a type of impairment or incapacity in the rational or reasons-responsive operation of one or more basic psychological faculties or capacities in persons” (Graham 2014, 135-136; see also Graham 2013a and 2013b). These capacities, according to Graham, can be described or analyzed at various levels of implementation, though their malfunction is understood in normative terms.

Perhaps the most influential theory of dysfunction within the philosophical literature is offered by Jerome Wakefield. Wakefield’s conception of mental disorder attempts to bridge the gap between purely objective conceptions of disorder and subjective or normative views. On Wakefield’s view, a mental disorder arises only when a ‘harmful dysfunction’ is present. This combines two different types of concepts: a concept of dysfunction and a concept of harm. Wakefield’s conception of dysfunction is etiological. A mechanism is dysfunctional if it fails to perform the purpose that it evolved to perform. Etiological function is objective in the sense that etiological functions are pan-cultural: they are not dependent on cultural conceptions of function or value. They are, instead, a set of universally shared facts about human nature. The ‘harmfulness’ criterion, on the other hand, is sensitive to cultural context (Wakefield 1992, 381; Wakefield 1999, 380). As Wakefield understands it, a person is harmed by a disorder if the disorder causes a “deprivation of benefit to a person as judged by the standards of the person’s culture” (Wakefield 1992, 384). In order to be diagnosed with a mental illness, it must be true that an agent’s behavior is caused by a malfunction of an evolved mental mechanism and, furthermore, it must also be true that this dysfunction, in the context of that individual’s culture, deprives her of a benefit.

Wakefield, and others like him, argue that it is crucial to distinguish between mental disorders and other sources of distress (Horwitz 1999). The crucial factor in determining proper treatment for a person’s dysphoria, these philosophers argue, is a proper identification of the cause of his or her distress. Mental disorders are caused by harmful mental dysfunctions. Other sources of distress are better understood as problems in living. Many types of unhappiness that are typically diagnosed as depression, on this view, are better understood not as stemming from depression but instead through an examination of the larger social factors that may be causing unhappiness. Because the DSM’s conception of mental disorder is cause-insensitive and identifies depression only via symptoms, it fails to distinguish between these two forms of unhappiness. The danger, these philosophers argue, is that mental disorders are construed as problems that reside within an agent, so treatments are focused only on, usually pharmaceutically aided, symptom relief. If distress has an underlying social cause, if it is a problem in living, then its treatment should have a radically different focus. For example, the symptoms described by Betty Friedan as caused by “the problem that has no name” fit relatively easily within the rubric of depression (Friedan 1963, 17). However, Wakefieldian views would resist this diagnosis. The underlying cause of the distress Friedan describes is social, and the best treatment of this form of distress is social change. Sadness that is caused by patriarchal or misogynist cultures does not represent a malfunction in the evolved mechanisms in a person (it may represent just the opposite). On the DSM model, treatment may merely mask these depressive symptoms pharmacologically and would only serve to maintain the unjust social situations that give rise to them. The best understanding of “the problem that has no name” is to identify it as a problem in living stemming from misogynist assumptions about the roles available to women in a culture. Wakefield's view is realist in the sense that its conception of mental dysfunction is independent of our acts of classification (Graham 2014, 125). Because function is grounded in etiology, there is a culturally independent fact-of-the-matter regarding the presence or absence of a dysfunction in a person.

Wakefield’s harmfulness criterion allows different cultures to come to different conclusions about which evolutionary dysfunctions will rightfully count as mental disorders. On Wakefield’s view, homosexuality may represent a genuine evolutionary dysfunction (in the sense that exclusive homosexual behavior threatens the propagation of genes into future generations), but homosexuality is not harmful in a contemporary, broadly Western cultural context. Because it is not harmful in this cultural context, it is a mistake to think of homosexuality as a disorder. This leaves open the possibility that the harmfulness criterion would allow homosexuality to be a legitimate mental disorder in other cultural contexts.

Other critics have assailed Wakefield’s appeal to etiological dysfunction. Aside from the general epistemological problem of identifying the evolutionary functions of psychological mechanisms, there are two problems that arise with an appeal to etiological dysfunction. First, some have argued that depression is an evolved response and hence could not be construed as a mental disorder on Wakefield’s view (Bentall 1992, 96; Woolfolk 1999, 660). Second, some have argued that many of our mental mechanisms may not have arisen as a result of evolutionary selection pressures. They may be evolutionary “spandrels” in Stephen Jay Gould’s sense. The white color of bones necessarily results from the composition of bone but is itself not a property explicitly selected for in an evolutionary sense. A spandrel cannot dysfunction in Wakefield’s terms because it lacks a selection history that fixes its purpose. Although spandrels can confer adaptive advantages, they are importantly not themselves traits that were selected for. If any of our mental mechanisms are spandrels, then Wakefield’s view cannot explain disorders arising from their use (Gould and Lewontin 1979, 581; Woolfolk 1999, 664; Zachar 2014, 120). Famously, some philosophers have argued that complex human abilities, like our capacity for language, may themselves be evolutionary spandrels (Chomsky 1988; Lilienfeld and Marino 1995, 413). Furthermore, recent critics have suggested that too much of the recent work on mental illness has focused exclusively on elucidating the concept of illness or dysfunction and has neglected to consider how advances within the philosophy of mind and the cognitive sciences might change our conception of the ‘mental’ component of mental illness (Brülde and Radovic 2006, 99).

Philosophers who are critical of attempts to define a distinctly mental conception of disorder have been motivated, in part due to the arguments above, to move in two different directions. Some have proposed that we replace the concept of mental disorder with a strictly neurological conception of dysfunction; doing so, they argue, would place disorders on a clearer and more scientific footing (see section 2b). Others have argued that evaluative concepts are an ineliminable part of the concept of mental illness and should be openly acknowledged (see section 2c).

b. Neurobiological Eliminativism

The transition from the DSM II to the DSM III brought with it the adoption of the biomedical model for diagnosis. Unlike the psychodynamic model, which saw symptoms as providing little insight into the underlying cause of distress, the biomedical model afforded symptoms pride of place in diagnosis. For much of the 20th century, the biomedical model of diagnosis understood the symptoms that a patient brought to her clinician as providing insight into the underlying disorder(s) that caused the patient to consult the clinician in the first place.

Psychology, as a therapeutic discipline, adopted this model of diagnosis and, in the process, began to categorize patient symptoms into discrete groupings, each caused by a specific mental disorder. However, some philosophers have noted that the biomedical model itself has changed rapidly in the 21st century and that this has created a dilemma for clinical psychological models of diagnosis. Patient reports, in current biomedical models of diagnosis, have lost their pride of place as the key markers for diagnosis. In their place, clinicians turn to laboratory test results to determine the true illness responsible for a patient’s suffering. One motivation for this change, within general clinical practice, is that symptoms underdetermine diagnosis. Adopting this new biomedical model for mental illnesses, however, has been seen by some as presenting an eliminativist threat to mental disorders (Broome and Bortolotti 2009, 27).

Eliminative materialism arose in the 20th century to challenge views about the mind that assign mental states explanatory/causal roles. The views targeted by the eliminativist were grounded in common-sense or “folk” ideas about everyday mental states like beliefs and desires. These views situated mental states as entities belonging to proper scientific explanation. Eliminativists argued that folk psychological theories of the mind would fare no better than our folk biological or physical theories and that the folk mental states should be eliminated from scientific explanations (Churchland 1981). Mature cognitive and neuro-sciences do not need to make reference to folk psychological states like beliefs and desires in order to explain human behavior; furthermore, the neural architecture of the brain itself does not appear to house the discrete localizable states, like beliefs and desires, that are assumed by folk psychology (Ramsey, Stich and Garon 1990). Folk psychological theories tell us that the best explanation of human behavior (including mental illness) should be given in terms of dysfunctional mental states (delusions, compulsive desires, and so forth). The eliminativist, on the other hand, undermines this view by claiming that nothing in the brain corresponds to these folk-psychological states and that we are better off without appealing to them.

Eliminative materialism has arisen as a challenge to the DSM construal of mental disorders in the form of cognitive neuropsychology. “This process may start as a process of reduction (from the disorder behaviorally defined to its neurobiological bases), but in the end psychiatry as we know it will not just be given solid scientific foundations by being reduced to neurobiology; it will disappear altogether” (Broome and Bortolotti 2009, 27). Just as biomedical diagnosis has shifted away from patient report toward more direct assessments using bio-physiological metrics, the eliminativist argues that the same process should occur with mental disorders. Neurological dysfunction should supplant folk psychological discussions of mental dysfunction. In much the same way as Alzheimer’s disease is understood as a neurological brain disorder, the eliminativist claims that a mature cognitive neuroscience will replace contemporary classifications of mental disorders with classifications of neurological dysfunction (Roberson and Mucke 2006, 781).

Philosophers who resist the eliminativist reduction of the mental to the neurological argue that at least some types of mental disorders cannot be understood without appealing to mental states. Plausible candidates for this type of disorder include delusions (Broome and Bortolotti 2009, 30), personality disorders (Charland 2004a, 70), and various sexual disorders (Soble 2004, 56; Goldman 2002, 40). Personality disorders, especially those falling under the category of ‘cluster B’ disorders, appear to require reference to individuals' having acquired bad characters in order to accurately explain why the behavior stemming from the illness is disordered. If normative competence necessarily makes reference to belief-forming mechanisms (having knowledge about moral concepts, recognition of the agency of other persons, and so forth), then cluster B personality disorders cannot be fully reduced to their neurobiological underpinnings without a meaningful loss of the disordered element of the disorder (Pickard 2011, 182).

On a related note, philosophers have attempted to resist purely mechanistic neuro-scientific explanations of psychology. Jeffrey Poland and Barbara Von Eckardt argue that the DSM's bio-psycho-social model relies on a mechanistic model of mental illness but that purely mechanistic models fail to explain the representational aspects of a mental illness; in their words, “[a]ny such account will extend well beyond what one would naturally assume to be the mechanism of (or the breakdown of the mechanism of) the cognition or behavior in question” (Von Eckardt and Poland 2004, 982). Peter Zachar argues for a view he calls the Imperfect Community Model. This model is based on a rejection of essentialism grounded in pragmatism; Zachar argues that mental illnesses are united as a class despite lacking any necessary and sufficient conditions to define them. Mental disorders nonetheless bear a prototypical or family resemblance to one another that suggests a rough unity to the concept (Zachar 2014, 121-8).

c. The Role of Value

There are related questions that arise about the nature and role of value in mental illness. The first has to do with whether mental illness is a value-neutral concept. Nosologies of mental illness attempt to create value-neutral definitions of the disorders they contain. In the ideal, the concepts picked out by manuals like the DSM are supposed to reflect an underlying universal human reality. The mental disorders contained therein are, with only minor exception, not meant to project culturally relative normative value judgments onto the domain of the mental.

The DSM includes a “cultural formulation” section meant to distinguish culturally specific, explicitly normative disorders from the supposed pan-cultural, value-neutral disorders that make up the bulk of the manual (American Psychiatric Association 2013, 749). In part this approach stems from the idea that psychologists adhering to the bio-psycho-social model of mental disorders view their project as being on par with nosologies of non-mental disorders. There are two questions worth raising here. The first is whether or not this “likeness argument” has any merit; the second is whether or not the biomedical illness concept is, itself, value-neutral (Pickering 2003, 244). A heart attack, for example, is a disorder, on this model, no matter the time or location of the infarction. Heart attacks are, in this sense, natural kinds and proper objects for scientific study. A heart attack represents a particular form of cardiovascular dysfunction that is agnostic about the cultural or moral values of a particular community. Despite the fact that heart attacks may not present the same symptoms across different sufferers (some may grab their left arms, some may scream, some may fall to the ground, and so forth), what unites these heterogeneous-seeming symptoms is an underlying causal story that explains them (Boyd 1991, 127). Mental disorders are thought to operate on the same principle. The view that psychological symptoms are united by a common cause, however, may result from pre-theoretical assumptions about mental states (Murphy 2014, 111-121). Critics of the bio-psycho-social model argue that values are an essential component of the concept of mental illness. If values are an ineliminable part of the concept of mental illness, we should be led to ask what kinds of values are invoked by the concept.

Michel Foucault was an early critic of mental illness and mental health institutions. In his Madness and Civilization: A History of Insanity in the Age of Reason, Foucault argued that asylums, being institutions where ‘the mad’ were separated from the rest of society, emerged historically by the application of models of rationality that privileged individuals already in power. This model served to exclude many members of society from the circle of rational agency. Asylums functioned as a place for society to house these undesirable persons and to reinforce pre-existing power relations; cures, when available, represented conformity to existing power structures (Foucault 1961/1988). Foucault’s critique of mental disorder inspired a generation of psychologists, many of whom see themselves as part of a new counter-movement from within the discipline: the Positive Psychology movement. The constructivist and value-laden interpretation of the DSM’s bio-psycho-social model of mental disorder has led some within this movement to call for the abandonment of the model. There is an intrinsic problem, they argue, with viewing individuals as, primarily, vehicles of dysfunction. Those within the positive psychology movement argue that a new, openly value-laden, conception of human beings should supplant the manual: “[t]he illness ideology's conception of “mental disorder” and the various specific DSM categories of mental disorders are not reflections and mappings of psychological facts about people. Instead, they are social artifacts that serve the same sociocultural goals as our constructions of race, gender, social class, and sexual orientation—that of maintaining and expanding the power of certain individuals and institutions and maintaining social order as defined by those in power” (Maddux 2001, 15).

Hybrid views, like those of Jerome Wakefield, which attempt to delineate a value-neutral and a value-laden component to the concept of mental illness have also been subject to criticism for the role they assign value. Richard Bentall, for example, has argued that the supposedly objective components of these theories contain value-laden assumptions. Bentall argues that happiness satisfies the objective criteria for mental dysfunction (happiness is a rare mental state, it impairs judgment and decision making, and its neural correlates are at least partially well-understood); however, happiness is not viewed as a dysfunction (and consequently is not categorized as a mental illness) because we value the state for its own sake (Bentall 1999, 97). This view is echoed by constructivists about mental illness.

Constructivists about mental illness can hold a variety of positions about where the concept of social construction operates with regard to mental illness. At the least radical level, constructivists can hold that cultures impose models of ideal agency that are used to label sets of human behaviors as instances of ordered and disordered agency; behavioral syndromes, on this view, can be more or less pan-cultural, though each culture develops a theory of ideal agency that renders some of these syndromes ‘illnesses’ while other cultures may group the syndromes differently according to different values (Sam and Moreira 2012). A more thoroughgoing constructivism understands these packages or syndromes of behavior as themselves socially constructed; for example, the set of behaviors currently associated with depression would not be seen as a natural (categorization-independent) grouping of properties. Instead, the set of behaviors we call 'depressive' exists only because it has been grouped together by clinicians (for any number of reasons) (Church 2001, 396-397). This form of constructivism claims that the only way to explain why a set of behaviors, feelings, thoughts, and so forth, are grouped into a syndrome is that clinicians have created this grouping. Unlike the set of behaviors characteristic of a heart attack, for which we have a readily available causal story that unifies them, mental illnesses lack a clinician-independent explanation for their grouping. On this view, syndromes are akin to what Ian Hacking has called “interactive kinds” (Hacking 1995, Hacking 1999). For Hacking, while natural kinds represent judgment-independent groupings in the world, interactive kinds, “when known by people or those around them, and put to work in institutions, change the ways in which individuals experience themselves—and may even lead people to evolve their feelings and behaviors in part because they are so classified” (Hacking 1999, 103). To think of mental illnesses, like multiple personality disorder (now Dissociative Identity Disorder), as interactive kinds is to say that multiple personality disorder is not a basic fact about human neurology discoverable by the neuroscientist; instead, once the concept of multiple personality disorder is identified, once a set of behaviors has come to be seen as a manifestation of the condition and clinicians have been trained to identify and treat it, then individuals will begin to understand themselves in terms of the new concept and behave accordingly. Some have argued that many paraphilias and personality disorders are best understood on the interactive kind model (Soble 2004, 60; Charland 2004a, 70).

Critics will note that the distinction between natural kinds and socially constructed kinds does not exhaust the alternatives. According to Nick Haslam, this distinction is tacitly invoked by realists about mental illness; it masks, however, several possible alternative accounts of mental illness that allow for intermediate, less essentialist, even pluralist views (Haslam 2014, 13-20; see also Murphy 2014, 109).

d. Szasz's Myth of Mental Illness

Perhaps the best-known critic of mental illness to arise out of the anti-psychiatry movement of the 1960s is Thomas Szasz. He published The Myth of Mental Illness in 1961, initiating a wide-ranging discussion of how best to understand the concept of a mental illness and its relation to physical illnesses. Szasz’s work was (and continues to be) the subject of significant discussion and debate. Szasz’s main claim is that the psychiatric field, and its concomitant conception of a mental illness, rests “on a serious, albeit simple, error: it rests on mistaking or confusing what is real with what is simulation; literal meaning with metaphorical meaning; medicine with morals...mental illness is a metaphorical disease” (Szasz 1974/1962, x). Mental illness should be understood as a metaphorical disease, according to Szasz, because it results from clinicians making a kind of category mistake: taking concepts derived from one disciplinary body, medicine and the natural sciences, and applying them to a realm where they do not rightfully apply, human agency (Cresswell, 24).

According to Szasz, the proper world-view of the natural sciences is to construe their objects of study as law-like and deterministic. All knowledge in this domain is thought to be reducible to, and explainable in terms of, the physical. Medicine, being a branch of science, understands medical illness on this model. A malfunctioning heart valve exhibits a characteristic physical discontinuity with a functional one; it has typical effects on the function of the valve, and these effects are identifiable independently of patient symptoms. The treatment for medical illnesses relies on a thoroughly physicalist picture of the workings of the human body. Szasz believed that importing the concept of a physical illness into the realm of mental illness is fundamentally incompatible with our concept of human agency. This results from two lines of argument. The first is that mental illnesses, unlike physical ones, are not typically reducible to biophysical causes (Szasz 1979, 22). If biological dysfunction cannot be used as a basis for delimiting mental illness, then the only option left is to appeal to non-normative behavior. Szasz’s second concern is similar to the worries of neurobiological eliminativism mentioned in section 2(b). Szasz argues that the eliminativist’s picture of human agency is, at best, incomplete. The root of the problem stems from the fact that Szasz believes that we must view agents as necessarily free, capable of choice, and responsible; “in behavioral science the logic of physicalism is patently false: it neglects the differences between persons and things and the effects of language on each” (Szasz 1974, 187). Szasz’s argument here is sometimes construed as an appeal to dualism. The physical world is deterministic, but the mental world must necessarily be free. Because the bio-psycho-social model uses concepts derived from the natural sciences in a realm where they do not rightfully apply (that is, human agency), mental illness, as a concept derived from the natural sciences, is a myth resulting from this category mistake. To say that mental illness is a myth, however, is not meant as a denigration of individuals who suffer. It is, instead, meant to more accurately categorize their suffering as resulting from a failure to conform to social, legal, or ethical norms (Pickard 2009, 85).

Szasz’s critics have responded along several lines. Some do not take issue with his underlying understanding of the illness concept but disagree with his claim that it is not applicable to mental phenomena. Mental illnesses, according to these critics, have been (or will soon be) reducible to neurological or neurochemical dysfunction. They argue that advances in neuroscience give us reason to think that the prospect of finding the neurological or neurochemical correlates for at least some of our mental illness categories is high (Bentall 2004, 307). Other critics have argued in the opposite direction, attacking Szasz’s construal of physical illness. Szasz’s arguments have been taken, by some, to imply that physical illness is itself a deeply evaluative category, reflective of value judgments in much the same way mental illness is meant to be on Szasz’s account (Fulford 2004; Kendell 2004). Still others have aimed to preserve Szasz’s primary claim that the overarching category of ‘mental illness’ will prove to be a non-natural interactive kind, reflective of our values and practices, while simultaneously maintaining that “particular kinds of mental illnesses may yet constitute valid scientific kinds” (Pickard 2009, 88).

3. Neurodiversity

Human cognitive and physical functions range widely across the species. Although most individuals fall within a statistically normal range in terms of their abilities in these arenas, statistical normalcy has long been criticized as a normative marker (Daniels 2007, 37-46). Advocates for what has come to be known as the ‘neurodiversity movement’ have begun, stemming in part from the criticisms of psychiatry and the DSM that arose in the 1960s, to push for widespread acceptance of the forms of cognition beyond the “neuro-normal” that individuals operate with (Herrera 2013, 11). Members of the neurodiversity movement understand it as “associated with the struggle for the civil rights of all those diagnosed with neurological or neurodevelopmental disorders” (Fenton and Krahn 2007, 1). Forms of cognition currently seen as dysfunctional, ill, or disordered are better understood as representing diverse ways of seeing and understanding the space of reasons. Proponents of neurodiversity claim that agents on the autism spectrum, those with personality disorders, attention deficit and hyperactivity disorder, dyslexia, and perhaps even those with psychopathic traits should not suffer from the stigma associated with the illness label. Individuals to whom these labels apply often demonstrate profound capabilities (artistic, mathematical, and scientific) that are inseparable from the condition underlying their illness-label (Glannon 2007, 3; Ghaemi 2011). Pluralism about forms of human agency should be encouraged once we fully understand the problematic ways in which norms have come to influence illness categories.

a. Motivation

Applying the label “mentally ill” or “disordered” can have long-term negative effects, not only by affecting how individuals to whom we apply the label view themselves (Charland 2004b, 338-340; Rosenhan 1973, 256) but also by affecting how others view and treat them (Didlake and Fordham 2013, 101). Often, the decision to create a new class of the mentally ill is made without consulting the groups involved. Homosexuality, for example, was labeled a mental disorder in the first two editions of the DSM until social and political movements, largely headed by homosexuals themselves, caused the American Psychiatric Association to re-assess its stance (Bayer and Spitzer 1982, 32). The effects that being labeled mentally ill or disordered have on persons are wide-ranging and durable enough to warrant caution. Those in the neurodiversity movement argue, from various perspectives, that clinicians continue to mistake diverse forms of cognition (variations from the neuro-normal) for mental illness because of the assumption, which advocates argue is mistaken, that deviation from statistically-normal neural development and function constitutes disorder. Advocates for neurodiversity typically argue along two lines. The first is to argue that our current concepts of mental dysfunction are in need of revision because they contain one or more of the problems described in section 2 of this entry. This line of argument focuses especially on the role of power and value in the construction of mental illness categories. The second line of argument is “firmly grounded in motivations of an egalitarian nature that seek to re-weight the interests of minorities so that they receive just consideration with the analogous interests of those currently privileged by extant social institutions” (Fenton and Krahn 2007, 1). Any resulting account of neurodiversity must aim to preserve useful categories of illness or mental disorder (if only for the purposes of treatment).

Perhaps the most forceful arguments from the neurodiversity perspective target the status of autism as a form of mental disorder. Much controversy has followed the APA’s decision to fold the diagnosis of Asperger’s syndrome into the more general category of Autism Spectrum Disorder.

b. Autism, Psychopathy

Autism Spectrum Disorder is the diagnosis applied to a wide range of individuals who have demonstrated persistent difficulty with social understanding and communication and whose symptoms emerge quite early in development. For example, the DSM-5 lists “[i]mpairment of the ability to change communication to match context or the needs of the listener,” “[d]ifficulties following rules for conversation and storytelling,” and “[d]ifficulties understanding what is not explicitly stated (e.g., making inferences) and nonliteral or ambiguous meanings of language” as diagnostic for ASD (American Psychiatric Association 2013, 50-51). Advocates for neurodiversity argue that it is unjust to attempt to force those with ASD to modify their behavior in order to more closely match neurotypical behavior, especially as a form of treatment for a disease or disorder. For example, efforts to “change the diets of people with ASD, force them to inhale oxytocin, and expose children to countless hours of floor time or social stories to try to make persons with ASD more like neurotypicals” fail to recognize that these attempts at changing individual cognition impose a narrow conception of proper functioning as a form of treatment. Furthermore, arguments for treatments whose aim is to reduce ASD symptoms, some argue, resemble arguments made by those wishing to eradicate other minority cultures defined by their functioning (for example, deaf communities) (Barnbaum 2013, 134). Some individuals with ASD argue that they constitute their own unique culture that deserves respect (Glannon 2007, 2). Advocates for neurodiversity argue that conceptions of mental illness that include ASD assume that deviation from neurotypical function is evidence of mental dysfunction rather than a sign of the forms of neurodiversity present in any human population. Autistic flourishing must be understood as being different from (though not a degenerate form of) neurotypical flourishing. Equally important within the call to neurodiversity is the project of identifying and articulating the ways that social institutions are built around, and advantage, persons of “neurotypical” function over others (Nadesan 2005, 30). Given the proper account of functional agency, many individuals with ASD should be seen as functional and not disordered or mentally ill. Although not as common, similar arguments are sometimes advanced for other mental disorders, including psychopathy.

Psychopathy is a controversial construct. As currently understood, it is a spectrum-disorder and is diagnosed using the revised version of what is known as the “Psychopathy Checklist” (PCL-R). Importantly, psychopathy does not appear in any version of the DSM as a distinct disorder. In its place, the DSM offers Antisocial Personality Disorder (ASPD). ASPD is intended as an equivalent diagnosis, though there is significant evidence that ASPD and psychopathy are distinct (Gurley 2009, 289; Ramirez 2013, 221-223). Psychopathy, discussed in more detail in section 4a, is characterized by an inability to feel empathic distress (to find the suffering of others painful) along with a pronounced difficulty in understanding the difference between norms that are purely conventional and other types of norms (Dolan and Fullam 2010, 995). Beyond these symptoms, however, psychopathy is characterizable as a distinct form of agency that raises questions about neurodiversity. Some psychopaths are ‘successful’ in the sense that they avoid incarceration while satisfying PCL-R diagnostic criteria. Psychopaths of this sort are much more likely to be found in corporate and other institutional settings (academia and the legal, medical, or corporate professions) (Babiak 2010, 174). In these contexts, some have argued that psychopathic personality traits should be seen as virtues (Anton 2013, 123-125). A more contextual understanding of psychopathy as a distinct way of relating to reasons, persons, and situations may lead us to appreciate the distinct contributions persons with these traits can make. Psychopathy, especially its effects on emotional and moral competence, has raised challenges to traditional theories of moral responsibility.

4. Responsibility and Autonomy

Accounts of mental illness are closely tied to accounts of agency and responsibility. It is not unusual, following an especially horrific crime, for public discourse to include questions about a suspect’s mental health history and whether a suspect’s alleged mental illness should excuse them from responsibility. Eric Harris, one of the teens responsible for the Columbine High School massacre, was called a psychopath by psychologist Robert Hare (Cullen 2004); media commentators noted that Adam Lanza, the man responsible for killing 26 people at Sandy Hook Elementary School in Connecticut, had been diagnosed with autism and raised questions about the role this may have played (Lysiak and Hutchinson 2013). One reason why discussions like these happen so quickly after a crime likely has to do with the effects that mental illness is thought to have on responsibility. One view on the matter states that “[t]o diagnose someone as mentally ill is to declare that the person is entitled to adopt the sick role and that we should respond as though the person is a passive victim of the condition. Thus, the distinguishing features of dysfunction that we should look for are not a universally consistent set of exclusive qualities, but things that provide the grounds for the normative claim made by applying the label ‘mental illness’” (Edwards 2009, 80). A more careful analysis of the relationship between mental illness and theories of moral responsibility indicates that several factors are often thought to matter when it comes to holding a person with a mental illness responsible for what s/he has done.

a. Psychopathy

Philosophical theories of moral responsibility often make a distinction between two different aspects of responsibility: attributability and accountability (Watson 1996, 228). Attributability refers to all of the capacities that someone must have in order to be responsible. One minimal condition may be that an action is attributable to a person if it stems from her agency in the right sort of way. Accidental muscle spasms, for example, are not typically attributable to an agent.

If we are dealing with an agent who has satisfied these attributability conditions, we can ask further questions about how we should treat this person after she has acted. This is a question about accountability. Some philosophers have claimed that there are many different forms of accountability, each requiring its own justification (Fischer and Tognazzini 2011, 390). It is one thing to establish that I intentionally made the rude comment at dinner; it is another to decide what should be done to me as a result. The former is a question about attributability, the latter a question about accountability.

Emotional capacities form an important component of many theories of moral responsibility (Fischer and Ravizza 1998; Strawson 1962; Wallace 1994; Brink and Nelkin 2013). Reactive attitude theories give moral emotions a central place within a conception of attributability and accountability. The term ‘reactive attitude’ was coined by Peter Strawson to refer to the emotional responses that operate in the context of responding (that is, reacting) to what people do (Strawson 1962). Resentment, indignation, disgust, guilt, hatred, love, and shame (and potentially many others) are reactive attitudes. For Strawson, and philosophers who have followed him, to respond to a person’s action with one of these reactive attitudes is simultaneously to hold him accountable. A theory of moral attributability could be derived, in principle, from an examination of the conditions under which we believe it to be appropriate to respond to someone with a reactive attitude.

Reactive attitudes focus on the quality of their target’s will. What this means is that our reactive emotions are sensitive to facts about an agent’s intentions, desires, her receptivity to reason, and so forth. Philosophers refer to this as the Quality of Will Thesis. Reactive attitude theorists explain excuses and exemptions from responsibility by analyzing how an agent’s will affects our attitudes. Legitimate excuses, for example, lead us to believe that we should extinguish our reactive response to a person. Excuses, in effect, show us that we were wrong about the quality of a target’s will (Wallace 1994, 136-147). If you push me and I fall, I might resent you; however, if I realize that you pushed me in order to save me from oncoming traffic, my attitude will be modified. My resentment will have been extinguished and the pushing excused. Excuses inform us that we were mistaken about what was done. They are singular events: they do not cast doubt on a person’s agency, her attributability, but instead show that we were wrong about the intention or purpose we attributed to her. Agents who appear to be universally excused are more traditionally said to be exempt from responsibility.

An exemption occurs when we are led to question whether a person meets our attributability requirements. Imagine again that I am knocked over, except this time I learn that the person who pushed me suffers from significant and persistent psychotic delusions. She believed, in that moment, that I was a member of the reptilian illuminati and that pushing me would get the grey aliens to repossess her hated neighbor’s house. Unlike a case involving excuse, a person whose agency is hampered by delusions as severe as these is not a proper target for our reactive attitudes at all (Strawson 1962; Broome and Bortolotti 2009, 30). Agency as abnormal as this is better seen as exempt from judgments of attributability or accountability. Exempt agents are not true sources of their actions because they lack the ability to regulate their behavior in an intelligibly rational way (Wallace 1994, 166-180). It would not be appropriate to resent these agents.

The logic of excuses and exemptions has been thought to show that a responsible agent must have epistemic access to moral reasons along with the ability to understand how these reasons fit together (Fischer and Ravizza 1998). Furthermore, some have proposed that an agent must have the opportunity to avoid wrongdoing (Shoemaker 2011, 6). Psychopaths seem to be rational and mentally ill at the same time; because of this, they create difficulty for many theories of responsibility.

Perhaps the most notable diagnostic feature shared by psychopaths is an inability to feel empathic distress. You feel empathic distress when you are pained by the perception of others in pain. The processes that ground empathic distress are not thought to be under conscious control. Psychopaths do not respond as most people do when exposed to signs of others in pain (Patrick, Bradley and Lang 1993). Although the degree to which someone can have the capacity for empathic distress varies, psychopaths differ significantly from non-psychopaths (Flor et al. 2002).

Furthermore, psychopaths have significant difficulty distinguishing between different types of norms. Psychologists have noted that most people are readily able to distinguish violations of moral norms from violations of conventional norms (Dolan and Fullam 2010). Normal persons tend to characterize moral norms as serious, harm-based, not dependent on authority, and generalizable beyond their present context; conventional norms are characterized as dependent on authority and contextual (Turiel 1977). Children begin to mark the distinction between moral and conventional norms at around two years of age (Turiel 1977). Psychopaths, on the other hand, fail to consistently or clearly note the differences between them. Most psychopaths tend to treat all norms as norms of convention. Non-psychopaths note a difference between punching someone (a paradigmatic moral norm violation) and failing to respond in the third-person to a formal invitation (a violation of a conventional norm). Although there is significant controversy about how much we can infer from the psychopath’s inability to mark the ‘moral/conventional’ distinction, this inability, along with their previously noted empathic deficit, has led some philosophers to argue that psychopaths cause problems for traditional theories of moral responsibility (Turiel 1977).

Reactive attitude theorists have argued that psychopaths should be exempt or excused from moral responsibility on both epistemic and fairness grounds. Given their difficulty distinguishing between moral and conventional norms, many reactive attitude theorists conclude that psychopaths are not properly sensitive to moral reasons and cannot fairly be held accountable (Fischer and Ravizza 1998; Wallace 1994; Russell 2004). It would be unfair to hold someone morally responsible if they cannot understand moral reasons; it is therefore inappropriate to direct reactive attitudes at psychopaths (Fischer and Ravizza 1998, 78-79). However, some have argued that psychopathic agency can ground accountability ascriptions.

David Shoemaker, for example, has argued that: “[a]s long as [the psychopath] has sufficient cognitive development to come to an abstract understanding of what the laws are and what the penalties are for violating them, it seems clear that he could arrive at the conclusion that [criminal] actions are not worth pursuing for purely prudential reasons, say. And with this capacity in place, he is eligible for criminal responsibility” (Shoemaker 2011, 119). Although Shoemaker's claim about legal responsibility has struck many as correct, the larger debate is over whether psychopaths are morally responsible for their choices given what we know about psychopathic agency.

If moral responsibility requires the capacity to understand moral reasons as distinctly moral and if, as many philosophers have supposed, this capacity is grounded in the ability to empathize with others, then psychopaths cannot understand moral reasons and should be excused. This puts pressure on Shoemaker’s characterization of psychopathic responsibility. If a psychopath’s understanding of moral reasons can be gauged by, for example, their poor ability to distinguish moral norms from conventional norms, then this also appears to be evidence of their lack of receptivity to moral reasons. Some philosophers have excused psychopaths for just this reason: “[c]ertain psychopaths...are not capable of recognizing...that there are moral reasons...this sort of individual is not appropriately receptive to reasons, on our account, and thus is not a morally responsible agent” (Fischer and Ravizza 1998, 79). Others, like Patricia Greenspan, have argued that psychopaths do have a form of moral disability, stemming from their emotional impairments, but that this disability should serve to mitigate, not extinguish, their responsibility (Greenspan 2003, 437).

Some philosophers note the consequences that psychopathic moral receptivity has for the quality of will thesis. If reactive attitudes are sensitive to the quality of an agent’s will, and psychopaths do not understand morality, then psychopaths cannot express immoral wills. If psychopaths cannot act on a will that merits reactive accountability, then they lack attributability altogether. Jay Wallace has argued that “[w]hat makes it appropriate to exempt the psychopath from accountability...is the fact that psychopathy...disables an agent’s capacities for reflective self control” (Wallace 1994, 178).

Others argue that psychopaths may be held accountable by appealing to non-moral reactive attitudes like hatred, disgust, or contempt. These attitudes, they claim, can be targeted at the quality of a psychopath’s will even if it is granted that psychopaths cannot act on immoral wills (Talbert 2012, 100). This is true even if the psychopath cannot appreciate that we have moral reasons for caring about our status as agents. Insofar as the psychopath can make judgments that express a condemnable quality of will, then, in the words of Patricia Greenspan, “[the psychopath] is a fair target of resentment for any harm attributable to his intention to the extent that the reaction is appropriate to his nature and deeds. He need not be “ultimately” responsible in the sense that implies freedom to escape blame” (Greenspan 2003, 427). Because psychopaths are incapable of understanding moral reasons, it is unfair to hold them morally responsible, but there are forms of accountability and reactive address outside the moral sphere that may remain appropriate to direct at them.

Shame, in particular, appears to be a normatively significant reactive attitude to which psychopaths have access (Ramirez 2013, 232). Shame grounds a family of retributive forms of accountability and has been thought to serve as another way to hold psychopaths accountable even if it can be established that psychopaths are not capable of feeling or understanding moral reactive attitudes. If psychopaths are susceptible to shame, then they can fairly be held accountable on shame-based grounds.

It is fair to hold psychopaths accountable in these non-moral (shame-based) ways if they are able to feel the emotion being levied against them and can express a quality of will to which these attitudes are sensitive. More importantly, although psychopaths do not understand the distinctiveness and weight of moral reasons, their judgments can still express condemnable attitudes about those reasons. Greenspan notes that all of us have “blind spots” about certain narrow classes of reasons, and we stand to those reasons in the same relation that psychopaths stand to moral reasons; these blind spots do not excuse us from accountability (Greenspan 2003, 435).

b. Body Integrity Identity Disorder and Gender Dysphoria

Conceptions of mental illness, and of mentally impaired agency, figure prominently in questions about the best way to treat a disorder. In 1997, Robert Smith, a surgeon at the Falkirk and District Royal Infirmary in Scotland, amputated one of his patient’s limbs at the patient’s request. The limb itself was healthy; there was no medical justification for the amputation. In 1999, Smith amputated another patient’s healthy limb, again at the request of the patient, and was scheduled to perform a third amputation (on a third patient) before the hospital’s board of directors prohibited him from amputating any more healthy limbs. Smith’s patients came to him with a set of symptoms that do not correspond to any particular disorder in the DSM. Smith’s patients were not under the delusion that their limbs did not belong to them; they did not see their limbs as disfigured or disgusting. Instead, his patients claimed that, from a young age, they had not thought of the limb as part of their authentic selves. They were, the patients claimed, never meant to be born with the limb and were seeking surgery to allow their inner representation of their bodily identity to match their external body presentation. The only way to do this was to amputate the healthy limb.

Patients who seek to radically alter their bodies via repeated surgeries or extreme dieting are ordinarily (barring other symptoms) diagnosed with Body Dysmorphic Disorder (BDD). BDD, however, requires that patients seek to modify their bodies because they find a specific part of their body disgusting, revolting, or flawed. Patients with BDD also tend to engage in obsessive behaviors related to the body-part’s appearance (grooming, ‘mirror checking,’ and so forth) (APA 2013, 248). Smith’s patients, although they claimed to experience significant dysphoria because of their condition, did not find their limbs revolting or disfigured. They identified themselves as having a different condition: Body Integrity Identity Disorder. Like psychopathy, BIID is not a disorder cataloged in the DSM. Although BIID is not a DSM disorder, the APA does recognize that it appears distinct from BDD: “Body Integrity Identity disorder (apotemnophilia)...involves a desire to have a limb amputated to correct an experience of mismatch between a person's sense of body identity and his or her actual anatomy. However, the concern does not focus on the limb's appearance, as it would be in body dysmorphic disorder” (APA 2013, 246-247). Vilayanur Ramachandran and Paul McGeoch claim to have discovered several of the neural correlates of BIID, and these appear distinct from those of BDD; specifically, they claim that the disorder arises in part from a dysfunction of the right parietal lobe (Ramachandran and McGeoch 2007, 252).

Apart from the conceptual question of whether BDD and BIID are manifestations of the same underlying mental illness, individuals who claim to suffer from BIID raise significant ethical questions about the nature of mental illness, autonomy, and surgical treatments for dysphoria. Patients with BIID request that surgeons recognize and grant their request for surgical intervention to cure psychological suffering; the purpose of these amputations, patients claim, is to correct what they see as a mismatch between their inner and outer selves. Although the case of BIID has not received widespread philosophical attention, several different approaches have been advanced with regard to BIID patients’ requests for amputation. Some philosophers have raised doubts about the ability of BIID patients to act on genuinely autonomous decisions (Mueller 2009, 35). One worry about challenging the autonomy of otherwise rational agents is that, in other domains, we appear to allow individuals significant freedom to modify their bodies for many reasons (aesthetic, political, self-expression, and so forth) without thereby questioning their status as autonomous agents (Bridy 2004). The right to bodily autonomy is typically construed as one of the guiding values in biomedical decision-making. Furthermore, BIID sufferers who have their requests for amputation denied often resort to self-harm. Many will harm their limbs to the point where amputation becomes medically necessary. Some have argued that it is morally permissible to grant BIID requests for amputation on the basis of harm-prevention (Bayne and Levy 2005, 78). Others have expressed concern over the use of surgical treatments for mental illnesses (if it is granted that BIID is a mental illness), given that the surgery persons with BIID are requesting involves the permanent removal of a capacity typically thought to be important (Johnston and Elliott 2002, 430).

Given that BIID patients appear to have a locatable dysfunction in the parietal lobe (an area where internal body representations are thought to be located), some philosophers have argued that surgical treatments are unjustified if a non-surgical solution can be found. That is, if BIID results from the suffering caused by a mismatch between a patient’s internal representation of herself and her outer presentation, and if it is possible to change the inner representation and thereby avoid surgery, then we ought to do so (Johnston and Elliott 2002, 432). This approach, however, forces us to confront philosophical responses to other conditions that involve mismatches between a person’s inner representation of their body and their external bodily presentation. In particular, patients with BIID argue that their condition is analogous to the suffering faced by those with gender dysphoria. These individuals often seek sex reassignment surgery to alleviate their perceived embodiment mismatch (Bayne and Levy 2005, 80). Individuals who are suffering as a result of their assigned sex/gender and who exhibit a strong desire to alter their sex and gender characteristics can be diagnosed with Gender Dysphoria (APA 2013, 451-459). Unlike other patients desiring surgical body modification (for self-expression, to meet unrealistic gender ideals, and so forth), individuals with BIID or Gender Dysphoria report that their desires for surgical alteration of their body presentation originate at a young age. Both groups seek to have their requests for surgical alteration respected by those around them as a recognition of their autonomy and of the value that gender (or bodily integrity) plays in the formation of an authentic self (Lombardi 2001, 870).

The discussion of BIID, its status as a mental disorder, and the ethics of granting a person’s request for amputation are all relatively new and hotly debated topics within the Philosophy of Mental Illness and bioethics generally. This debate is, however, connected to larger, better-established questions concerning patient autonomy and what it means for an agent to make autonomous choices. At the moment there is no clear consensus on the status of BIID as a disorder, nor a received view on how to treat BIID requests for amputation.

5. References and Further Reading

  • American Psychiatric Association. (1952). Diagnostic and Statistical Manual of Mental Disorders. Washington, DC.
  • American Psychiatric Association. (1973). “Homosexuality and Sexual Orientation Disturbance: Proposed Change in DSM-II, 6th Printing, page 44 POSITION STATEMENT (RETIRED).” Arlington VA.
  • American Psychiatric Association. (2013). Diagnostic and Statistical Manual of Mental Disorders, 5th ed. Washington, DC.
  • Anton, Audrey L. (2013) “The Virtue of Psychopathy: How to Appreciate the Neurodiversity of Psychopaths and Sociopaths Without Becoming A Victim.” Ethics and Neurodiversity Cambridge Scholars Publishing: Newcastle upon Tyne: 111-130.
  • Babiak P., Neumann C., and Hare R.D. (2010). “Corporate Psychopathy: Talking the Walk.” Behavioral Sciences and the Law 28(2): 174-193.
  • Barnbaum, Deborah. (2013). “The Neurodiverse and the Neurotypical: Still Talking Across an Ethical Divide.” Ethics and Neurodiversity Cambridge Scholars Publishing: Newcastle upon Tyne: 131-145.
  • Bayer, Ronald and Robert L. Spitzer. (1982). “Edited correspondence on the status of homosexuality in DSM-III.” Journal of the History of the Behavioral Sciences Vol. 18(1): 32–52.
  • Bayne, Tim and Neil Levy. (2005). “Amputees By Choice: Body Integrity Identity Disorder and the Ethics of Amputation.” Journal of Applied Philosophy 22(1): 75-86.
  • Bentall, Richard. (1990). “The Syndromes and Symptoms of Psychosis: Or why you can’t play ‘twenty questions’ with the concept of schizophrenia and hope to win.” Reconstructing Schizophrenia Routledge: London.
  • Bentall, Richard. (1992). “A Proposal to Classify Happiness as A Mental Disorder.” Journal of Medical Ethics 18(2): 94-98.
  • Bentall, Richard. (2004). “Sideshow? Schizophrenia construed by Szasz and the neo-Kraepelinians.” In J.A. Schaler (Ed.) Szasz under Fire: The Psychiatric Abolitionist Faces His Critics. Peru, Illinois: Open Court.
  • Boorse, C. (1975). “On the distinction between disease and illness.” Philosophy and Public Affairs, 5: 49-68.
  • Boorse, C. (1997). “A rebuttal on health.” In J.M. Humber and R.F. Almeder (eds.), What Is Disease? Totowa N.J.: Humana Press: 1-134.
  • Boyd, Richard. (1991). “Realism, antifoundationalism, and the enthusiasm for natural kinds.” Philosophical Studies 61: 127-148.
  • Broome, Matthew and Lisa Bortolotti. (2009). “Mental Illness as Mental: In Defense of Psychological Realism.” Humana Mente 11: 25-44.
  • Bridy, A. (2004). “Confounding extremities: Surgery at the medico-ethical limits of self-modification.” Journal of Law, Medicine and Ethics 32(1): 148–158.
  • Brink, David and Dana Nelkin. (2013). “Fairness and the Architecture of Responsibility.” In David Shoemaker (Ed). Oxford Studies in Agency and Responsibility Volume 1. Oxford University Press.
  • Brülde, B., and F. Radovic. (2006). “What is mental about mental disorder?” Philosophy, Psychiatry, & Psychology 13(2): 99–116.
  • Charland, Louis. (2004a). “Character Moral Treatment and Personality Disorders.” Philosophy of Psychiatry. Oxford University Press: 64-77.
  • Charland, Louis. (2004b). “A Madness for Identity: Psychiatric Labels, Consumer Autonomy, and the Perils of the Internet.” Philosophy, Psychiatry, and Psychology 11(4): 335-349.
  • Chomsky, Noam. (1988). Language and Problems of Knowledge: The Managua Lectures. Cambridge, Mass. / London, England: MIT Press (Current Studies in Linguistics Series 16).
  • Church, Jennifer. (2004). “Social Constructionist Models” The Philosophy of Psychiatry Oxford University Press: 393-406.
  • Churchland, P. M., (1981). “Eliminative Materialism and the Propositional Attitudes,” Journal of Philosophy 78: 67–90.
  • Cresswell, Mark. (2008). “Szasz and His Interlocutors: Reconsidering Thomas Szasz’s “Myth of Mental Illness” Thesis” Journal for the Theory of Social Behavior 38(1): 23-44.
  • Cullen, Dave. (2004). “The Depressive and the Psychopath: At last we know why the Columbine killers did it.” Slate. Web. April 2004.
  • Daniels, Norman. (2007). Just Health: Meeting Health Needs Fairly. Cambridge University Press: NY.
  • Dolan, M.C., Fullam, R.S. (2010). “Moral/conventional Transgression Distinction and Psychopathy in Conduct Disordered Adolescent Offenders.” Personality and Individual Differences Vol. 49: 995–1000.
  • Edwards, Craig. (2009). “Ethical Decisions in the Classification of Mental Conditions as Mental Illness.” Philosophy, Psychiatry, and Psychology 16(1): 73-90.
  • Elliott, Carl. (2004). “Mental Illness and Its Limits” The Philosophy of Psychiatry Oxford University Press: 426-436.
  • Fenton, Andrew and Tim Krahn. (2007). “Autism, Neurodiversity and Equality Beyond the 'Normal'” Journal of Ethics in Mental Health 2(2): 1-6.
  • Fischer J.M., Ravizza M. (1998). Responsibility and Control: A Theory of Moral Responsibility. New York: Cambridge University Press.
  • Fischer J.M., Tognazzini N.A. (2011). “The Physiognomy of Responsibility.” Philosophy and Phenomenological Research 82(2): 381-417.
  • Freud, Sigmund. (1905/1997). Dora: An Analysis of a Case of Hysteria. Simon and Schuster: NY.
  • Freud, Sigmund. (1915-1917 / 1977). Introductory Lectures on Psychoanalysis. W.W. Norton and Company: NY.
  • Friedan, Betty. (1963). The Feminine Mystique. W.W. Norton and Company: NY.
  • Foucault, Michel. (1961/1988). Madness and Civilization: A History of Insanity in the Age of Reason. Random House: NY.
  • Fulford, K.W.M. (2001). “What is (mental) disease?: An open letter to Christopher Boorse.” Journal of Medical Ethics 27(2): 80–85.
  • Fulford, K.W.M. (2004). “Values Based Medicine: Thomas Szasz’s Legacy to Twenty-First Century Psychiatry.” In J.A. Schaler (Ed.) Szasz under Fire: The Psychiatric Abolitionist Faces His Critics. Peru, Illinois: Open Court.
  • Ghaemi, Nassir. (2003). The Concepts of Psychiatry Johns Hopkins University Press.
  • Ghaemi, Nassir. (2011). A First Rate Madness. Penguin Press: NY.
  • Glannon, Walter. (2007). “Neurodiversity” Journal of Ethics in Mental Health 2(2): 1-5.
  • Goldman, Alan. (2002). “Plain Sex.” In Alan Soble (ed.), The Philosophy of Sex: Contemporary Readings, 4th ed. Lanham, MD: Rowman and Littlefield: 39-55.
  • Graham, George. (2010). The Disordered Mind: An Introduction to the Philosophy of Mind and Mental Illness. Routledge: NY.
  • Graham, George. (2013a). The Disordered Mind: An Introduction to the Philosophy of Mind and Mental Illness. Routledge: NY.
  • Graham, George. (2013b). “Ordering Disorder: Mental Disorder, Brain Disorder, and Therapeutic Intervention” in K. Fulford (ed) Oxford Handbook of Philosophy and Psychiatry. Oxford UP.
  • Graham, George. (2014). “Being a Mental Disorder” in Harold Kincaid & Jacqueline A. Sullivan (eds.) Classifying Psychopathology: Mental Kinds and Natural Kinds: 123-143.
  • Greenspan, Patricia. (2003). “Responsible Psychopaths” Philosophical Psychology 16(3): 417-429.
  • Grob, G.N. (1991). “Origins of DSM-I: a study in appearance and reality.” American Journal of Psychiatry 148(4): 421-431.
  • Gurley, Jessica. (2009). “A History of Changes to the Criminal Personality in the DSM” History of Psychology 12(4): 285-304.
  • Hacking, Ian. (1995). Rewriting the Soul: Multiple Personality and the Science of Memory. Princeton, NJ: Princeton University.
  • Hacking, Ian. (1999). The Social Construction of What? Cambridge: Harvard University Press.
  • Hansen, Jennifer. (2004). “Affectivity: Depression and Mania” Philosophy of Psychiatry Oxford University Press: 36-53.
  • Hare, R.D., Clark D., Grann M., Thornton D. (2000). “Psychopathy and the Predictive Validity of the PCL-R: An International Perspective.” Behavioral Sciences and the Law 18(5): 623-45.
  • Haslam, Nick. (2014). “Natural Kinds in Psychiatry: Conceptually Implausible, Empirically Questionable, and Stigmatizing” in Harold Kincaid & Jacqueline A. Sullivan (eds.) Classifying Psychopathology: Mental Kinds and Natural Kinds: 11-28.
  • Herrera, C.D. (2013). “What’s the Difference?” Ethics and Neurodiversity Cambridge Scholars Publishing: Newcastle upon Tyne: 1-17.
  • Horwitz, Allan V. (2001). Creating Mental Illness. University of Chicago Press.
  • Johnston, Josephine and Carl Elliott. (2002). “Healthy limb amputation: ethical and legal aspects” Clinical Medicine 2(5): 431-435.
  • Kandel, Eric. (1998). “A new intellectual framework for psychiatry.” American Journal of Psychiatry 155: 457-469.
  • Kendell, R.E. (2004). “The Myth of Mental Illness.” In J.A. Schaler (Ed.) Szasz under Fire: The Psychiatric Abolitionist Faces His Critics. Peru, Illinois: Open Court.
  • Kraepelin, Emile. (1896a) Psychiatrie (8th edn). Reprinted (1971) in part as Dementia Praecox and Paraphrenia (trans. R. M. Barclay). Huntington, NY: Robert E. Kreiger.
  • Kraepelin, Emile. (1896b) Psychiatrie (8th edn). Reprinted (1976) in parts as Manic—Depressive Insanity and Paranoia (trans. R. M. Barclay). Huntington, NY: Robert E. Kreiger.
  • Levy, Neil. (2007). “The Responsibility of the Psychopath Revisited” Philosophy, Psychiatry, and Psychology: 129-138.
  • Lilienfeld, S.O. and L. Marino. (1995). “Mental disorder as a Roschian concept: a critique of Wakefield's "harmful dysfunction" analysis.” Journal of Abnormal Psychology 104(3): 411-20.
  • Lombardi, E. (2001). “Enhancing Transgender Care.” American Journal of Public Health 91(6): 869-872.
  • Lysiak, M. and Bill Hutchinson. (2013). “Emails show history of illness in Adam Lanza's family, mother had worries about gruesome images.” New York Daily News. Web. April 2013.
  • Maddux, James. (2001). “Stopping the Madness.” The Handbook of Positive Psychology: 13-25.
  • Mueller, S. (2009). “Body integrity identity disorder (BIID) – Is the amputation of healthy limbs ethically justified?” American Journal of Bioethics 9: 36–43.
  • Murphy, Dominic. (2014). “Natural Kinds in Folk Psychology and in Psychiatry.” in Harold Kincaid & Jacqueline A. Sullivan (eds.) Classifying Psychopathology: Mental Kinds and Natural Kinds: 105-122.
  • Nadesan, M.H. (2005). Constructing Autism. Milton Park, Oxfordshire: Routledge.
  • Nichols, Shaun and Manuel Vargas. (2007). “How to Be Fair to Psychopaths.” Philosophy, Psychiatry, and Psychology 14(2): 153-155.
  • Phillips, Katharine, et al. (2010). “Body Dysmorphic Disorder: Some Key Issues for DSM-V.” Depression and Anxiety 27: 573-591.
  • Pickard, Hannah. (2009). “Mental Illness is Indeed A Myth” Psychiatry as Cognitive Neuroscience: 83-101.
  • Pickard, Hannah. (2011). “What is Personality Disorder?” Philosophy, Psychiatry, and Psychology Vol. 18 (3): 181-184.
  • Pickering, Neil. (2003). “The Likeness Argument and the Reality of Mental Illness” Philosophy, Psychiatry, and Psychology: 243-254.
  • Ramachandran, V., and McGeoch, P. (2007). “Can vestibular caloric stimulation be used to treat apotemnophilia?” Medical Hypotheses 8: 250–252.
  • Ramirez, Erick. (2013). “Psychopathy, Moral Reasons, and Responsibility.” Ethics and Neurodiversity Cambridge Scholars Publishing: Newcastle upon Tyne: 217-237.
  • Ramsey, W., Stich, S. and Garon, J., (1990). “Connectionism, Eliminativism and the Future of Folk Psychology,” Philosophical Perspectives 4: 499–533.
  • Robertson, Erik D. and Lennart Mucke. (2006). “100 Years and Counting: Prospects for Defeating Alzheimer's Disease.” Science: Vol. 314 no. 5800 pp. 781-784.
  • Rosenhan, David. (1973). “On Being Sane in Insane Places.” Science: 250-258.
  • Sam, David and Virginia Moreira. (2012). “Revisiting the Mutual Embeddedness of Culture and Mental Illness” Online Readings in Psychology and Culture.
  • Soble, Alan. (2004) “Desire Paraphilia and Distress in DSM IV.” Philosophy of Psychiatry Oxford University Press: NY: 54-63.
  • Strawson, P.F. (1962). “Freedom and Resentment.” Proceedings of the British Academy 48: 1-25.
  • Szasz, Thomas. (1961/1984). The Myth of Mental Illness. Harper Perennial.
  • Szasz, Thomas. (1979). Schizophrenia: The Sacred Symbol of Psychiatry. Oxford: Oxford University Press.
  • Talbert, Matthew. (2012) “Moral Competence, Moral Blame, and Protest.” Journal of Ethics 16(1): 89-109.
  • Vargas, Manuel and Shaun Nichols. (2007). “Psychopaths and Moral Knowledge.” Philosophy, Psychiatry, and Psychology: 157-162.
  • Von Eckardt, Barbara and Jeffrey Poland. (2005). “Mechanism and Explanation in Cognitive Neuroscience” Proceedings of the Philosophy of Science Association: 972-984.
  • Wakefield, Jerome. (1992). “The Concept of Mental Disorder: On the Boundary Between Biological Facts and Social Values” American Psychologist: 373-388.
  • Wakefield, Jerome. (1999). “Evolutionary versus prototype analyses of the concept of disorder.” Journal of Abnormal Psychology 108: 374-399.
  • Wakefield, Jerome. (2006). “What Makes A Mental Disorder Mental?” Philosophy, Psychiatry, & Psychology 13(2): 123-131.
  • Wallace, R.J. (1994). Responsibility and the Moral Sentiments. Cambridge, Mass: Harvard University Press.
  • Watson, Gary. (1996). “Two Faces of Responsibility.” Philosophical Topics 24(2): 227-248.
  • Woolfolk, Robert. (1999). “Malfunction and Mental Illness” The Monist 82(4): 658-670.
  • Zachar, Peter. (2014). A Metaphysics of Psychopathology, MIT Press: Cambridge Massachusetts.


Author Information

Erick Ramirez
Email: ejramirez@scu.edu
Santa Clara University
U. S. A.

Intentionality

If I think about a piano, something in my thought picks out a piano. If I talk about cigars, something in my speech refers to cigars. This feature of thoughts and words, whereby they pick out, refer to, or are about things, is intentionality. In a word, intentionality is aboutness.

Many mental states exhibit intentionality. If I believe that the weather is rainy today, this belief of mine is about today’s weather—that it is rainy. Desires are similarly directed at, or about things: if I desire a mosquito to buzz off, my desire is directed at the mosquito, and the possibility that it depart. Imaginings seem to be directed at particular imaginary scenarios, while regrets are directed at events or objects in the past, as are memories. And perceptions seem to be, similarly, directed at or about the objects we perceptually encounter in our environment. We call mental states that are directed at things in this way ‘intentional states’.

The major role played by intentionality in affairs of the mind led Brentano (1874) to regard intentionality as “the mark of the mental”: a necessary and sufficient condition for mentality. But some non-mental phenomena seem to display intentionality too—pictures, signposts, and words, for example. Nevertheless, the intentionality of these phenomena seems to be derived from the intentionality of the mind that produces them. A sound is only a word if it has been conferred with meaning by the intentions of a speaker or perhaps a community of speakers; while a painting, however abstract, seems only to have a subject matter insofar as its painter intends it to. Whether or not all mental phenomena are intentional, then, it certainly seems to be the case that all intentional phenomena are mental in origin.

The root of the word ‘intentionality’ reflects the notion that it expresses, deriving from the Latin intentio, meaning ‘directed at’. Intentionality has been studied since antiquity and has generated numerous debates that can be broadly categorized into three areas that are discussed in the following sections:

Section 1 concerns the intentional relation: the relation between intentional states and their objects. Here we aim to answer the question “What determines why any given intentional state is about one thing and not another?” For example, what makes a thought about a sheep about that sheep? Does the thought look like the sheep? Or does it perhaps have a causal origin in an encounter with the sheep?

Section 2 explores the nature of the objects of intentional states. Are these objects independent of us, or somehow constituted by the nature of our minds? Do they have to exist, or can we have thoughts about non-existent objects like The Grinch?

Section 3 explores the nature of intentional states themselves. For example, are intentional states essentially rational states, such that only rational creatures can have them? Or might intentional states be necessarily conscious states? And is it possible to give a naturalized theory of intentionality that appeals only to facts describable in the natural sciences?

This article explores these questions, and the dominant theories that have been designed to answer them.

Table of Contents

  1. The Intentional Relation
    1. Formal Theories of Intentionality
    2. Problems for Forms, and the Causal Alternative
  2. Intentional Objects
    1. Intentional Inexistence
    2. Thinking About Things that Do Not Exist
    3. Direct versus Indirect Intentionality
  3. Intentional States
    1. Intentionality and Reason
    2. Intentionality and Intensionality
    3. Intentionality and Consciousness
    4. Naturalizing Intentionality
  4. References and Further Reading

1. The Intentional Relation

If I am thinking about horses, what is it about my thought that makes it about horses and not, say, sheep? That is, in what relation do intentional states stand to their objects? This is the question “What is the intentional relation?” There have been many answers proposed to this question, and a broad division can be discerned in the history of philosophy between what can be called ‘formal’ and ‘causal’ theories.

a. Formal Theories of Intentionality

One answer to the question is that mental states refer to the things they do because of the intrinsic features of those mental states. The earliest version of this theory is based on Plato’s theory of forms. Plato held that apart from the matter (hyle) they are composed of, all things have another aspect, which he called their ‘form’ (morphê). All horses, for example, although individually made of different material, have something in common – and this is their form. The exact meaning of Plato’s ‘form’ is a controversial issue. On one reading, two things have the same form or are ‘conformal’ if they share the same shape; on a broader interpretation, two things are conformal if there is a one-to-one mapping between the essential features of the two—as there is between a building and an architect’s blueprint for the building. Plato held that when we think about an object, we have the form of the object in our mind, so that our thought literally shares the form of the object. Aristotle further developed this theory, arguing that in perception (sensu) the form of an object perceived is transmitted from the object to the mind of the perceiver. In the Middle Ages Thomas Aquinas defended and elaborated Aristotle’s theory, and in the Early Modern period the theory finds an heir in the work of the ‘British Empiricists’ Locke and Hume. Locke and Hume argued that ‘ideas’, which they considered to be the fundamental components of thought, refer to their objects because they are images of those objects, impressed on the mind through the action of the perceptual faculties.

Although images or shapes may play a role in thought, it is generally accepted that they cannot provide a complete account of intentionality. The relation between an image and its object is a relation of resemblance. But this presents a difficulty that was first raised against the formal theory by Ockham in the Middle Ages (King, 2007). The problem is that the relation of resemblance is ambiguous in a way that the intentional relation cannot be. An image of a man walking up a hill also resembles a man walking backwards down a hill (Wittgenstein, 1953), whereas a thought about a man walking up a hill is not also a thought about a man walking backwards down a hill. Similarly, while an image of Mahatma Gandhi resembles Mahatma Gandhi, it also resembles everyone who resembles Mahatma Gandhi (Goodman, 1976). Thoughts about Mahatma Gandhi on the other hand, are not thoughts about anyone who looks like Mahatma Gandhi.

An alternative formal model that seems to avoid this problem appeals to descriptions (Frege 1892, Russell 1912). This view holds that if I am thinking about something, then I must have in mind a description that uniquely identifies that thing. Descriptions seem to avoid the problem of ambiguity faced by images. There may be many people who resemble Mahatma Gandhi, but probably only one person that satisfies the description ‘the Indian Nationalist leader assassinated on the 30th of January 1948’. Since the ‘descriptivist’ account takes concepts to refer to their objects by describing them, so that the features of a concept somehow correspond to the features of its object, the descriptivist theory is arguably also a formal theory of intentionality.

In addition to answering the question why an intentional state refers to one object and not another, the formal approach is also helpful in explaining how thinkers understand what it is they are thinking about. One thing that we seem to be able to do when we have mental states that are directed at particular objects is to reflect upon different aspects of those objects, reason about them, describe them, and even make reliable predictions about them. For example, if I understand what horses are, and what sheep are, I ought to be in a position to tell you about their differences, and perhaps make good predictions about their behavior. If intentional states are conformal with their objects, we have some explanation for how such understanding is possible, since the form of the object the intentional state is directed at should be available to me if I reflect upon my own thoughts.

And we have another reason still for expecting that thoughts have a formal component. Frege (1892) observed that we can have multiple thoughts about the same thing, without realizing that we are thinking of the same thing in each case. The Ancient Greeks believed that Hesperus and Phosphorus (two Greek names for Venus) were two different stars in the sky, one of which appeared in the morning, while the other appeared in the evening. As a result they believed that Hesperus rises in the evening while simultaneously believing that Phosphorus does not. Of course Hesperus and Phosphorus, as it turns out, are the same object – the planet Venus, which rises both in the morning and in the evening. And so the Ancient Greeks had two contradictory beliefs about Venus, without realizing that both beliefs were about the same thing. The upshot is that it is possible for us to have distinct concepts that pick out the same thing without our knowing.

Frege proposed as an explanation that our concepts must vary in more ways than in what they refer to. They also vary, he proposed, in what he called their ‘sense’, so that two concepts could refer to the same object while differing in sense. He described the sense as the ‘mode of presentation’ of the object that a concept picks out. It would appear that by ‘mode of presentation’ he meant something like a description of the object. So, while the reference of someone’s hesperus and phosphorus concepts might be the same, the sense of hesperus might be ‘the star that appears in the evening’, while the sense of phosphorus could be ‘the star that appears in the morning’. Since it is perfectly rational to suppose that the object that satisfies the description ‘the star that appears in the morning’ might not be the same as the object that satisfies the description ‘the star that appears in the evening’, we now have an explanation for how one could have two concepts that pick out the same thing without knowing.

Supposing that the intentional relation is one of conformality, then, allows us to explain (i) why a thought refers to what it does, (ii) how we can have introspective knowledge of the things we think about, and (iii) how two or more of our concepts could pick out the same thing without our knowing. But there are problems facing the formal approach, which have led many to look for alternatives.

b. Problems for Forms, and the Causal Alternative

The formal theory of intentionality faces two major objections.

The first objection, sometimes called ‘the problem of ignorance and error’, is that the descriptions we have at our disposal of the objects we think about might be insufficient to uniquely identify those objects. Putnam (1975) articulated this objection using a now famous thought-experiment. Suppose that you are thinking of water. If the descriptive theory is right, you must have at your disposal a description that uniquely distinguishes water from all other things. For most of us – chemists aside – such a description will amount to something like ‘the clear drinkable liquid in the rivers, lakes, and taps around here’. But suppose, suggests Putnam, that there is another planet far away from here, which looks to its inhabitants just like Earth looks to us. On that planet, let’s call it Twin-Earth, there is a clear drinkable liquid that the inhabitants of the planet refer to (coincidentally) as ‘water’, but that is in fact a different chemical substance; rather than H2O, it has a different chemical composition—let’s call it XYZ. If this were true, we should expect that the description most people here on Earth are in a position to give of what we call ‘water’ will be just the same as the description the inhabitants of the other planet give of what they call ‘water’. But, by hypothesis, when we think about water we are thinking of the substance on our planet, H2O, and when they think of what they call ‘water’, they are thinking of a different thing—XYZ. As a result, it would seem that descriptions are not sufficient to explain what we are thinking of, since a member of either of these groups will give the same description for what they call ‘water’, even though their thoughts pick out different substances. This is the ‘ignorance’ part of the problem—we often don’t have enough descriptive knowledge of the things we think about to uniquely identify those things. The ‘error’ part is that it often turns out to be the case that our beliefs about the things we think about are false. For example, many people believe tomatoes are vegetables, not fruit; as a result, the description they will give of ‘tomato’ will include the claim that tomatoes are vegetables. If these people are indeed thinking of tomatoes, so the argument goes, it cannot be as a result of their being in possession of a description that picks out tomatoes, since no tomato truly falls under the description ‘vegetable’.

The second difficulty for the formal accounts, specifically directed at the descriptive account, is that descriptions do not identify the essential nature of the things they pick out, whereas many words and concepts do (Searle 1958, Kripke 1980). The description someone might offer of Hesperus could be ‘the brightest celestial object in the evening sky’. But it is perfectly coherent to suppose that Hesperus could have existed without having been visible in the evening. It could have drifted into a different orbital pattern, or have been occluded by a belt of asteroids, and therefore never have been visible in the evening. This description does not, therefore, capture an essential feature of Hesperus. The term ‘Hesperus’ in our thoughts, on the other hand, does pick out an essential feature of Hesperus—being Hesperus. That this is an important difference can be seen when we realize that concepts and descriptions seem to behave differently in thoughts about counterfactual possibilities—or, alternative ways the world could have turned out. For example, the thought ‘Hesperus could have failed to have been the brightest celestial object in the evening sky’, is clearly true—this could have been the case had it drifted into a different orbital pattern. But the thought ‘Hesperus could have failed to have been Hesperus’, is not true: there is no way the world could have turned out such that Hesperus could have failed to have been itself. The name ‘Hesperus’ therefore identifies the essence of Hesperus—what it couldn’t fail to be; but the description does not. So now we have a further reason for thinking that concepts are not cognitively equivalent to descriptions—since they behave differently in thoughts about counterfactual possibility.

As an alternative to descriptions, images, or forms of any sort, Putnam (1975) and Kripke (1980) propose a ‘causal’ model of intentionality. On this alternative model, our concepts do not have intrinsic formal features that determine what they refer to. Rather, a concept picks out the thing that originally caused it to occur in the mind of a thinker, or the thing it is causally related to in the mind-independent world. On this view, if I have a concept that picks out horses, this concept must have initially been caused to occur in me by a physical encounter with horses. If I have a concept that picks out water, the concept must have been caused to occur in me by a causal interaction with water. And if I have a concept that picks out Hesperus, this concept must have a causal origin in my apprehension of Hesperus, perhaps by seeing it in the sky.

We can see how the causal theory can be used to address the two major objections to the formal theory. Firstly, on the causal account, the ‘water’ thoughts of those on Earth can be distinguished from the ‘water’ thoughts of those on Twin-Earth: the substance Earthlings are causally interacting with when they have ‘water’ thoughts is H2O, while the substance that Twin-Earthlings are causally interacting with is XYZ—explaining why the thoughts of each thinker refer to different things, even though the descriptions they might offer of those things are identical. Similarly, I can causally interact with water, or tomatoes, even if I have false beliefs about these things, so the causal model allows that the descriptions I might offer of the things I think about can be false. The causal model therefore seems to handle the problem of ignorance and error. Secondly, if we reject that my hesperus concept is cognitively equivalent to a description, the worry that the description fails to identify the essence of the object simply doesn’t arise. The causal model therefore also seems to handle the problem concerning reference to essential properties (sometimes called the ‘modal problem’).

However, the causal model has trouble explaining some of the things the formal model was designed to explain (see last paragraph of Section 1a above). Firstly, the causal model has trouble explaining (ii), how we can reflect on the objects of our thoughts, and say something about them. If concepts have no formal component that somehow describes their objects, this becomes mysterious. The causal model also fails to explain (iii), how we can have multiple thoughts about the same thing without realizing it. While formal models can explain this by holding that different concepts can be cognitively equivalent to different descriptions of the same thing, the causal model cannot. Since the thoughts of an Ancient Greek about hesperus and the thoughts of an Ancient Greek about phosphorus have a causal origin in the same object, namely Venus, the causal relation that stands between these concepts and their object is identical in each case; as a result, on the causal model there ought to be no difference between the concepts.

The formal and causal models therefore each provide good explanations for one set of phenomena, but run into trouble in explaining another.

Perhaps the best account of the intentional relation will be one that draws on aspects of both theories—something that so-called ‘two-dimensional’ accounts of intentionality aim to do (Chalmers 1996, 2006, Lewis 1997, Jackson 1998). On this approach, although it is necessary to know what environment a thinker is causally connected to in order to know what her thoughts refer to, this need not rule out that her concepts also have a formal component. The trick is to find a formal component that does not run into the problems raised by the causal theorist.

To deal with the problem of error, for example, it has been proposed that the formal component of a concept might be a description of the appearance of the object the concept refers to (Searle 1983). Although I can be wrong that the things my tomato concept picks out are vegetables, it would seem that I cannot be mistaken that they are apparently red shiny edible objects—since I cannot be wrong about how the world appears to me. Such content would therefore avoid the problem of error—these descriptions couldn’t turn out to be false.

To deal with the problem of ignorance, where my descriptive knowledge fails to uniquely determine which thing I am thinking of, it has been proposed to write the causal origin of my experience into the formal component. So, my concept water might be cognitively equivalent not just to ‘the apparently clear drinkable liquid in the lakes and rivers’, which fails to distinguish the water on Earth from the water on Twin-Earth, but to ‘the stuff causing my current experiences of an apparently clear drinkable liquid in the lakes and rivers’ (Searle 1983). This description, it would seem, does indeed distinguish water from Twin-Earth water, since only water is the causal source of my experiences (because I am on Earth, not Twin-Earth).

And to get descriptions to behave the same way as concepts in thoughts about counterfactual possibility, it has been proposed to include the specification ‘actual’ in the descriptive content of a concept (Davies and Humberstone 1980). Although it is true that ‘the brightest celestial object in the evening sky could have failed to have been Hesperus’, it seems not to be true that ‘the actual thing that is the brightest celestial object in the evening sky could have failed to have been Hesperus’. By including ‘actual’ in the description, we can therefore get the description to behave in the same way as the concept in counterfactual thoughts.

In sum, the descriptive content of a concept like water would be something like ‘the actual stuff causing my experience of an apparently clear drinkable liquid in the lakes and rivers’. Such content, it is hoped, can account for the phenomena formal models explain without running into the difficulties faced by earlier formal accounts. Whether these modifications really succeed in handling the problems raised by the causal theorist is, however, a topic of ongoing controversy (see Soames 2001, 2005 and Recanati 2013 for recent defenses of the causal approach; see Chalmers 2006 for a defense of the two-dimensional approach, and an advanced overview of the debate).
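
The effect of adding ‘actual’ can be displayed in the same schematic notation (again, the regimentation is ours):

◇(the brightest celestial object in the evening sky ≠ Hesperus): true
◇(the actual brightest celestial object in the evening sky ≠ Hesperus): false

Prefixing ‘actual’ anchors the description to whatever satisfies it in the actual world, so the rigidified description tracks the same object across counterfactual scenarios, just as the name does.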

2. Intentional Objects

Having seen some of the layout of the debate about what determines the object of any intentional state, we can now consider issues that arise when we consider the objects themselves. Do they all have something in common that makes them appropriate as objects of intentional states? Might there be non-existent intentional objects? Do our thoughts connect directly with these objects or only indirectly, via our senses?

a. Intentional Inexistence

Franz Brentano has been mentioned already in this article, in part because his work set the tone for much of the debate over intentionality in the 20th century. One of his claims was that the objects of intentional states have a special type of existence, which he called ‘intentional inexistence’. Whether he meant by that a special sort of existence ‘in’ the intentional state, or that intentional objects do not exist, is debated. Supposing that intentionality is always directed at objects that do not exist, however, is particularly problematic, and we’ll look at the difficulties it raises in the next section. So first I’ll explore the possibility that Brentano supposed that intentional objects have a special sort of existence as objects of intentional states.

This idea had a particularly strong influence on the work of Edmund Husserl, who founded a branch of philosophy of mind known as phenomenology, which he conceived of as the study of experience. Husserl emphasizes that the objects of thought have a particular character insofar as they are objects of thought: they have to be related to other concepts and ideas in the mind of the thinker in a coherent way, a feature he refers to as their ‘noematic’ character. If our ideas of the objects we encounter in experience conflict too severely with the constraints imposed by our understanding of how the world works, those ideas will disintegrate (something he calls ‘noematic explosion’). Visual illusions present a good example of this. If we are presented with an object that appears to be a cube sitting on a flat surface, we will approach the object with certain expectations, for example that if we turn our heads to one side we will see the side of the cube now out of view, that if we grab hold of it our grasp will be resisted, and so on. If the object turns out to be an image painted in such a way that it only appears as a cube from a certain angle, then when we discover this, by trying to pick it up, for example, the idea we are working with of the object will disintegrate. It is in this sense that Husserl at least took the objects of thought to have a special sort of existence as objects of thought (Føllesdal 1982, Mooney 2010).

Husserl (1900) proposed that we can study the nature of the constraints that the character of our mind places on the possible objects of thought through a method he calls ‘phenomenological reduction’, which involves uncovering the conditions of our awareness of objects through reflection on the nature of experience. The approach inherits a great deal from Kant’s transcendental idealism, since in both cases we are required to recognize that the nature of our minds may impose a very specific character on objects as we encounter them in experience – a character that we should not be tempted to assume is imposed on our experience by facts about the external world. The idea that the nature of our minds imposes constraints on the way we experience the world is in fact a claim that is increasingly widely accepted, and phenomenology has become an area of particular interest for the emerging field of cognitive science (see for example Varela, Thompson and Rosch 1991).

b. Thinking About Things that Do Not Exist

The second possible interpretation of Brentano’s claim – that intentional objects do not exist – is particularly problematic. Whether or not all objects of thought are non-existent, it certainly seems that many are, including those that are obviously fictitious (The Grinch, Sherlock Holmes) or likely non-existent even if many people believe in them (faeries, Hell). But deep puzzles arise when we consider what it means to say something about a non-existent object. Can we, for example, coherently state that Santa Claus has flying reindeer? If he does not exist, how can it be true that he has flying reindeer? Can we even coherently state that Santa Claus does not exist? If he does exist, our statement is false. But if he does not, then it seems that our claim is not about anything – and hence apparently meaningless. Another way of putting the puzzle involves definite descriptions. It seems reasonable to say the following:

(1)     The fairy king does not exist

But upon further consideration (1) is quite puzzling, because the appearance of the definite article ‘the’ in that statement seems to presuppose that there is such a thing as the fairy king to which we refer.

Russell proposed a famous solution to this puzzle. It involves first analyzing definite descriptions to show how we can use them to express claims about things that do not exist, and second showing that most terms we use to make negative existential claims are actually definite descriptions in disguise. The first move is accomplished by Russell’s analysis of the logical structure of definite descriptions. He takes definite descriptions to have the logical form ‘a unique thing that has the properties F and G’. So, the definite description ‘the fairy king’ in (1) on Russell’s reading is logically equivalent to the description ‘a unique thing that is both a king and a fairy’. Notably, this eliminates the term ‘the’ from the description, and with it the presupposition that there is a fairy king. And rather than being meaningless, the claim that such a thing does not exist is true, if no unique thing exists that is both a king and a fairy:

(2)     There is no unique thing that is a king and a fairy

And, of course, false if there is a unique thing that is a king and a fairy. The second step of Russell’s solution is to hold that most referring terms in ordinary language are actually disguised definite descriptions. The term ‘Santa Claus’ on this view is actually a sort of shorthand for a description, perhaps ‘the man with the flying reindeer’. And this description is in turn to be analyzed as Russell proposes, so that the claim ‘Santa Claus does not exist’ in fact amounts to the denial that a unique individual that has the properties of being a man and having flying reindeer exists. And that seems to be perfectly coherent.
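
In modern quantifier notation (a standard reconstruction, not Russell’s own 1905 symbolism), with ‘Kx’ for ‘x is a king’ and ‘Fx’ for ‘x is a fairy’, (1) becomes:

¬∃x(Kx ∧ Fx ∧ ∀y((Ky ∧ Fy) → y = x))

Read aloud: it is not the case that there is exactly one thing that is both a king and a fairy. No term purporting to name the fairy king remains in the analysis, so the claim can be true without presupposing any such thing.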

Are there any terms, in language or thought, on this account, that are not descriptions? Russell’s view is that the simplest terms in thought, out of which definite descriptions are composed, are not descriptions but singular terms, whose meaning is simply the object they refer to. These are demonstrative terms like ‘that’ and ‘this’, and our concepts of sensible properties like colors, sounds and smells. The meanings of these terms are fixed by what Russell called ‘acquaintance’ – they are conferred with meaning as a result of a direct interaction between the thinker and the thing referred to, for example when we point at a color and simply think to ourselves ‘that’. These terms are only meaningful if there are in fact objects in the world to which they refer. Notice that on this view the second interpretation of Brentano’s claim – that in general the objects of thought do not exist – becomes impossible to maintain. Since the descriptions that can pick out non-existent objects are composed of terms that are only meaningful if they refer to existing things, the objects of at least singular terms must exist for the view to make any sense.

c. Direct versus Indirect Intentionality

Even supposing that many objects of thought do exist, a further question arises as to whether the objects that we encounter in experience are products of our minds, or mind-independent objects. The view that the objects of experience are mind-dependent can be motivated by two complementary considerations. First, it seems reasonable to suppose that two different persons’ experiences in the same environment can be different. A color-blind person and a person with perfect color vision might have visually very different experiences in the same environment. Second, and conversely, it seems that one person’s experiences in two very different environments could be the same. When I look at an oasis in the desert, I have a visual experience that might seem to be identical to the experience I have when faced with a mirage, even though these two environments are very different.

These considerations have led many to argue that our experiences – even those of ordinary objects – are mediated by what have been called ‘sense-data’. According to the sense-data theorist, what we immediately experience are not mind-independent objects, but sense-data that are produced at least partly by our minds. This allows us to explain the two phenomena considered above. If what we encounter in experience are sense-data and not mind-independent objects, then two people could have very different experiences in the same mind-independent environment, and, correlatively, one person could have two indistinguishable experiences in two very different mind-independent environments. Note that these sense-data may correspond very closely to the way things stand in the mind-independent world around us, so the view need not imply that our interactions with the world should be dysfunctional.

This ‘indirect’ theory of perception, however, raises worries about our knowledge of the world. When we say of the ketchup before us that it is red, are we saying this about the ketchup, or about the sense-data that we experience as a result of looking at the ketchup? If we really only experience the sense-data, this would suggest that most of the beliefs we have about the world around us are false. We believe our intentional states are directed at mind-independent objects, but the indirect theory suggests that they are not. We believe we’ve seen red ketchup, but this theory suggests that in fact we’ve only seen sense-data of red ketchup. And if we only have experience of sense-data produced by our minds, this seems to imply that we have never really had any direct experience with the world. It suggests that we’ve never seen waterfalls, smelled flowers, or heard the voices of our friends, but have only experienced sense-data of these things.

An early reply to these concerns involves jettisoning the indirect theory of perception, and adopting the view that there are no sense-data or any other kind of representations mediating our experiences of the objects around us – a view sometimes called ‘naive realism’, and associated with Moore (1903). But on this approach, explanations of hallucinations, or of variations between different individuals’ experiences of the same objects, are strained. An interesting middle ground is known as ‘disjunctivism’ (Hinton 1967, Snowdon 1981, McDowell 1994, Martin 2002). The disjunctivist holds that the argument for the indirect theory of perception based on hallucinations is fallacious. Although the experiences of the oasis and the mirage might well be indistinguishable for the subject of the experience, this need not imply that the experiences are really the same. Rather, since one experience is the product of an encounter with an oasis, and the other is not, there is a difference between the experiences—it is just one that the subject is unable to identify. As a result, the disjunctivist holds that when we have veridical experiences, we have direct encounters with objects in the world, and when we have hallucinations, what we experience are sense-data produced by our mind. The disjunctivist view, then, at least allows us to see that we might not be forced into the indirect theory of perception by the existence of hallucinations.

3. Intentional States

So far we have looked at the question of what determines the object of any given intentional state, and at the question of the nature of the objects of intentional states. What we have not examined is whether there are broad conditions a state must meet to count as intentional in the first place. Are only rational creatures capable of intentional states? Are intentional states essentially conscious states? Can we provide an account of intentional states in natural terms?

a. Intentionality and Reason

The centrality of reason to the intentional is an important strand in Kant’s famous Critique of Pure Reason (1787), and has informed an influential line of thinking taken up in the work of Sellars (1956), Strawson (1959) and Brandom (1996). Kant argues that in the apprehension of any object, an individual must have a range of concepts at her disposal that she can use to rationally assess the nature of the object apprehended. In order to apprehend a material object, for example, a thinker must understand what causation is. If she does not understand what causation is, she will not understand that if the material object were to be pushed, it would move. Or if it were picked up and thrown against a wall, it would not go straight through the wall or disappear, but would be caused by the solidity of the wall to bounce backward.  Without having the capacity to understand any of these issues, Kant argued, it would not be true to say that an individual apprehends the material object.

The appeal to the necessity of reason for concept-possession often goes hand in hand with the claim that our intentional states are all interdependent. If I cannot have the concept material object without the concept cause, then the two concepts depend on one another—and this may be the case for all our concepts, leading to a view known as ‘concept holism’. This raises a puzzle, however, that many think undermines the view. The concern is that if our concepts are interdependent in this way, then if any of my concepts change, all the others change with them. If, for example, I can only grasp the concept horse if I have the concept animal, then if my animal concept changes in some way, my horse concept will change along with it. If we couple this with the observation that our beliefs about the world are almost constantly being updated as our day-to-day experience progresses, then the worry arises that we could literally never have the same thought twice. Any time my beliefs about the world change, they will change at least one of my concepts, and if all of my concepts are interdependent, then whenever any of my beliefs change, they will all change. As a result, although it might seem to me that I had thoughts about horses both yesterday and today, this would not be true, since the concept that occurred in my thoughts yesterday would not be the same concept as occurs in my thoughts today. Some who think this is an intolerable result adopt the view known as ‘concept atomism’, which holds that our concepts do not stand in essential relations to one another, but only to the external objects they refer to (Fodor and Lepore 1992). Atomism, however, seems to be committed to the claim that I could possess the concept horse without knowing what an animal is, and to the holist that seems as intolerable as concept holism seems to the atomist.

b. Intentionality and Intensionality

Another feature of intentional states that is sometimes thought to be essential is what is called ‘intensionality’ (with an ‘s’). This is the phenomenon whereby the objects of thought are presented to a thinker from a certain point of view—what Frege called a ‘mode of presentation’. We already encountered one of the puzzles that motivate this idea above, in the discussion of Frege’s puzzle: there, the proposed answer to the question of how two concepts can be co-referential without a thinker knowing it was that a thinker’s concepts pick out an object under a particular mode of presentation.

The potentially essential connection between intentionality and intensionality can be seen when we try to describe someone’s intentional states without bearing in mind their point of view. Recall the beliefs that Lois Lane has about Superman. Lois Lane believes she loves Superman, but does not believe she loves her colleague Clark Kent, not knowing that Superman is Clark Kent. (1) seems like a true description of Lois Lane’s belief about Superman:

(1)     Lois Lane believes that she loves Superman

If (1) is true, however, and Superman is Clark Kent, then we might expect that we would state exactly the same thing if we substitute the name ‘Clark Kent’ for the name ‘Superman’ in (1). That would give us (2):

(2)     Lois Lane believes that she loves Clark Kent

To many, however, it seems that there is something wrong with (2). If Superman walks into the room in his Clark Kent disguise, Lois will not light up as she does when he walks in without the disguise. If Lois is told that Clark Kent is in trouble, she will not infer that the man she loves is in trouble. A natural explanation for these facts is that the belief reported in (1) is not the same as the belief reported in (2). Since our reports about the beliefs of others may be false if we do not take into consideration the mode of presentation under which the objects of those beliefs are thought of by the holder of the belief, it seems that intensionality may be an essential feature of intentional states.

Another phenomenon that seems to tie intentionality to intensionality is shown in the fact that we cannot infer from the fact that someone has a belief about x, that x exists. This is unusual, since for most cases of predication (ascription of a property to an object), we can infer from the fact that we have ascribed a property to an object that the object exists. For example, if the claim that the sun is bright is true, it would seem to follow that there must be such a thing as the sun. That is, predication ordinarily permits existential generalization: if a property is truly predicated of an object, then some object with that property exists (Fa → ∃xFx). However, from the fact that I believe the sun is bright, it does not follow that there is such a thing as the sun. After all, I might just as easily believe, as Kant did, that phlogiston is the cause of combustion, but as we know, there is no such thing as phlogiston. If we combine these two claims we get a third claim: that neither the assertion nor the denial of a report of an intentional state entails that the proposition the intentional state is about is true or false. For example, we could truly assert that Kant believed that phlogiston causes combustion, but this does not entail that it is true that phlogiston causes combustion.

Chisholm (1956) thought that an intentional state is any state whose description has these three features: failure to preserve truth under the substitution of co-referring terms (such as ‘Superman’ for ‘Clark Kent’); failure to permit existential generalization (the ascription does not entail the existence of the intentional object); and failure to entail the truth or falsity of the embedded proposition (such as the proposition the thinker is said to believe).
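
Put schematically, writing ‘B(s, p)’ for ‘s believes that p’, the three marks come to the following (a rough regimentation; the notation is ours, not Chisholm’s):

(i) Substitution failure: from B(Lois, she loves Superman) and ‘Superman = Clark Kent’ it does not follow that B(Lois, she loves Clark Kent).
(ii) Failure of existential generalization: from B(s, Fa) it does not follow that ∃x(x = a).
(iii) Failure of truth entailment: B(s, p) does not entail p, and ¬B(s, p) does not entail ¬p.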

However, these criteria do not seem to hold up for all intentional states. While it does not follow from the fact that Kant believes phlogiston causes combustion that there is such a thing as phlogiston, or that it is true that phlogiston causes combustion, these things would seem to follow if we held that Kant knew that phlogiston causes combustion. That is, it does not seem possible to have knowledge of things that do not exist, or of propositions that are not true, so if someone knows Fa, then an object with the property F must exist, and if someone knows that p, then p must be true. Knowledge ascriptions therefore do not satisfy the second and third conditions proposed by Chisholm, and yet they are surely intentional states. Perceptual states, meanwhile, do not obviously satisfy any of the conditions. You cannot perceive something that does not exist, and you cannot perceive that p is the case if p is not the case; additionally, it is possible to intersubstitute co-referring terms in descriptions of perceptions. If it is true that Jimi Hendrix saw Bob Dylan at Woodstock, then it is true that Jimi Hendrix saw Robert Zimmerman at Woodstock, because Bob Dylan is Robert Zimmerman. Hendrix might not have believed that he saw Robert Zimmerman, or have known that he saw Robert Zimmerman, but nevertheless, if he saw Bob Dylan, he saw Robert Zimmerman. And yet perceptual states also seem quite clearly to be intentional states.

There is surely an important connection between intentionality and intensionality, then, but how it works in detail is clearly more complex than Chisholm thought.

c. Intentionality and Consciousness

A state of a creature is a conscious state if there is something it is like for the creature to be in that state. There is something it feels like for a person to have their hand pressed onto a hot grill, but there is not anything it feels like for a cheese sandwich to be pressed onto a hot grill. Do these conscious states have an essential connection to intentionality? Might intentionality depend on consciousness, or vice versa?

Some views take conscious states to be a kind of intentional state—thus holding that consciousness depends on intentionality. There are good prima facie grounds for holding this view. It is not obvious how I could be conscious of a horse being before me without my conscious state being directed at, or about, the horse. The idea that conscious states are a species of intentional state can be teased out in various ways. We might say that conscious content is simply intentional content that is available for rational evaluation, so that if I am conscious that it is raining, I have a mental state about the rain that I can reflect upon (Dennett 1991). Or we could say that conscious states always represent the world as being in such-and-such a way, so that if I am conscious that it is raining, I have a mental state that represents the world as being rainy right now (Tye 1995). Or, that conscious states are states that are naturally selected to indicate to a subject that her environment is in such and such a way, and again therefore intentional (Dretske 1995).

However, the view that there are ‘raw feels’ in our conscious experience that do not say anything at all about the world also has considerable pull. For example, you might think that when you are conscious of the warmth of the sun on your face, you can indeed reflect upon that feeling and judge that it is sunny where you are, but that the warm feeling itself does not tell you that it is sunny. On this view there are two things here: the warm feeling, and the subsequent judgment ‘it is sunny’, which, although formed on the basis of the feeling, is nevertheless distinct from it (Ryle 1949, Sellars 1956, Peacocke 1983). So understood, conscious states are not intentional in themselves, since they do not in themselves represent the world as being in any particular way, even if they can be used to make judgments about the world.

On the other hand, we might think the dependence runs the other way: that intentional states depend on consciousness. We might suppose that it is hard to make sense of the claim that we could have mental states about the world without the world feeling any way at all to us. Searle (1983), for example, thinks that our notion of the mind essentially involves the notion of consciousness, so he denies that there could be essentially unconscious mental states. To deal with the case of beliefs or desires that I am not currently consciously entertaining, he argues that these must at least have the potential to become conscious in order to be properly understood as mental states.

This dependence claim has its skeptics too, however. The position known as ‘epiphenomenalism’ holds that there is no essential role for consciousness to play in our lives: that consciousness is caused by, but itself plays no causal role in, other mental events. We may happen to have conscious experiences concurrent with some of the events in our lives (such as intentional events), and they may even stand in constant conjunction with those events, but this in itself is not evidence that a creature could not exist that carries out the same activities with no conscious experiences at all. A real-life example can get this intuition going. In a phenomenon sometimes called ‘blindsight’, subjects display an above-chance capacity to discriminate features of their environment while reporting that they have no corresponding conscious experience of those features. In one experiment, subjects are shown two drawings of a house, identical in every respect except that one house is represented as being on fire. When asked, the subjects insist that they can see no difference between the two houses (the fire appears in the region of the visual field in which the subjects have deficits). When pressed on which house they would prefer to live in, however, the subjects show an above-chance preference for the house that is not represented as being on fire. Since the subjects seem to have distinct attitudes to the two pictures, and hence distinct intentional states directed at each picture, and since there is no apparent variation in conscious experience, some take such cases to motivate the claim that it is possible to have intentional states without any conscious component.

d. Naturalizing Intentionality

Whatever the essence of intentionality might be, a further question that arises is whether we can ‘naturalize’ our account of it. That is to say, whether we can give an account of intentionality that can be exhaustively described in the terms in which the laws of nature are expressed. There is a long tradition of holding that the mind is outside of space and time – that it is an immaterial substance – and on that view, since intentional states are mental states, intentionality could not be naturalized. But particularly in the 20th century, there has been a push to reject the view that the mind is immaterial and to try to account for the mind in terms of natural processes, such as causal relations, natural selection, and any other process that can be explained in terms of the laws of the natural sciences.

The attempt faces various challenges. We have already looked at one, which is that if we take intentional states to depend on consciousness, and we hold that it is not possible to give a naturalized account of consciousness, then it follows that we cannot naturalize intentionality. But there is another particularly tricky puzzle facing the naturalization of intentionality in terms of causal relations. As we saw above (3b), at least some intentional states have the property of intensionality: it does not follow from the fact that I believe p that p is the case, and it does not follow from the fact that I do not believe p that p is not the case. Another way to put this is that our concepts do not always co-vary with the objects they represent. On the one hand, we can encounter the objects our concepts refer to without our concepts triggering, for example when Lois Lane meets Clark Kent and the thought ‘that’s Superman’ fails to occur to her. And conversely, our concepts can be triggered when the object they refer to is not present, such as when I see a cow in the night and mistakenly think ‘there’s a horse’. Our concepts, in other words, can trigger when they should not, and can fail to trigger when they should. This is a problem for naturalizing intentionality, because the causal theory of intentionality (1b) is at the heart of attempts to naturalize intentionality, and the causal theory has trouble explaining intensionality. For example, the causal theory holds that a concept refers to whatever causes it to trigger. But if Lois Lane bumps into Clark Kent and her superman concept fails to trigger, this would suggest that Lois Lane’s superman concept does not refer to Clark Kent. And that’s not a good outcome, since Superman is Clark Kent. Similarly, if I see a cow in the night and my horse concept goes off, the causal account implies that my horse concept refers to cows in the night. And that’s no good either.

Dretske (1981) argues that causal relations can in fact exhibit intensionality, so that we can naturalize intentionality. A compass, he argues, indicates the location of the North Pole because the North Pole causes the compass needle to point at it. He takes a compass to be a ‘natural indicator’ of the North Pole, and so to exhibit natural intentionality. But he thinks the compass also exhibits intensionality. In addition to indicating the North Pole, the compass also indicates the location of polar bears, because there are polar bears at the North Pole. However, if the polar bears move south, the compass will not continue to indicate their location. As a result, suggests Dretske, the compass exhibits intensionality: the compass can fail to indicate the location of polar bears, even though the location of polar bears is the North Pole, just as Lois Lane’s superman concept can fail to indicate Clark Kent, even though Clark Kent is Superman. There is a problem with this account, however, because the relationship between the location of polar bears and the North Pole is very different from the relationship between Superman and Clark Kent. The location of the polar bears can fail to be where the North Pole is, but Clark Kent cannot fail to be where Superman is. That is, the kind of failure to trigger that we are concerned to explain is one where a concept fails to trigger in response to what is necessarily identical to its reference – not in response to something that merely happens to be co-instantiated with its reference on some occasions.

Another attempt to allow for these cases within a causal theory appeals to the notion of a natural function or telos (Matthen and Levy 1984, Millikan 1984, Dretske 1995, Papineau 1993). If the heart has been selected by evolution to pump blood, then we can say that the natural function of the heart is to pump blood. But whatever has a function can malfunction, as we see when the heart stops, thus failing to continue to pump blood. What distinguishes the correct from the incorrect activities of the heart is whether the heart is doing what it was selected for by evolution. The teleological theory of intentionality proposes that this same mechanism distinguishes the correct and incorrect triggers of a concept. When my horse concept tokens in response to my encounter with a cow in the night, it is malfunctioning, because it was selected to alert me to the presence of horses. This account faces several objections, but the clearest is that it rules out the possibility of thought in a creature whose mental states did not come into being through natural selection. Although highly unlikely, it does not seem impossible that a being physically identical to a thinking person could come into existence by chance, through the right freak coincidence of physical events (in one story it involves lightning hitting a swamp and the right chemicals instantaneously bonding to form a molecule-for-molecule match of an adult human (Davidson 1987)). If the teleological theory of intentionality were right, such a being would have no intentional states, since its brain states would have no natural history, even though it would be physically and behaviorally indistinguishable from a thinking person. Many see this as a reductio ad absurdum of the teleological account, since it seems that by hypothesis such a being would be able to perceive, form desires and beliefs about its environment, and so forth.

Still another proposal is that we can distinguish correct from incorrect triggers of a concept in terms of the relationship they stand in to one another: the incorrect triggers of a concept only cause the concept to trigger because the correct triggers do, but the correct triggers don’t trigger the concept because the incorrect ones do (Fodor 1987). To return to the cow in the night example, the proposal is that if horses didn’t cause my horse concept to trigger, cows in the night wouldn’t either: the reason cows in the night cause it to trigger is that horses cause it to trigger, and cows in the night look like horses. But the reverse is not the case: if cows in the night didn’t cause my horse concept to trigger, this needn’t mean that horses wouldn’t. Correct and incorrect triggers can therefore be identified by this ‘asymmetric dependence’ relation they have to one another. When we try to explain why the correct triggers would continue to cause a concept to token even if the incorrect triggers didn’t, however, the proposal becomes less convincing. Returning to the Twin-Earth example, if we travel to Twin-Earth our water concept will be triggered by the watery-looking stuff there, presumably falsely. But since Twin-Earth water is by hypothesis ordinarily indistinguishable from Earth water, it seems wrong to say that if Twin-Earth water did not cause our water concept to trigger, Earth water still would. The reason Earth water causes our water concept to trigger, after all, is presumably that it looks, tastes and smells a certain way. But Twin-Earth water looks, tastes and smells exactly the same way, so it is far from clear why we should expect that if Twin-Earth water did not trigger our water concept, Earth water still would. Fodor (1998) replies that we should discount Twin-Earth worries because Twin-Earth does not exist. But it is not clear that this helps, since we could surely discover a substance on Earth that we might not be able to distinguish from water, in which case the same worry can be raised without discussing Twin-Earth.
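
Fodor’s asymmetric dependence can be put in counterfactual notation (a sketch in a regimentation of ours, not Fodor’s own formulation, with ‘□→’ read as ‘if it were the case that ..., it would be the case that ...’). Let H be ‘horses cause my horse concept to token’ and C be ‘cows in the night cause my horse concept to token’. The claim that the cow-triggers depend asymmetrically on the horse-triggers is then:

(¬H □→ ¬C) ∧ ¬(¬C □→ ¬H)

That is, the cow-to-concept link would fail if the horse-to-concept link failed, but not conversely, which is why the concept is said to refer to horses and the cow-caused tokenings count as errors.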

Needless to say, there are further arguments made on behalf of these proposals, but as things stand there is no widely accepted solution to the problem that intensionality presents for naturalizing intentionality.

4. References and Further Reading

  • Brandom, R. (1996). Making it Explicit. Harvard University Press.
  • Brentano, F. (1874/1911/1973). Psychology from an Empirical Standpoint, London: Routledge and Kegan Paul.
  • Chalmers, D. (1996). The Conscious Mind, Oxford: Oxford University Press.
  • Chalmers, D. (2006). “Foundations of Two-Dimensional Semantics.” In M. Garcia-Carpintero and J. Macia (eds). Two-Dimensional Semantics: Foundations and Applications. Oxford: Oxford University Press.
  • Chisholm, R. M. (1956). “Perceiving: a Philosophical Study,” chapter 11, selection in D. Rosenthal (ed.), The Nature of Mind, Oxford: Oxford University Press, 1990.
  • Davidson, D. (1980). Essays on Actions and Events, Oxford: Clarendon Press.
  • Davidson, D. (1987). “Knowing One’s Own Mind.” In Proceedings and Addresses of the American Philosophical Association, 60: 441–58.
  • Davies, M. and Humberstone, L. (1980). “Two Notions of Necessity.” Philosophical Studies, 38: 1–30.
  • Dennett, D.C (1991). Consciousness Explained. Boston: Little Brown.
  • Dretske, F. (1981). Knowledge and the Flow of Information, Cambridge, Mass.: MIT Press.
  • Dretske, F. (1995). Naturalizing the Mind. Cambridge, Mass.: MIT Press.
  • Dreyfus, H.L. (ed.) (1982). Husserl, Intentionality and Cognitive Science, Cambridge, Mass.: MIT Press.
  • Evans, G. (1979). “Reference and Contingency.” The Monist, 62, 2 (April, 1979), 161-189.
  • Fodor, J.A. (1975). The Language of Thought, New York: Crowell.
  • Fodor, J.A. (1987). Psychosemantics, Cambridge, Mass.: MIT Press.
  • Fodor, J.A. (1998). Concepts: Where Cognitive Science Went Wrong, New York: Oxford University Press.
  • Fodor, J. A. and Lepore, E. (1992). Holism: A Shopper’s Guide. Oxford: Blackwell.
  • Føllesdal, D. (1982). “Husserl’s Notion of Noema,” in H.L. Dreyfus (ed.), Husserl, Intentionality and Cognitive Science, Cambridge, Mass.: MIT Press.
  • Frege, G. (1892/1952). “On Sense and Reference.” In P. Geach and M. Black (eds.), Philosophical Writings of Gottlob Frege, Oxford: Blackwell, 1952.
  • Goodman, N. (1968). Languages of Art: An Approach to a Theory of Symbols. Indianapolis: The Bobbs-Merrill Company.
  • Haugeland, J. (1981). “Semantic Engines: an Introduction to Mind Design.” In J. Haugeland (ed.), Mind Design, Philosophy, Psychology, Artificial Intelligence, Cambridge, Mass.: MIT Press, 1981.
  • Hinton, J.M., (1967). “Visual Experiences.” Mind, 76: 217–227.
  • Husserl, E. (1900/1970). Logical Investigations, (Engl. Transl. by Findlay, J.N.), London: Routledge and Kegan Paul.
  • Jackson, F. (1998). From Metaphysics to Ethics. Oxford: Oxford University Press.
  • Kaplan, D. (1979). “Dthat.” In P. French, T. Uehling, and H. Wettstein (eds.), Contemporary Perspectives in the Philosophy of Language, Minneapolis: University of Minnesota Press.
  • King, P. (2007). “Rethinking Representation in the Middle Ages.” In Representation and Objects of Thought in Medieval Philosophy, edited by Henrik Lagerlund, Ashgate Press: 81-100.
  • Kim, J. (1993). Supervenience and Mind: Selected Philosophical Essays, Cambridge: Cambridge University Press.
  • Kripke, S. (1972/1980). Naming and Necessity, Oxford: Blackwell.
  • Martin, M.G.F. (2002). “The Transparency of Experience.” Mind and Language, 17: 376–425.
  • Matthen, M. and Levy, E. (1984). “Teleology, Error, and the Human Immune System.” Journal of Philosophy, 81 (7): 351-372.
  • McDowell, J. (1994). Mind and World. Oxford: Oxford University Press.
  • McGinn, C. (1989). Mental Content, Oxford: Oxford University Press.
  • McGinn, C. (1990). Problems of Consciousness, Oxford: Blackwell.
  • Mill, J.S. (1884). A System of Logic, Ratiocinative and Inductive: Being a Connected View of the Principles of Evidence and the Methods of Scientific Investigation, New York: Harper.
  • Millikan, R.G. (1984). Language, Thought, and Other Biological Categories, Cambridge, Mass.: MIT Press.
  • Mooney, T. (2010). “Understanding and Simple Seeing in Husserl.” Husserl Studies, 26: 19-48.
  • Moore, G.E. (1903). “The Refutation of Idealism.” Mind, 12: 433-453.
  • Papineau, D. (1993). Philosophical Naturalism. Oxford: Blackwell.
  • Peacocke, C. (1983). Sense and Content: Experience, Thought and their Relations, Oxford: Oxford University Press.
  • Putnam, H. (1975). “The Meaning of ‘Meaning’,” in H. Putnam, Mind, Language and Reality: Philosophical Papers, vol. II, Cambridge: Cambridge University Press.
  • Recanati, F. (2013). Mental Files. Oxford University Press.
  • Russell, B. (1905/1956). “On Denoting,” in R. Marsh (ed.), Bertrand Russell, Logic and Knowledge, Essays 1901-1950, New York: Capricorn Books, 1956.
  • Russell, B. (1912). The Problems of Philosophy, New York: Holt.
  • Ryle, G. (1949). The Concept of Mind. Oxford University Press.
  • Searle, J. (1958). “Do Proper Names have Sense?” Mind 67: 166-173.
  • Searle, J. (1983). Intentionality, Cambridge: Cambridge University Press.
  • Searle, J, (1994). “Intentionality (1),” in Guttenplan, S. (ed.) (1994) A Companion Volume to the Philosophy of Mind, Oxford: Blackwell.
  • Sellars, W. (1956/1997). “Empiricism and the Philosophy of Mind.” In Empiricism and the Philosophy of Mind: with an Introduction by Richard Rorty and a Study Guide by Robert Brandom, R. Brandom (ed.), Cambridge, MA: Harvard University Press.
  • Snowdon, P.F., (1981). “Perception, Vision and Causation.” Proceedings of the Aristotelian Society, New Series, 81: 175–92.
  • Soames, S. (2005). Reference and Description: The Case against Two-Dimensionalism. Princeton: Princeton University Press.
  • Strawson, P. (1959). Individuals: An Essay in Descriptive Metaphysics. London: Methuen.
  • Tye, M. (1995). Ten Problems of Consciousness, Cambridge, Mass.: MIT Press.
  • Varela, F., Thompson, E., and Rosch E., (1991). The Embodied Mind: Cognitive Science and Human Experience, Cambridge, Mass.: MIT Press.
  • Wittgenstein, L. (1953). Philosophical Investigations. Oxford: Blackwell.

 

Author Information

Cathal O’Madagain
Email: cathalcom@gmail.com
École Normale Supérieure, Paris
France

Gottfried Leibniz: Philosophy of Mind

Gottfried Wilhelm Leibniz (1646-1716) was a true polymath: he made substantial contributions to a host of different fields such as mathematics, law, physics, theology, and most subfields of philosophy.  Within the philosophy of mind, his chief innovations include his rejection of the Cartesian doctrines that all mental states are conscious and that non-human animals lack souls as well as sensation.  Leibniz’s belief that non-rational animals have souls and feelings prompted him to reflect much more thoroughly than many of his predecessors on the mental capacities that distinguish human beings from lower animals.  Relatedly, the acknowledgment of unconscious mental representations and motivations enabled Leibniz to provide a far more sophisticated account of human psychology.  It also led Leibniz to hold that perception—rather than consciousness, as Cartesians assume—is the distinguishing mark of mentality.

The capacities that make human minds superior to animal souls, according to Leibniz, include not only their capacity for more elevated types of perceptions or mental representations, but also their capacity for more elevated types of appetitions or mental tendencies.  Self-consciousness and abstract thought are examples of perceptions that are exclusive to rational souls, while reasoning and the tendency to do what one judges to be best overall are examples of appetitions of which only rational souls are capable.  The mental capacity for acting freely is another feature that sets human beings apart from animals and it in fact presupposes the capacity for elevated kinds of perceptions as well as appetitions.

Another crucial contribution to the philosophy of mind is Leibniz’s frequently cited mill argument.  This argument is supposed to show, through a thought experiment that involves walking into a mill, that material things such as machines or brains cannot possibly have mental states.  Only immaterial things, that is, soul-like entities, are able to think or perceive.  If this argument succeeds, it shows not only that our minds must be immaterial or that we must have souls, but also that we will never be able to construct a computer that can truly think or perceive.

Finally, Leibniz’s doctrine of pre-established harmony also marks an important innovation in the history of the philosophy of mind.  Like occasionalists, Leibniz denies any genuine interaction between body and soul.  He agrees with them that the fact that my foot moves when I decide to move it, as well as the fact that I feel pain when my body gets injured, cannot be explained by a genuine causal influence of my soul on my body, or of my body on my soul.  Yet, unlike occasionalists, Leibniz also rejects the idea that God continually intervenes in order to produce the correspondence between my soul and my body.  That, Leibniz thinks, would be unworthy of God.  Instead, God has created my soul and my body in such a way that they naturally correspond to each other, without any interaction or divine intervention.  My foot moves when I decide to move it because this motion has been programmed into it from the very beginning.  Likewise, I feel pain when my body is injured because this pain was programmed into my soul.  The harmony or correspondence between mental states and states of the body is therefore pre-established.

Table of Contents

  1. Leibnizian Minds and Mental States
    1. Perceptions
      1. Consciousness, Apperception, and Reflection
      2. Abstract Thought, Concepts, and Universal Truths
    2. Appetitions
  2. Freedom
  3. The Mill Argument
  4. The Relation between Mind and Body
  5. References and Further Reading
    1. Primary Sources in English Translation
    2. Secondary Sources

1. Leibnizian Minds and Mental States

Leibniz is a panpsychist: he believes that everything, including plants and inanimate objects, has a mind or something analogous to a mind.  More specifically, he holds that in all things there are simple, immaterial, mind-like substances that perceive the world around them.  Leibniz calls these mind-like substances ‘monads.’  While all monads have perceptions, however, only some of them are aware of what they perceive, that is, only some of them possess sensation or consciousness.  Even fewer monads are capable of self-consciousness and rational perceptions.  Leibniz typically refers to monads that are capable of sensation or consciousness as ‘souls,’ and to those that are also capable of self-consciousness and rational perceptions as ‘minds.’  The monads in plants, for instance, lack all sensation and consciousness and are hence neither souls nor minds; Leibniz sometimes calls this least perfect type of monad a ‘bare monad’ and compares the mental states of such monads to our states when we are in a stupor or a dreamless sleep.  Animals, on the other hand, can sense and be conscious, and thus possess souls (see Animal Minds).  God and the souls of human beings and angels, finally, are examples of minds because they are self-conscious and rational.  As a result, even though there are mind-like things everywhere for Leibniz, minds in the stricter sense are not ubiquitous.

All monads, even those that lack consciousness altogether, have two basic types of mental states: perceptions, that is, representations of the world around them, and appetitions, or tendencies to transition from one representation to another.  Hence, even though monads are similar to the minds or souls described by Descartes in some ways—after all, they are immaterial substances—consciousness is not an essential property of monads, while it is an essential property of Cartesian souls.  For Leibniz, then, the distinguishing mark of mentality is perception, rather than consciousness (see Simmons 2001).  In fact, even Leibnizian minds in the stricter sense, that is, monads capable of self-consciousness and reasoning, are quite different from the minds in Descartes’s system.  While Cartesian minds are conscious of all their mental states, Leibnizian minds are conscious only of a small portion of their states.  To us it may seem obvious that there is a host of unconscious states in our minds, but in the seventeenth century this was a radical and novel notion.  This profound departure from Cartesian psychology allows Leibniz to paint a much more nuanced picture of the human mind.

One crucial aspect of Leibniz’s panpsychism is that in addition to the rational monad that is the soul of a human being, there are non-rational, bare monads everywhere in the human being’s body.  Leibniz sometimes refers to the soul of a human being or animal as the central or dominant monad of the organism.  The bare monads that are in an animal’s body, accordingly, are subordinate to its dominant monad or soul.  Even plants, for Leibniz, have central or dominant monads, but because they lack sensation, these dominant monads cannot strictly speaking be called souls.  They are merely bare monads, like the monads that are subordinate to them.

The claim that there are mind-like things everywhere in nature—in our bodies, in plants, and even in inanimate objects—strikes many readers of Leibniz as ludicrous.  Yet, Leibniz thinks he has conclusive metaphysical arguments for this claim.  Very roughly, he holds that a complex, divisible thing such as a body can only be real if it is made up of parts that are real.  If the parts in turn have parts, those have to be real as well.  The problem is, Leibniz claims, that matter is infinitely divisible: we can never reach parts that do not themselves have parts.  Even if there were material atoms that we cannot actually divide, they must still be spatially extended, like all matter, and therefore have spatial parts.  If something is spatially extended, after all, we can at least in thought distinguish its left half from its right half, no matter how small it is.  As a result, Leibniz thinks, purely material things are not real.  The reality of complex wholes depends on the reality of their parts, but with purely material things, we never get to parts that are real since we never reach an end in this quest for reality.  Leibniz concludes that there must be something in nature that is not material and not divisible, and from which all things derive their reality.  These immaterial, indivisible things just are monads.  Because of the role they play, Leibniz sometimes describes them as “atoms of substance, that is, real unities absolutely destitute of parts, […] the first absolute principles of the composition of things, and, as it were, the final elements in the analysis of substantial things”  (p. 142.  For a more thorough description of monads, see Leibniz: Metaphysics, as well as the Monadology and the New System of Nature, both included in Ariew and Garber.)

a. Perceptions

As already seen, all monads have perceptions, that is, they represent the world around them.  Yet, not all perceptions—not even all the perceptions of minds—are conscious.  In fact, Leibniz holds that at any given time a mind has infinitely many perceptions, but is conscious only of a very small number of them.  Even souls and bare monads have an infinity of perceptions.  This is because Leibniz believes, for reasons that need not concern us here (but see Leibniz: Metaphysics), that each monad constantly perceives the entire universe.  For instance, even though I am not aware of it at all, my mind is currently representing every single grain of sand on Mars.  Even the monads in my little toe, as well as the monads in the apple I am about to eat, represent those grains of sand.

Leibniz often describes perceptions of things of which the subject is unaware and which are far removed from the subject’s body as ‘confused.’  He is fond of using the sound of the ocean as a metaphor for this kind of confusion: when I go to the beach, I do not hear the sound of each individual wave distinctly; instead, I hear a roaring sound from which I am unable to discern the sounds of the individual waves (see Principles of Nature and Grace, section 13, in Ariew and Garber, 1989).  None of these individual sounds stands out.  Leibniz claims that confused perceptions in monads are analogous to this confusion of sounds, except of course for the fact that monads do not have to be aware even of the confused whole.  To the extent that a perception does stand out from the rest, however, Leibniz calls it ‘distinct.’  This distinctness comes in degrees, and Leibniz claims that the central monads of organisms always perceive their own bodies more distinctly than they perceive other bodies.

Bare monads are not capable of very distinct perceptions; their perceptual states are always muddled and confused to a high degree.  Animal souls, on the other hand, can have much more distinct perceptions than bare monads.  This is in part because they possess sense organs, such as eyes, which allow them to bundle and condense information about their surroundings (see Principles of Nature and Grace, section 4).  The resulting perceptions are so distinct that the animals can remember them later, and Leibniz calls this kind of perception ‘sensation.’  The ability to remember prior perceptions is extremely useful, as a matter of fact, because it enables animals to learn from experience.  For instance, a dog that remembers being beaten with a stick can learn to avoid sticks in the future (see Principles of Nature and Grace, section 5, in Ariew and Garber, 1989).  Sensations are also tied to pleasure and pain: when an animal distinctly perceives some imperfection in its body, such as a bruise, this perception just is a feeling of pain.  Similarly, when an animal perceives some perfection of its body, such as nourishment, this perception is pleasure.  Unlike Descartes, then, Leibniz believed that animals are capable of feeling pleasure and pain.

Consequently, souls differ from bare monads in part through the distinctness of their perceptions: unlike bare monads, souls can have perceptions that are distinct enough to give rise to memory and sensation, and they can feel pleasure and pain.  Rational souls, or minds, share these capacities.  Yet they are additionally capable of perceptions of an even higher level.  Unlike the souls of lower animals, they can reflect on their own mental states, think abstractly, and acquire knowledge of necessary truths.  For instance, they are capable of understanding mathematical concepts and proofs.  Moreover, they can think of themselves as substances and subjects: they have the ability to use and understand the word ‘I’ (see Monadology, section 30).  These kinds of perceptions, for Leibniz, are distinctively rational perceptions, and they are exclusive to minds or rational souls.

It is clear, then, that there are different types of perceptions: some are unconscious, some are conscious, and some constitute reflection or abstract thought.  What exactly distinguishes these types of perceptions, however, is a complicated question that warrants a more detailed investigation.

i. Consciousness, Apperception, and Reflection

Why are some perceptions conscious, while others are not?  In one text, Leibniz explains the difference as follows: “it is good to distinguish between perception, which is the internal state of the monad representing external things, and apperception, which is consciousness, or the reflective knowledge of this internal state, something not given to all souls, nor at all times to a given soul” (Principles of Nature and Grace, section 4).  This passage is interesting for several reasons: Leibniz not only equates consciousness with what he calls ‘apperception’ and states that only some monads possess it; he also seems to claim that conscious perceptions differ from other perceptions in virtue of having different types of things as their objects: while unconscious perceptions represent external things, apperception or consciousness has perceptions, that is, internal things, as its object.  Consciousness is therefore closely connected to reflection, as the term ‘reflective knowledge’ also makes clear.

The passage furthermore suggests that Leibniz understands consciousness in terms of higher-order mental states because it says that in order to be conscious of a perception, I must possess “reflective knowledge” of that perception.  One way of interpreting this statement is to understand these higher-order mental states as higher-order perceptions: in order to be conscious of a first-order perception, I must additionally possess a second-order perception of that first-order perception.  For example, in order to be conscious of the glass of water in front of me, I must not only perceive the glass of water, but I must also perceive my perception of the glass of water.  After all, in the passage under discussion, Leibniz defines ‘consciousness’ or ‘apperception’ as the reflective knowledge of a perception.  Such higher-order theories of consciousness are still endorsed by some philosophers of mind today (see Consciousness).  For an alternative interpretation of Leibniz’s theory of consciousness, however, see Jorgensen 2009, 2011a, and 2011b.

There is excellent textual evidence that according to Leibniz, consciousness or apperception is not limited to minds, but is instead shared by animal souls.  One passage in which Leibniz explicitly ascribes apperception to animals is from the New Essays: “beasts have no understanding … although they have the faculty for apperceiving the more conspicuous and outstanding impressions—as when a wild boar apperceives someone who is shouting at it” (p. 173).  Moreover, Leibniz sometimes claims that sensation involves apperception (e.g. New Essays, p. 161; p. 188), and since animals are clearly capable of sensation, they must thus possess some form of apperception.  Hence, it seems that Leibniz ascribes to animals apperception, which he elsewhere identifies with consciousness.

Yet, the textual evidence for animal consciousness is unfortunately anything but neat because in the New Essays—that is, in the very same text—Leibniz also suggests that there is an important difference between animals and human beings somewhere in this neighborhood.  In several passages, he says that any creature with consciousness has a moral or personal identity, which in turn is something he grants only to minds.  He states, for instance, that “consciousness or the sense of I proves moral or personal identity” (New Essays, p. 236).  Hence, it seems clear that for Leibniz there is something in the vicinity of consciousness that animals lack and that minds possess, and which is crucial for morality.

A promising solution to this interpretive puzzle is the following: what animals lack is not consciousness generally, but only a particular type of consciousness.  More specifically, while they are capable of consciously perceiving external things, they lack awareness, or at least a particular type of awareness, of the self.  In the Monadology, for instance, Leibniz argues that knowledge of necessary truths distinguishes us from animals and that through this knowledge “we rise to reflexive acts, which enable us to think of that which is called ‘I’ and enable us to consider that this or that is in us” (sections 29-30).  Similarly, he writes in the Principles of Nature and Grace that “minds … are capable of performing reflective acts, and capable of considering what is called ‘I’, substance, soul, mind—in brief, immaterial things and immaterial truths” (section 5).  Self-knowledge, or self-consciousness, then, appears to be exclusive to rational souls.  Leibniz moreover connects this consciousness of the self to personhood and moral responsibility in several texts, such as for instance in the Theodicy: “In saying that the soul of man is immortal one implies the subsistence of what makes the identity of the person, something which retains its moral qualities, conserving the consciousness, or the reflective inward feeling, of what it is: thus it is rendered susceptible to chastisement or reward” (section 89).

Based on these passages, it seems that one crucial cognitive difference between human beings and animals is that even though animals possess the kind of apperception that is involved in sensation and in an acute awareness of external objects, they lack a certain type of apperception or consciousness, namely reflective self-knowledge or self-consciousness.  This difference is clearly an extremely important one, especially because of the moral implications Leibniz attaches to this kind of consciousness.  According to these texts, then, it is not consciousness or apperception tout court that distinguishes minds from animal souls, but rather a particular kind of apperception.  What animals are incapable of, according to Leibniz, is self-knowledge or self-awareness, that is, an awareness not only of their perceptions, but also of the self that is having those perceptions.

Because Leibniz associates consciousness so closely with reflection, one might wonder whether the fact that animals are capable of conscious perceptions implies that they are also capable of reflection.  This is another difficult interpretive question because there appears to be evidence both for a positive and for a negative answer.  Reflection, according to Leibniz, is “nothing but attention to what is within us” (New Essays, p. 51).  Moreover, as already seen, he argues that reflective acts enable us “to think of that which is called ‘I’ and … to consider that this or that is in us” (Monadology, section 30).  Leibniz does not appear to ascribe reflection to animals explicitly, and in fact, there are several texts in which he says in no uncertain terms that they lack reflection altogether.  He states for instance that “the soul of a beast has no more reflection than an atom” (Loemker, p. 588).  Likewise, he defines ‘intellection’ as “a distinct perception combined with a faculty of reflection, which the beasts do not have” (New Essays, p. 173) and explains that “just as there are two sorts of perception, one simple, the other accompanied by reflections that give rise to knowledge and reasoning, so there are two kinds of souls, namely ordinary souls, whose perception is without reflection, and rational souls, which think about what they do” (Strickland, p. 84).

On the other hand, as seen, Leibniz does ascribe apperception or consciousness to animals, and consciousness in turn appears to involve higher-order mental states.  This suggests that Leibnizian animals must perceive or know their own perceptions when they are conscious of something, and that in turn seems to imply that they can reflect after all.  A closely related reason for ascribing reflection to animals is that Leibniz sometimes explicitly associates reflection with apperception or consciousness.  In a passage already quoted above, for instance, Leibniz defines ‘consciousness’ as the reflective knowledge of a first-order perception.  Hence, if animals possess consciousness it seems that they must also have some type of reflection.

We are consequently faced with an interpretive puzzle: even though there is strong indirect evidence that Leibniz attributes reflection to animals, there is also direct evidence against it.  There are at least two ways of solving this puzzle.  In order to make sense of passages in which Leibniz restricts reflection to rational souls, one can either deny that perceiving one’s internal states is sufficient for reflection, or one can distinguish between different types of reflection, in such a way that the most demanding type of reflection is limited to minds.  One good way to deny that perception of one’s internal states is sufficient for reflection is to point out that Leibniz defines reflection as “attention to what is within us” (New Essays, p. 51), rather than as ‘perception of what is within us.’  Attention to internal states, arguably, is more demanding than mere perception of these states, and animals may well be incapable of the former.  Attention might be a particularly distinct perception, for instance.  Alternatively, one can argue that reflection requires a self-concept, or self-knowledge, which also goes beyond the mere perception of internal states and may be inaccessible to animals.  Perceiving my internal states, on that interpretation, amounts to reflection only if I also possess knowledge of the self that is having those states.  Instead of denying that perceiving one’s own states is sufficient for reflection, one can also distinguish different types of reflection and claim that while the mere perception of one’s internal states is a type of reflection, there is a more demanding type of reflection that requires attention, a self-concept, or something similar.  Yet, the difference between those two responses appears to be merely terminological.  Based on the textual evidence discussed above, it is clear that either reflection generally, or at least a particular type of reflection, must be exclusive to minds.

ii. Abstract Thought, Concepts, and Universal Truths

So far, we have seen that one cognitive capacity that elevates minds above animal souls is self-consciousness, which is a particular type of reflection.  Before turning to appetitions, we should briefly investigate three additional, mutually related, cognitive abilities that only minds possess, namely the abilities to abstract, to form or possess concepts, and to know general truths.  In what may well be Leibniz’s most intriguing discussion of abstraction, he says that some non-human animals “apparently recognize whiteness, and observe it in chalk as in snow; but it does not amount to abstraction, which requires attention to the general apart from the particular, and consequently involves knowledge of universal truths which beasts do not possess” (New Essays, p. 142).  In this passage, we learn not only that beasts are incapable of abstraction, but also that abstraction involves “attention to the general apart from the particular” as well as “knowledge of universal truths.”  Hence, abstraction for Leibniz seems to consist in separating out one part of a complex idea and focusing on it exclusively.  Instead of thinking of different white things, one must think of whiteness in general, abstracting away from the particular instances of whiteness.  In order to think about whiteness in the abstract, then, it is not enough to perceive different white things as similar to one another.

Yet, it might still seem mysterious how precisely animals should be able to observe whiteness in different objects if they are unable to abstract.  One fact that makes this less mysterious, however, is that, on Leibniz’s view, while animals are unable to pay attention to whiteness in general, the idea of whiteness may nevertheless play a role in their recognition of whiteness.  As Leibniz explains in the New Essays, even though human minds become aware of complex ideas and particular truths first, and rather easily, and have to expend a lot of effort to subsequently achieve awareness of simple ideas and general principles, the order of nature is the other way around:

The truths that we start by being aware of are indeed particular ones, just as we start with the coarsest and most composite ideas.  But that doesn’t alter the fact that in the order of nature the simplest comes first, and that the reasons for particular truths rest wholly on the more general ones of which they are mere instances. … The mind relies on these principles constantly; but it does not find it so easy to sort them out and to command a distinct view of each of them separately, for that requires great attention to what it is doing. (p. 83f.)

Here, Leibniz says that minds can rely on general principles, or abstract ideas, without being aware of them, and without having distinct perceptions of them separately.  This might help us to explain how animals can observe whiteness in different white objects without being able to abstract: the simple idea of whiteness might play a role in their cognition, even though they are not aware of it, and are unable to pay attention to this idea.

The passage just quoted is interesting for another reason: It shows that abstracting and achieving knowledge of general truths have a lot in common and presuppose the capacity to reflect.  It takes a special effort of mind to become aware of abstract ideas and general truths, that is, to separate these out from complex ideas and particular truths.  It is this special effort, it seems, of which animals are incapable; while they can at times achieve relatively distinct perceptions of complex or particular things, they lack the ability to pay attention, or at least sufficient attention, to their internal states.  At least part of the reason for their inability to abstract and to know general truths, then, appears to be their inability, or at least very limited ability, to reflect.

Abstraction also seems closely related to the possession or formation of concepts: arguably, what a mind gains when abstracting the idea of whiteness from the complex ideas of particular white things is what we would call a concept of whiteness.  Hence, since animals cannot abstract, they do not possess such concepts.  They may nevertheless, as suggested above, have confused ideas such as a confused idea of whiteness that allows them to recognize whiteness in different white things, without enabling them to pay attention to whiteness in the abstract.

An interesting question that arises in this context is whether having an idea of the future, or thinking about a future state, requires abstraction.  One reason to think so is that, plausibly, in order to think about the future, for instance about future pleasures or pains, one needs to abstract from the present pleasures or pains that one can directly experience, or from past pleasures and pains that one remembers.  After all, just as one can only attain the concept of whiteness by abstracting from other properties of the particular white things one has experienced, so, arguably, one can only acquire the idea of future pleasures through abstraction from particular present pleasures.  It may be for this reason that Leibniz sometimes notes that animals have “neither foresight nor anxiety for the future” (Huggard, p. 414).  Apparently, he does not consider animals capable of having an idea of the future or of future states.

Leibniz thinks that in addition to sensible concepts such as whiteness, we also have concepts that are not derived from the senses, that is, we possess intellectual concepts.  The latter, it seems, set us apart even farther from animals because we attain them through reflective self-awareness, of which animals, as seen above, are not capable.  Leibniz says, for instance, that “being is innate in us—the knowledge of being is comprised in the knowledge that we have of ourselves.  Something like this holds of other general notions” (New Essays, p. 102).  Similarly, he states a few pages later that “reflection enables us to find the idea of substance within ourselves, who are substances” (New Essays, p. 105).  Many similar statements can be found elsewhere.  The intellectual concepts that we can discover in our souls, according to Leibniz, include not only being and substance, but also unity, similarity, sameness, pleasure, cause, perception, action, duration, doubting, willing, and reasoning, to name only a few.  In order to derive these concepts from our reflective self-awareness, we must apparently engage in abstraction: I am distinctly aware of myself as an agent, a substance, and a perceiver, for instance, and from this awareness I can abstract the ideas of action, substance, and perception in general.  This means that animals are inferior to us among other things in the following two ways: they cannot have distinct self-awareness, and they cannot abstract.  They would need both of these capacities in order to form intellectual concepts, and they would need the latter—that is, abstraction—in order to form sensible concepts.

Intellectual concepts are not the only things that minds can find in themselves: in addition, they are also able to discover eternal or general truths there, such as the axioms or principles of logic, metaphysics, ethics, and natural theology.  Like the intellectual concepts just mentioned, these general truths or principles cannot be derived from the senses and can thus be classified as innate ideas.  Leibniz says, for instance,

Above all, we find [in this I and in the understanding] the force of the conclusions of reasoning, which are part of what is called the natural light. … It is also by this natural light that the axioms of mathematics are recognized. … [I]t is generally true that we know [necessary truths] only by this natural light, and not at all by the experiences of the senses. (Ariew and Garber, p. 189)

Axioms and general principles, according to this passage, must come from the mind itself and cannot be acquired through sense experience.  Yet, as in the case of intellectual concepts, it is not easy for us to discover such general truths or principles in ourselves; instead, it takes effort or special attention.  It again appears to require the kind of attention to what is within us of which animals are not capable.  Because they lack this type of reflection, animals are “governed purely by examples from the senses” and “consequently can never arrive at necessary and general truths” (Strickland, p. 84).

b. Appetitions

Monads possess not only perceptions, or representations of the world they inhabit, but also appetitions.  These appetitions are the tendencies or inclinations of these monads to act, that is, to transition from one mental state to another.  The most familiar examples of appetitions are conscious desires, such as my desire to have a drink of water.  Having this desire means that I have some tendency to drink from the glass of water in front of me.  If the desire is strong enough, and if there are no contrary tendencies or desires in my mind that are stronger—for instance, the desire to win the bet that I can refrain from drinking water for one hour—I will attempt to drink the water.  This desire for water is one example of a Leibnizian appetition.  Yet, just as in the case of perceptions, only a very small portion of appetitions is conscious.  We are unaware of most of the tendencies that lead to changes in our perceptions.  For instance, I am aware neither of perceiving my hair growing, nor of my tendencies to have those perceptions.  Moreover, as in the case of perceptions, there are an infinite number of appetitions in any monad at any given time.  This is because, as seen, each monad represents the entire universe.  As a result, each monad constantly transitions from one infinitely complex perceptual state to another, reflecting all changes that take place in the universe.  The tendency that leads to a monad’s transition from one of these infinitely complex perceptual states to another is therefore also infinitely complex, or composed of infinitely many smaller appetitions.

The three types of monads—bare monads, souls, and minds—differ not only with respect to their perceptual or cognitive capacities, but also with respect to their appetitive capacities.  In fact, there are good reasons to think that three different types of appetitions correspond to the three types of perceptions mentioned above, that is, to perception, sensation, and rational perception.  After all, Leibniz distinguishes between appetitions of which we can be aware and those of which we cannot be aware, which he sometimes also calls ‘insensible appetitions’ or ‘insensible inclinations.’  He appears to further divide the type of which we can be aware into rational and non-rational appetitions.  This threefold division is made explicit in a passage from the New Essays:

There are insensible inclinations of which we are not aware.  There are sensible ones: we are acquainted with their existence and their objects, but have no sense of how they are constituted. … Finally there are distinct inclinations which reason gives us: we have a sense both of their strength and of their constitution. (p. 194)

According to this passage, then, Leibniz acknowledges the following three types of appetitions: (a) insensible or unconscious appetitions, (b) sensible or conscious appetitions, and (c) distinct or rational appetitions.

Even though Leibniz does not say so explicitly, he furthermore believes that bare monads have only unconscious appetitions, that animal souls additionally have conscious appetitions, and that only minds have distinct or rational appetitions.  Unconscious appetitions are tendencies such as the one that leads to my perception of my hair growing, or the one that prompts me unexpectedly to perceive the sound of my alarm in the morning.  All appetitions in bare monads are of this type; they are not aware of any of their tendencies.  An example of a sensible appetition, on the other hand, is an appetition for pleasure.  My desire for a piece of chocolate, for instance, is such an appetition: I am aware that I have this desire and I know what the object of the desire is, but I do not fully understand why I have it.  Animals are capable of this kind of appetition; in fact, many of their actions are motivated by their appetitions for pleasure.  Finally, an example of a rational appetition is the appetition for something that my intellect has judged to be the best course of action.  Leibniz appears to identify the capacity for this kind of appetition with the will, which, as we will see below, plays a crucial role in Leibniz’s theory of freedom.  What is distinctive of this kind of appetition is that whenever we possess it, we are not only aware of it and of its object, but also understand why we have it.  For instance, if I judge that I ought to call my mother and consequently desire to call her, Leibniz thinks, I am aware of the thought process that led me to make this judgment, and hence of the origins of my desire.

Another type of rational appetition is the type of appetition involved in reasoning.  As seen, Leibniz thinks that animals, because they can remember prior perceptions, are able to learn from experience, like the dog that learns to run away from sticks.  This sort of behavior, which involves a kind of inductive inference (see Deductive and Inductive Arguments), can be called a “shadow of reasoning,” Leibniz tells us (New Essays, p. 50).  Yet, animals are incapable of true—that is, presumably, deductive—reasoning, which, Leibniz tells us, “depends on necessary or eternal truths, such as those of logic, numbers, and geometry, which bring about an indubitable connection of ideas and infallible consequences” (Principles of Nature and Grace, section 5, in Ariew and Garber, 1989).  Only minds can reason in this stricter sense.

Some interpreters think that reasoning consists simply in very distinct perception.  Yet that cannot be the whole story.  First of all, reasoning must involve a special type of perception that differs from the perceptions of lower animals in kind, rather than merely in degree, namely abstract thought and the perception of eternal truths.  This kind of perception is not just more distinct; it has entirely different objects than the perceptions of non-rational souls, as we saw above.  Moreover, it seems more accurate to describe reasoning as a special kind of appetition or tendency than as a special kind of perception.  This is because reasoning is not just one perception, but rather a series of perceptions.  Leibniz for instance calls it “a chain of truths” (New Essays, p. 199) and defines it as “the linking together of truths” (Huggard, p. 73).  Thus, reasoning is not the same as perceiving a certain type of object, nor as perceiving an object in a particular fashion.  Rather, it consists mainly in special types of transitions between perceptions and therefore, according to Leibniz’s account of how monads transition from perception to perception, in appetitions for these transitions.  What a mind needs in order to be rational, therefore, are appetitions that one could call the principles of reasoning.  These appetitions or principles allow minds to transition, for instance, from the premises of an argument to its conclusion.  In order to conclude ‘Socrates is mortal’ from ‘All men are mortal’ and ‘Socrates is a man,’ for example, I not only need to perceive the premises distinctly, but I also need an appetition for transitioning from premises of a particular form to conclusions of a particular form.
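
In modern logical notation (a gloss that is not Leibniz’s own), the transition described here is an instance of a valid inference form:

```latex
\[
\frac{\forall x\,\bigl(\mathrm{Man}(x) \rightarrow \mathrm{Mortal}(x)\bigr)
      \qquad \mathrm{Man}(\mathrm{Socrates})}
     {\mathrm{Mortal}(\mathrm{Socrates})}
\]
```

On the reading sketched in this section, a rational appetition would be a tendency to pass from perceptions of premises of the form above the line to a perception of the corresponding conclusion below it.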

Leibniz states in several texts that our reasonings are based on two fundamental principles: the Principle of Contradiction and the Principle of Sufficient Reason.  Human beings also have access to several additional innate truths and principles, for instance those of logic, mathematics, ethics, and theology.  In virtue of these principles we have a priori knowledge of necessary connections between things, while animals can only have empirical knowledge of contingent, or merely apparent, connections.  The perceptions of animals, then, are not governed by the principles on which our reasonings are based; the closest an animal can come to reasoning is, as mentioned, engaging in empirical inference or induction, which is based not on principles of reasoning, but merely on the recognition and memory of regularities in previous experience.  This confirms that reasoning is a type of appetition: using, or being able to use, principles of reasoning cannot just be a matter of perceiving the world more distinctly.  In fact, these principles are not something that we acquire or derive from perceptions.  Instead, at least the most basic ones are innate dispositions for making certain kinds of transitions.

In connection with reasoning, it is important to note that even though Leibniz sometimes uses the term ‘thought’ for perceptions generally, he makes it clear in some texts that, strictly speaking, it belongs exclusively to minds because it is “perception joined with reason” (Strickland, p. 66; see also New Essays, p. 210).  This means that the ability to think in this sense, just like reasoning, is also something that is exclusive to minds, that is, something that distinguishes minds from animal souls.  Non-rational souls neither reason nor think, strictly speaking; they do, however, have perceptions.

The distinctive cognitive and appetitive capacities of the three types of monads can be summarized as follows:

  • Bare monads: only unconscious, confused perceptions; only insensible (unconscious) appetitions.
  • Animal souls: additionally sensation, that is, perceptions distinct enough to give rise to memory, pleasure, and pain; additionally sensible (conscious) appetitions, such as desires for pleasure.
  • Minds (rational souls): additionally rational perceptions, that is, self-consciousness, abstract thought, and knowledge of necessary truths; additionally rational appetitions, that is, the will and the appetitions involved in reasoning.

2. Freedom

One final capacity that sets human beings apart from non-rational animals is the capacity for acting freely.  This is mainly because Leibniz closely connects free agency with rationality: acting freely requires acting in accordance with one’s rational assessment of which course of action is best.  Hence, acting freely involves rational perceptions as well as rational appetitions.  It requires both knowledge of, or rational judgments about, the good, as well as the tendency to act in accordance with these judgments.  For Leibniz, the capacity for rational judgments is called ‘intellect,’ and the tendency to pursue what the intellect judges to be best is called ‘will.’  Non-human animals, because they do not possess intellects and wills, or the requisite type of perceptions and appetitions, lack freedom.  This also means, however, that most human actions are not free, because we only sometimes reason about the best course of action and act voluntarily, on the basis of our rational judgments.  Leibniz in fact stresses that in three quarters of their actions, human beings act just like animals, that is, without making use of their rationality (see Principles of Nature and Grace, section 5, in Ariew and Garber, 1989).

In addition to rationality, Leibniz claims, free actions must be self-determined and contingent (see e.g. Theodicy, section 288).  An action is self-determined—or spontaneous, as Leibniz often calls it—when its source is in the agent, rather than in another agent or some other external entity.  While all actions of monads are spontaneous in a general sense since, as we will see in section four, Leibniz denies all interaction among created substances, he may have a more demanding notion of spontaneity in mind when he calls it a requirement for freedom.  After all, when an agent acts on the basis of her rational judgment, she is not even subject to the kind of apparent influence of her body or of other creatures that is present, for instance, when someone pinches her and she feels pain.

In order to be contingent, on the other hand, the action cannot be the result of compulsion or necessitation.  This, again, is generally true for all actions of monads because Leibniz holds that all changes in the states of a creature are contingent.  Yet, there may again be an especially demanding sense in which free actions are contingent for Leibniz.  He often says that when a rational agent does something because she believes it to be best, the goodness she perceives, or her motives for acting, merely incline her towards action without necessitating action (see e.g. Huggard, p. 419; Fifth Letter to Clarke, sections 8-9; Ariew and Garber, p. 195; New Essays, p. 175).  Hence, Leibniz may be attributing a particular kind of contingency to free actions.

Even though Leibniz holds that free actions must be contingent, that is, that they cannot be necessary, he grants that they can be determined.  In fact, Leibniz vehemently rejects the notion that a world with free agents must contain genuine indeterminacy.  Hence, Leibniz is what we today call a compatibilist about freedom and determinism (see Free Will).  He believes that all actions, whether they are free or not, are determined by the nature and the prior states of the agent.  What is special about free actions, then, is not that they are undetermined, but rather that they are determined, among other things, by rational perceptions of the good.  We always do what we are most strongly inclined to do, for Leibniz, and if we are most strongly inclined by our judgment about the best course of action, we pursue that course of action freely.  The ability to act contrary even to one’s best reasons or motives, Leibniz contends, is not required for freedom, nor would it be worth having.   As Leibniz puts it in the New Essays, “the freedom to will contrary to all the impressions which may come from the understanding … would destroy true liberty, and reason with it, and would bring us down below the beasts” (p. 180).  In fact, being determined by our rational understanding of the good, as we are in our free actions, makes us godlike, because according to Leibniz, God is similarly determined by what he judges to be best.  Nothing could be more perfect and more desirable than acting in this way.

3. The Mill Argument

In several of his writings, Leibniz argues that purely material things such as brains or machines cannot possibly think or perceive.  Hence, Leibniz contends that materialists like Thomas Hobbes are wrong to think that they can explain mentality in terms of the brain.  This argument is without question among Leibniz’s most influential contributions to the philosophy of mind.  It is relevant not only to the question whether human minds might be purely material, but also to the question whether artificial intelligence is possible.  Because Leibniz’s argument against perception in material objects often employs a thought experiment involving a mill, interpreters refer to it as ‘the mill argument.’  There is considerable disagreement among recent scholars about the correct interpretation of this argument (see References and Further Reading).  The present section sketches one plausible way of interpreting Leibniz’s mill argument.

The most famous version of Leibniz’s mill argument occurs in section 17 of the Monadology:

Moreover, we must confess that perception, and what depends on it, is inexplicable in terms of mechanical reasons, that is, through shapes and motions.  If we imagine that there is a machine whose structure makes it think, sense, and have perceptions, we could conceive it enlarged, keeping the same proportions, so that we could enter into it, as one enters into a mill.  Assuming that, when inspecting its interior, we will only find parts that push one another, and we will never find anything to explain a perception.  And so, we should seek perception in the simple substance and not in the composite or in the machine.

To understand this argument, it is important to recall that Leibniz, like many of his contemporaries, views all material things as infinitely divisible.  As already seen, he holds that there are no smallest or most fundamental material elements, and every material thing, no matter how small, has parts and is hence complex.  Even if there were physical atoms—against which Leibniz thinks he has conclusive metaphysical arguments—they would still have to be extended, like all matter, and we would hence be able to distinguish between an atom’s left half and its right half.  The only truly simple things that exist are monads, that is, unextended, immaterial, mind-like things.  Based on this understanding of material objects, Leibniz argues in the mill passage that only immaterial entities are capable of perception because it is impossible to explain perception mechanically, or in terms of material parts pushing one another.

Unfortunately, Leibniz does not say explicitly why exactly he thinks there cannot be a mechanical explanation of perception.  Yet it becomes clear in other passages that for Leibniz perceiving has to take place in a simple thing.  This assumption, in turn, straightforwardly implies that matter—which, as seen, is complex—is incapable of perception.  This assumption, most likely, is what lies behind Leibniz’s mill argument.  But why does Leibniz claim that perception can only take place in simple things?  After all, if he did not have good reasons for this claim, it would not constitute a convincing starting point for his mill argument.

Leibniz’s reasoning appears to be the following.  Material things, such as mirrors or paintings, can represent complexity.  When I stand in front of a mirror, for instance, the mirror represents my body.  This is an example of the representation of one complex material thing in another complex material thing.  Yet, Leibniz argues, we do not call such a representation ‘perception’: the mirror does not “perceive” my body.  The reason this representation falls short of perception, Leibniz contends, is that it lacks the unity that is characteristic of perceptions: the top part of the mirror represents the top part of my body, and so on.  The representation of my body in the mirror is merely a collection of smaller representations, without any genuine unity.  When another person perceives my body, on the other hand, her representation of my body is a unified whole.  No physical thing can do better than the mirror in this respect: the only way material things can represent anything is through the arrangement or properties of their parts.  As a result, any such representation will be spread out over multiple parts of the representing material object and hence lack genuine unity.  It is arguably for this reason that Leibniz defines ‘perception’ as “the passing state which involves and represents a multitude in the unity or in the simple substance” (Monadology, section 14).

Leibniz’s mill argument, then, relies on a particular understanding of perception and of material objects.  Because all material objects are complex and because perceptions require unity, material objects cannot possibly perceive.  Any representation a machine, or a material object, could produce would lack the unity required for perception.  The mill example is supposed to illustrate this: even an extremely small machine, if it is purely material, works only in virtue of the arrangement of its parts.  Hence, it is always possible, at least in principle, to enlarge the machine.  When we imagine the machine thus enlarged, that is, when we imagine being able to distinguish the machine’s parts as we can distinguish the parts of a mill, we will realize that the machine cannot possibly have genuine perceptions.

Yet the basic idea behind Leibniz’s mill argument can be appealing even to those of us who do not share Leibniz’s assumptions about perception and material objects.  In fact, it appears to be a more general version of what is today called “the hard problem of consciousness,” that is, the problem of explaining how something physical could explain, or give rise to, consciousness.  While Leibniz’s mill argument is about perception generally, rather than conscious perception in particular, the underlying structure of the argument appears to be similar: mental states have characteristics—such as their unity or their phenomenal properties—that, it seems, cannot even in principle be explained physically.  There is an explanatory gap between the physical and the mental.

4. The Relation between Mind and Body

The mind-body problem is a central issue in the philosophy of mind.  It is, roughly, the problem of explaining how mind and body can causally interact.  That they interact seems exceedingly obvious: my mental states, such as for instance my desire for a cold drink, do seem capable of producing changes in my body, such as the bodily motions required for walking to the fridge and retrieving a bottle of water.  Likewise, certain physical states seem capable of producing changes in my mind: when I stub my toe on my way to the fridge, for instance, this event in my body appears to cause me pain, which is a mental state.  For Descartes and his followers, it is notoriously difficult to explain how mind and body causally interact.  After all, Cartesians are substance dualists: they believe that mind and body are substances of a radically different type (see Descartes: Mind-Body Distinction).  How could a mental state such as a desire cause a physical state such as a bodily motion, or vice versa, if mind and body have absolutely nothing in common?  This is the version of the mind-body problem that Cartesians face.

For Leibniz, the mind-body problem does not arise in exactly the way it arises for Descartes and his followers, because Leibniz is not a substance dualist.  We have already seen that, according to Leibniz, an animal or human being has a central monad, which constitutes its soul, as well as subordinate monads that are everywhere in its body.  In fact, Leibniz appears to hold that the body just is the collection of these subordinate monads and their perceptions (see e.g. Principles of Nature and Grace section 3), or that bodies result from monads (Ariew and Garber, p. 179).  After all, as already seen, he holds that purely material, extended things would not only be incapable of perception, but would also not be real because of their infinite divisibility.  The only truly real things, for Leibniz, are monads, that is, immaterial and indivisible substances.  This means that Leibniz, unlike Descartes, does not believe that there are two fundamentally different kinds of substances, namely physical and mental substances.  Instead, for Leibniz, all substances are of the same general type.  As a result, the mind-body problem may seem more tractable for Leibniz: if bodies have a semi-mental nature, there are fewer obvious obstacles to claiming that bodies and minds can interact with one another.

Yet, for complicated reasons that are beyond the scope of this article (but see Leibniz: Causation), Leibniz held that human minds and their bodies—as well as any created substances, in fact—cannot causally interact.  In this, he agrees with occasionalists such as Nicolas Malebranche.  Leibniz departs from occasionalists, however, in his positive account of the relation between mental and corresponding bodily events.  Occasionalists hold that God needs to intervene in nature constantly to establish this correspondence.  When I decide to move my foot, for instance, God intervenes and moves my foot accordingly, occasioned by my decision.  Leibniz, however, thinks that such interventions would constitute perpetual miracles and be unworthy of a God who always acts in the most perfect manner.  God arranged things so perfectly, Leibniz contends, that there is no need for these divine interventions.  Even though he endorses the traditional theological doctrine that God continually conserves all creatures in existence and concurs with their actions (see Leibniz: Causation), Leibniz stresses that all natural events in the created world are caused and made intelligible by the natures of created things.  In other words, Leibniz rejects the occasionalist doctrine that God is the only active, efficient cause, and that the laws of nature that govern natural events are merely God’s intentions to move his creatures around in a particular way.  Instead for Leibniz these laws, or God’s decrees about the ways in which created things should behave, are written into the natures of these creatures.  God not only decided how creatures should act, but also gave them natures and natural powers from which these actions follow.  To understand the regularities and events in nature, we do not need to look beyond the natures of creatures.  This, Leibniz claims, is much more worthy of a perfect God than the occasionalist world, in which natural events are not internally intelligible.

How, then, does Leibniz explain the correspondence between mental and bodily states if he denies that there is genuine causal interaction among finite things and also denies that God brings about the correspondence by constantly intervening?  Consider again the example in which I decide to get a drink from the fridge and my body executes that decision.  It may seem that unless there is a fairly direct link between my decision and the action—either a link supplied by God’s intervention, or by the power of my mind to cause bodily motion—it would be an enormous coincidence that my body carries out my decision.  Yet, Leibniz thinks there is a third option, which he calls ‘pre-established harmony.’  On this view, God created my body and my mind in such a way that they naturally, but without any direct causal links, correspond to one another.  God knew, before he created my body, that I would decide to get a cold drink, and hence made my body in such a way that it will, in virtue of its own nature, walk to the fridge and get a bottle of water right after my mind makes that decision.

In one text, Leibniz provides a helpful analogy for his doctrine of pre-established harmony.  Imagine two pendulum clocks that are in perfect agreement for a long period of time.  There are three ways to ensure this kind of correspondence between them: (a) establishing a causal link, such as a connection between the pendulums of these clocks, (b) asking a person constantly to synchronize the two clocks, and (c) designing and constructing these clocks so perfectly that they will remain perfectly synchronized without any causal links or adjustments (see Ariew and Garber, pp. 147-148).  Option (c), Leibniz contends, is superior to the other two options, and it is in this way that God ensures that the states of my mind correspond to the states of my body, or in fact, that the perceptions of any created substance harmonize with the perceptions of any other.  The world is arranged and designed so perfectly that events in one substance correspond to events in another substance even though they do not causally interact, and even though God does not intervene to adjust one to the other.  Because of his infinite wisdom and foreknowledge, God was able to pre-establish this mutual correspondence or harmony when he created the world, analogously to the way a skilled clockmaker can construct two clocks that perfectly correspond to one another for a period of time.

5. References and Further Reading

a. Primary Sources in English Translation

  • Ariew, Roger and Daniel Garber, eds. Philosophical Essays. Indianapolis: Hackett, 1989.
    • Contains translations of many of Leibniz’s most important shorter writings such as the Monadology, the Principles of Nature and Grace, the Discourse on Metaphysics, and excerpts from Leibniz’s correspondence, to name just a few.
  • Ariew, Roger, ed.  Correspondence [between Leibniz and Clarke]. Indianapolis: Hackett, 2000.
    • A translation of Leibniz’s correspondence with Samuel Clarke, which touches on many important topics in metaphysics and philosophy of mind.
  • Francks, Richard and Roger S. Woolhouse, eds. Leibniz's 'New System' and Associated Contemporary Texts. Oxford: Oxford University Press, 1997.
    • Contains English translations of additional short texts.
  • Francks, Richard and Roger S. Woolhouse, eds. Philosophical Texts. Oxford: Oxford University Press, 1998.
    • Contains English translations of additional short texts.
  • Huggard, E. M., ed. Theodicy: Essays on the Goodness of God, the Freedom of Man and the Origin of Evil. La Salle: Open Court, 1985.
    • Translation of the only philosophical monograph Leibniz published in his lifetime, which contains many important discussions of free will.
  • Lodge, Paul, ed. The Leibniz–De Volder Correspondence: With Selections from the Correspondence between Leibniz and Johann Bernoulli. New Haven: Yale University Press, 2013.
    • An edition, with English translations, of Leibniz’s correspondence with De Volder, which is a very important source of information about Leibniz’s mature metaphysics.
  • Loemker, Leroy E., ed. Philosophical Papers and Letters. Dordrecht: D. Reidel, 1970.
    • Contains English translations of additional short texts.
  • Look, Brandon and Donald Rutherford, eds. The Leibniz–Des Bosses Correspondence. New Haven: Yale University Press, 2007.
    • An edition, with English translations, of Leibniz’s correspondence with Des Bosses, which is another important source of information about Leibniz’s mature metaphysics.
  • Parkinson, George Henry Radcliffe and Mary Morris, eds. Philosophical Writings. London: Everyman, 1973.
    • Contains English translations of additional short texts.
  • Remnant, Peter and Jonathan Francis Bennett, eds. New Essays on Human Understanding. Cambridge: Cambridge University Press, 1996.
    • Translation of Leibniz’s section-by-section response to Locke’s Essay Concerning Human Understanding, written in the form of a dialogue between the two fictional characters Philalethes and Theophilus, who represent Locke’s and Leibniz’s views, respectively.
  • Rescher, Nicholas, ed. G.W. Leibniz's Monadology: An Edition for Students. Pittsburgh: University of Pittsburgh Press, 1991.
    • An edition, with English translation, of the Monadology, with commentary and a useful collection of parallel passages from other Leibniz texts.
  • Strickland, Lloyd H., ed. The Shorter Leibniz Texts: A Collection of New Translations. London: Continuum, 2006.
    • Contains English translations of additional short texts.

b. Secondary Sources

  • Adams, Robert Merrihew. Leibniz: Determinist, Theist, Idealist. New York: Oxford University Press, 1994.
    • One of the most influential and rigorous works on Leibniz’s metaphysics.
  • Borst, Clive. "Leibniz and the Compatibilist Account of Free Will." Studia Leibnitiana 24.1 (1992): 49-58.
    • About Leibniz’s views on free will.
  • Brandom, Robert. "Leibniz and Degrees of Perception." Journal of the History of Philosophy 19 (1981): 447-79.
    • About Leibniz’s views on perception and perceptual distinctness.
  • Davidson, Jack. "Imitators of God: Leibniz on Human Freedom." Journal of the History of Philosophy 36.3 (1998): 387-412.
    • Another helpful article about Leibniz’s views on free will and on the ways in which human freedom resembles divine freedom.
  • Davidson, Jack. "Leibniz on Free Will." The Continuum Companion to Leibniz. Ed. Brandon Look. London: Continuum, 2011. 208-222.
    • Accessible general introduction to Leibniz’s views on freedom of the will.
  • Duncan, Stewart. "Leibniz's Mill Argument Against Materialism." Philosophical Quarterly 62.247 (2011): 250-72.
    • Helpful discussion of Leibniz’s mill argument.
  • Garber, Daniel. Leibniz: Body, Substance, Monad. New York: Oxford University Press, 2009.
    • A thorough study of the development of Leibniz’s metaphysical views.
  • Gennaro, Rocco J. "Leibniz on Consciousness and Self-Consciousness." New Essays on the Rationalists. Eds. Rocco J. Gennaro and C. Huenemann. Oxford: Oxford University Press, 1999. 353-371.
    • Discusses Leibniz’s views on consciousness and highlights the advantages of reading Leibniz as endorsing a higher-order thought theory of consciousness.
  • Jolley, Nicholas. Leibniz. London; New York: Routledge, 2005.
    • Good general introduction to Leibniz’s philosophy; includes chapters on the mind and freedom.
  • Jorgensen, Larry M. "Leibniz on Memory and Consciousness." British Journal for the History of Philosophy 19.5 (2011a): 887-916.
    • Elaborates on Jorgensen (2009) and discusses the role of memory in Leibniz’s theory of consciousness.
  • Jorgensen, Larry M. "Mind the Gap: Reflection and Consciousness in Leibniz." Studia Leibnitiana 43.2 (2011b): 179-95.
    • About Leibniz’s account of reflection and reasoning.
  • Jorgensen, Larry M. "The Principle of Continuity and Leibniz's Theory of Consciousness." Journal of the History of Philosophy 47.2 (2009): 223-48.
    • Argues against ascribing a higher-order theory of consciousness to Leibniz.
  • Kulstad, Mark. Leibniz on Apperception, Consciousness, and Reflection. Munich: Philosophia, 1991.
    • Influential, meticulous study of Leibniz’s views on consciousness.
  • Kulstad, Mark. "Leibniz, Animals, and Apperception." Studia Leibnitiana 13 (1981): 25-60.
    • A shorter discussion of some of the issues in Kulstad (1991).
  • Lodge, Paul, and Marc E. Bobro. "Stepping Back Inside Leibniz's Mill." The Monist 81.4 (1998): 553-72.
    • Discusses Leibniz’s mill argument.
  • Lodge, Paul. "Leibniz's Mill Argument Against Mechanical Materialism Revisited." Ergo (2014).
    • Further discussion of Leibniz’s mill argument.
  • McRae, Robert. Leibniz: Perception, Apperception, and Thought. Toronto: University of Toronto Press, 1976.
    • An important and still helpful, even if somewhat dated, study of Leibniz’s philosophy of mind.
  • Murray, Michael J. "Spontaneity and Freedom in Leibniz." Leibniz: Nature and Freedom. Eds. Donald Rutherford and Jan A. Cover. Oxford: Oxford University Press, 2005. 194-216.
    • Discusses Leibniz’s views on free will and self-determination, or spontaneity.
  • Phemister, Pauline. "Leibniz, Freedom of Will and Rationality." Studia Leibnitiana 26.1 (1991): 25-39.
    • Explores the connections between rationality and freedom in Leibniz.
  • Rozemond, Marleen. "Leibniz on the Union of Body and Soul." Archiv für Geschichte der Philosophie 79.2 (1997): 150-78.
    • About the mind-body problem and pre-established harmony in Leibniz.
  • Rozemond, Marleen. "Mills Can't Think: Leibniz's Approach to the Mind-Body Problem." Res Philosophica 91.1 (2014): 1-28.
    • Another helpful discussion of the mill argument.
  • Savile, Anthony. Routledge Philosophy Guidebook to Leibniz and the Monadology. New York: Routledge, 2000.
    • Very accessible introduction to Leibniz’s Monadology.
  • Simmons, Alison. "Changing the Cartesian Mind: Leibniz on Sensation, Representation and Consciousness." The Philosophical Review 110.1 (2001): 31-75.
    • Insightful discussion of the ways in which Leibniz’s philosophy of mind differs from the Cartesian view; also argues that Leibnizian consciousness consists in higher-order perceptions.
  • Sotnak, Eric. "The Range of Leibnizian Compatibilism." New Essays on the Rationalists. Eds. Rocco J. Gennaro and C. Huenemann. Oxford: Oxford University Press, 1999. 200-223.
    • About Leibniz’s theory of freedom.
  • Swoyer, Chris. "Leibnizian Expression." Journal of the History of Philosophy 33 (1995): 65-99.
    • About Leibnizian perception.
  • Wilson, Margaret Dauler. "Confused Vs. Distinct Perception in Leibniz: Consciousness, Representation, and God's Mind." Ideas and Mechanism: Essays on Early Modern Philosophy. Princeton: Princeton University Press, 1999. 336-352.
    • About Leibnizian perception as well as perceptual distinctness.


Author Information

Julia Jorati
Email: jorati.1@osu.edu
The Ohio State University
U. S. A.

The Computational Theory of Mind

The Computational Theory of Mind (CTM) claims that the mind is a computer, so the theory is also known as computationalism. It is generally assumed that CTM is the main working hypothesis of cognitive science.

CTM is often understood as a specific variant of the Representational Theory of Mind (RTM), which claims that cognition is the manipulation of representations. The most popular variant of CTM, classical CTM, or simply CTM without any qualification, is related to the Language of Thought Hypothesis (LOTH), which has been forcefully defended by Jerry Fodor. However, there are several other computational accounts of the mind that either reject LOTH—notably connectionism and several accounts in contemporary computational neuroscience—or do not subscribe to RTM at all. In addition, some authors explicitly disentangle the question of whether the mind is computational from the question of whether it manipulates representations. There seems to be no inconsistency in maintaining that cognition requires computation without subscribing to representationalism, although most proponents of CTM agree that the account of cognition in terms of computation over representations is the most cogent. (But this need not mean that representation is reducible to computation.)

One of the basic philosophical arguments for CTM is that it can make clear how thought and content are causally relevant in the physical world. It does this by saying that thoughts are syntactic entities that are computed over: their form makes them causally relevant in just the same way that the form of fragments of source code makes them causally relevant in a computer. This basic argument may be made more specific in various ways. For example, Allen Newell couched it in terms of the physical symbol system hypothesis, according to which being a physical symbol system (a physical computer) is a necessary and sufficient condition of thinking. John Haugeland framed the claim in formalist terms: if you take care of the syntax, the semantics will take care of itself. Daniel Dennett, in a slightly different vein, claims that while semantic engines are impossible, syntactic engines can approximate them quite satisfactorily.

This article focuses only on specific problems with the Computational Theory of Mind (CTM), while for the most part leaving RTM aside. There are four main sections. In the first section, the three most important variants of CTM are introduced: classical CTM, connectionism, and computational neuroscience. The second section discusses the most important conceptions of computational explanation in cognitive science, namely functionalism and mechanism. The third section introduces the skeptical arguments against CTM raised by Hilary Putnam, and presents several accounts of implementation (or physical realization) of computation. Common objections to CTM are listed in the fourth section.

Table of Contents

  1. Variants of Computationalism
    1. Classical CTM
    2. Connectionism
    3. Computational Neuroscience
  2. Computational Explanation
    1. Functionalism
    2. Mechanism
  3. Implementation
    1. Putnam and Searle against CTM
    2. Semantic Account
    3. Causal Account
    4. Mechanistic Account
  4. Other objections to CTM
  5. Conclusion
  6. References and Further Reading

1. Variants of Computationalism

The generic claim that the mind is a computer may be understood in various ways, depending on how the basic terms are understood. In particular, some theorists have claimed that only cognition is computation, while emotional processes are not computational (Harnish 2002, 6), and some theorists explain neither motor nor sensory processes in computational terms (Newell and Simon 1972). These differences are relatively minor compared to the variety of ways in which “computation” is understood.

The main question here is just how much of the mind’s functioning is computational, and the crux of this question lies in understanding exactly what computation is. In its most generic reading, computation is equated with information processing; in stronger versions, it is explicated in terms of digital effective computation, which is assumed in the classical version of CTM; in other versions, analog or hybrid computation is admissible. Although Alan Turing defined effective computation using his notion of a machine (later called a ‘Turing machine’, see section 1.a below), there is a lively debate in the philosophy of mathematics as to whether all physical computation is Turing-equivalent. Even if all currently known mathematical theories of effective computation (for example, lambda calculus, Markov algorithms, and partial recursive functions) turn out to be equivalent to Turing-machine computation, it remains an open question whether they are adequate formalizations of the intuitive notion of computation. Some theorists, for example, claim that it is physically possible for hypercomputational processes (that is, processes that compute functions that a Turing machine cannot compute) to exist (Copeland 2004). For this reason, the assumption, frequently made in debates over computationalism, that CTM must presuppose Turing computation is controversial.

One can distinguish several basic kinds of computation, such as digital, analog, and hybrid. These are traditionally associated with the most popular variants of CTM as follows: classical CTM assumes digital computation; connectionism may also involve analog computation; and several theories in computational neuroscience assume hybrid analog/digital processing.

a. Classical CTM

Classical CTM is understood as the conjunction of RTM (and, in particular, LOTH) and the claim that cognition is digital effective computation. The best-known account of digital, effective computation was given by Alan Turing in terms of abstract machines (which were originally intended to be conceptual tools rather than physical entities, though they are sometimes built physically simply for fun). Such abstract machines can only do what a human computer would do mechanically, given a potentially indefinite amount of paper, a pencil, and a list of rote rules. More specifically, a Turing machine (TM) has at least one tape, on which symbols from a finite alphabet can appear; the tape is read, written, and erased by a machine head, which can also move left or right along it. The functioning of the machine is described by the machine table instructions, each of which includes five pieces of information: (1) the current state of the TM; (2) the symbol read from the tape; (3) the symbol written on the tape; (4) left or right movement of the head; (5) the next state of the TM. The machine table has to be finite, and the number of states is also finite. In contrast, the length of the tape is potentially unbounded.

As it turns out, all known effective algorithms (that is, algorithms that halt, necessarily ending their operation with the expected result) can be encoded as lists of instructions for Turing machines. For example, a basic Turing machine can be built to perform logical negation of an input propositional letter. The alphabet may consist of all 26 Latin letters, a blank symbol, and a tilde. The machine table instructions then need to specify the following operations: if the head scanner is at the tilde, erase the tilde (this effectively realizes the double negation rule); if the head scanner is at a letter and the state of the machine is not “1”, move the head left and change the state of the machine to 1; if the state is “1” and the head is at the blank symbol, write the tilde. (Note: this list of instructions is vastly simplified for presentation purposes. In reality, it would be necessary to rewrite symbols on the tape when inserting the tilde and to decide when to stop operation; based on the current list, the machine would simply cycle infinitely.) Writing Turing machine programs is actually rather time-consuming and useful only for purely theoretical purposes, but all other digital effective computational formalisms are essentially similar in requiring (1) a finite number of different symbols in what corresponds to a Turing machine alphabet (digitality); and (2) a finite number of steps from the beginning to the end of operation (effectiveness). (Correspondingly, one can introduce hypercomputation by positing an infinite number of symbols in the alphabet, an infinite number of states or steps in the operation, or by introducing randomness in the execution of operations.) Note that digitality is not equivalent to binary code; it is just technologically easier to produce physical systems responsive to two states rather than ten. Early computers operated, for example, on decimal rather than binary code (Von Neumann 1958).
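By way of illustration, here is a minimal sketch, in Python, of a negation machine of the kind just described, with the instruction list corrected so that it halts. The state names, the blank symbol, and the restriction to a single letter are conventions assumed for presentation, not part of Turing’s formalism.

```python
# A minimal Turing machine sketch for propositional negation. The control
# is "hard-wired" as conditionals; each clause is one machine table entry.
BLANK = "_"

def step(state, symbol):
    """(state, scanned symbol) -> (symbol to write, head move, next state)."""
    if state == "start" and symbol == "~":
        return BLANK, 0, "halt"        # erase the tilde: double negation
    if state == "start" and symbol != BLANK:
        return symbol, -1, "insert"    # a letter: move left to make room
    if state == "insert" and symbol == BLANK:
        return "~", 0, "halt"          # write the tilde and stop
    raise ValueError("no instruction for this configuration")

def negate(formula):
    tape = dict(enumerate(formula))    # the potentially unbounded tape
    head, state = 0, "start"
    while state != "halt":             # effectiveness: finitely many steps
        write, move, state = step(state, tape.get(head, BLANK))
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape) if tape[i] != BLANK)

print(negate("p"))   # prints "~p"
print(negate("~p"))  # prints "p"
```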

There is a particularly important variant of the Turing machine that played a seminal role in justifying CTM: the universal Turing machine. A Turing machine is a formally defined, mathematical entity, and hence has a unique description that identifies it. Since such descriptions can be encoded on the tape of another TM, they can be operated upon, and these operations can be made to conform to the definition of the first TM. In this way, a TM that has the encoding of any other TM on its input tape will act accordingly, faithfully simulating the other TM. Such a machine is called universal. The notion of universality is very important in the mathematical theory of computability, as the universal TM is hypothesized to be able to compute all effectively computable mathematical functions. In addition, the idea of using a description of one TM to determine the functioning of another gave rise to the idea of programmable computers. At the same time, flexibility is supposed to be the hallmark of general intelligence, and many theorists supposed that this flexibility could be explained by universality (Newell 1980). This gave the universal TM a special role in CTM, one that motivated an analogy between the mind and the computer: both were supposed to solve problems whose nature cannot be exactly predicted (Apter 1970).
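The idea that a machine description is itself data can be conveyed with a small sketch: the simulator below is fixed once and for all, while the machine table it executes is an ordinary data structure that can be swapped out, loosely mirroring how a universal TM reads another machine’s description from its tape. The encoding of the table is hypothetical; it re-expresses the negation machine from the previous sketch.

```python
# A fixed simulator (the "universal" part) that executes any machine table
# supplied as data.
def simulate(table, tape_string, blank="_"):
    tape = dict(enumerate(tape_string))
    head, state = 0, "start"
    while state != "halt":
        write, move, state = table[(state, tape.get(head, blank))]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape) if tape[i] != blank)

# The negation machine, this time as data rather than code:
NEGATION_TABLE = {
    ("start", "~"): ("_", 0, "halt"),
    ("start", "p"): ("p", -1, "insert"),
    ("insert", "_"): ("~", 0, "halt"),
}
print(simulate(NEGATION_TABLE, "~p"))  # prints "p"
```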

These points notwithstanding, the analogy between the universal TM and the mind is not necessary to prove classical CTM true. For example, it may turn out that human memory is essentially much more bounded than the tape of the TM. In addition, the significance of the TM in modeling cognition is not obvious: the universal TM was never used directly to write computational models of cognitive tasks, and its role may be seen as merely instrumental in analyzing the computational complexity of the algorithms posited to explain these tasks. Some theorists question whether anything at all hinges upon the notion of equivalence between the mind’s information-processing capabilities and the Turing machine (Sloman 1996)—CTM may leave open the question whether all physical computation is Turing-equivalent, or it might even embrace hypercomputation.

The first digital model of the mind was (probably) presented by Warren McCulloch and Walter Pitts (1943), who suggested that the operation of the brain’s neurons essentially corresponds to logical connectives (in other words, neurons were equated with what were later called ‘logical gates’—the basic building blocks of contemporary digital integrated circuits). In philosophy, the first avowal of CTM is usually linked with Hilary Putnam (1960), even though that paper does not explicitly assert that the mind is equivalent to a Turing machine but rather uses the concept to defend Putnam’s functionalism. The classical CTM also became influential in early cognitive science (Miller, Galanter, and Pribram 1967).
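The spirit of the McCulloch-Pitts proposal can be conveyed in a few lines: a unit with binary inputs, fixed weights, and a threshold realizes the basic logical connectives, depending on how the weights and threshold are set. The particular values below are illustrative choices, not taken from the 1943 paper.

```python
# A McCulloch-Pitts unit: fires (outputs 1) when the weighted sum of its
# binary inputs reaches the threshold.
def mp_unit(inputs, weights, threshold):
    return int(sum(i * w for i, w in zip(inputs, weights)) >= threshold)

AND = lambda a, b: mp_unit((a, b), (1, 1), threshold=2)
OR = lambda a, b: mp_unit((a, b), (1, 1), threshold=1)
NOT = lambda a: mp_unit((a,), (-1,), threshold=0)

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and NOT(1) == 0
```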

In 1975, Jerry Fodor linked CTM with LOTH. He argued that cognitive representations are tokens of the Language of Thought and that the mind is a digital computer operating on these tokens. Fodor’s forceful defense of LOTH and CTM as inextricably linked prompted many cognitive scientists and philosophers to equate the two. In Fodor’s version, CTM furnishes psychology with the proper means for dealing with the question of how thought, framed in terms of propositional attitudes, is possible. Propositional attitudes are understood as relations of the cognitive agent to the tokens in its LOT, and the operations on these tokens are syntactic, or computational. In other words, the symbols of LOT are transformed by computational rules, which are usually supposed to be inferential. For this reason, classical CTM is also dubbed symbolic CTM, and the existence of symbol transformation rules is supposed to be a distinguishing feature of this approach. However, the very notion of a symbol is used differently by various authors: some mean entities equivalent to symbols on the tape of the TM; some think of physically distinguishable states, as in Newell’s physical symbol system hypothesis (Newell’s symbols, roughly speaking, point to the values of some variables); whereas others frame them as tokens in LOT. For this reason, major confusion over the notion of the symbol is prevalent in the current debate (Steels 2008).

The most compelling case for classical CTM can be made by showing its aptitude for dealing with abstract thinking, rational reasoning, and language processing. For example, Fodor argued that the productivity of language (the capacity to produce indefinitely many different sentences) can be explained only by compositionality, and compositionality is a feature of rich symbol systems, similar to natural language. (Another argument is related to systematicity; see (Aizawa 2003).) Classical systems, such as production systems, excel in simulating human performance in logical and mathematical domains. Production systems contain production rules, which are, roughly speaking, rules of the form “if condition X is satisfied, do Y”; usually thousands of such rules are concurrently active in a production system (for more information on production systems, see (Newell 1990; Anderson 1983)).
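A toy forward-chaining sketch can illustrate the basic “if condition X is satisfied, do Y” control structure. The rule contents and fact names below are invented for illustration; cited architectures such as those of Newell and Anderson are, of course, vastly more sophisticated.

```python
# Working memory is a set of facts; each production pairs a condition
# (a set of facts that must all be present) with a fact to add when it fires.
def forward_chain(facts, productions):
    fired = True
    while fired:
        fired = False
        for condition, conclusion in productions:
            if condition <= facts and conclusion not in facts:
                facts.add(conclusion)   # the rule fires
                fired = True
    return facts

productions = [
    ({"socrates is a man"}, "socrates is mortal"),
    ({"socrates is mortal"}, "socrates will die"),
]
print(forward_chain({"socrates is a man"}, productions))
```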

In his later writings, however, Fodor (2001) argued that only peripheral (that is, mostly perceptual and modular) processes are computational, in contradistinction to central cognitive processes, which, owing to their holism, cannot be explained computationally (or in any other way, really). This pessimism about classical CTM seems to contrast with the successes of the classical approach in its traditional domains.

Classical CTM is silent about the neural realization of symbol systems, and for this reason it has been criticized by connectionists as biologically implausible. For example, Miller et al. (1967) supposed that there is a specific cognitive level that is best described as corresponding to reasoning and thinking, rather than to any lower-level neural processing. Similar claims have been framed in terms of an analogy between the software/hardware distinction and the mind/brain distinction. Critics stress that the analogy is relatively weak and that it is neurally quite implausible. In addition, perceptual and motor functioning does not seem to fit the symbolic paradigm of cognitive science.

b. Connectionism

In contrast to classical CTM, connectionism is usually presented as a more biologically plausible variant of computation. Although some artificial neural networks (ANNs) are vastly idealized (for an evaluation of the neural plausibility of typical ANNs, see (Bechtel and Abrahamsen 2002, sec. 2.3)), many researchers consider them much more realistic than rule-based production systems. Connectionist systems do well in modeling perceptual and motor processes, which are much harder to model symbolically.

Some early ANNs are clearly digital (for example, the early proposal of McCulloch and Pitts, see section 1.a above, is both a neural network and a digital system), while some modern networks are supposed to be analog. In particular, the connection weights are continuous values, and even if these networks are usually simulated on digital computers, they are supposed to implement analog computation. Here an interesting epistemological problem arises: because all measurement is of finite precision, we can never be sure whether a measured value is actually continuous or discrete; the discreteness may just be a feature of the measuring apparatus. For this reason, continuous values are always theoretically posited rather than empirically discovered, as there is no way to decide empirically whether a given value is actually discrete or not. Having said that, there might be compelling reasons in some domains of science to assume that measured values should be mathematically described as real numbers rather than approximated digitally. (Note that a Turing machine cannot compute all real numbers, though it can approximate any computable real number to any desired degree; and, as the Nyquist-Shannon sampling theorem shows, a band-limited continuous signal can be fully reconstructed from discrete samples.)

Importantly, the relationship between connectionism and RTM is more debatable than in the case of classical CTM. Some proponents of connectionist models are anti-representationalists or eliminativists: the notion of representation, according to them, can be discarded in connectionist cognitive science. Others claim that the mention of representation in connectionism is at best honorific (for an extended argument, see (Ramsey 2007)). Nevertheless, the position that connectionist networks are representational as a whole, by being homomorphic to their subject domain, has been forcefully defended (O’Brien and Opie 2006; O’Brien and Opie 2009). There are, in short, important differences among various connectionist models in the way they explain cognition.

In simpler models, the nodes of artificial neural networks may be treated as atomic representations (for example, as individual concepts), and they are usually called ‘symbolic’ for that very reason. However, such nodes represent only by fiat: it is the modeler who decides what they represent. For this reason, they do not seem biologically plausible, though some might argue that, at least in principle, individual neurons may represent complex features: in biological brains, so-called grandmother cells do exactly that (Bowers 2009; Gross 2002; Konorski 1967). More complex connectionist models do not assign individual representations to individual nodes; instead, a representation is distributed over multiple nodes that may be activated to different degrees. Such models may plausibly implement the prototype theory of concepts (Wittgenstein 1953; Rosch and Mervis 1975). Distributed representation therefore seems much more biologically and psychologically plausible to proponents of the prototype theory (though this theory is itself debated—see (Machery 2009) for a critical review of theories of concepts in psychology).
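The contrast can be made vivid with a small sketch: in a localist scheme each concept gets its own node, so distinct concepts never overlap, whereas in a distributed scheme concepts are graded patterns over shared feature nodes, so similar concepts overlap by degrees. The feature labels and activation values below are invented purely for illustration.

```python
import numpy as np

# Localist coding: one node per concept; any two concepts are orthogonal.
localist = {"cat": np.array([1.0, 0.0]), "dog": np.array([0.0, 1.0])}

# Distributed coding: concepts as graded activations over shared feature
# nodes (here labeled "furry", "barks", "purrs" for readability).
distributed = {"cat": np.array([0.9, 0.0, 0.8]),
               "dog": np.array([0.9, 0.9, 0.1])}

def similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(similarity(localist["cat"], localist["dog"]))        # exactly 0.0
print(similarity(distributed["cat"], distributed["dog"]))  # graded overlap
```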

The proponents of classical CTM have objected to connectionism by pointing out that distributed representations do not seem to explain the productivity and systematicity of cognition, as these representations are not compositional (Fodor and Pylyshyn 1988). Fodor and Pylyshyn present connectionists with the following dilemma: if representations in ANNs are compositional, then ANNs are mere implementations of classical systems; if not, they are not plausible models of higher cognition. Obviously, both horns of the dilemma are unattractive for connectionism, and the argument has sparked a lively debate. (For a review, see Connectionism and (Bechtel and Abrahamsen 2002, chap. 6).) In short, some reject the premise that higher cognition is actually as systematic and productive as Fodor and Pylyshyn assume, while others defend the view that implementing a compositional symbolic system in an ANN does not render the ANN mere uninteresting technical gadgetry, because further aspects of cognitive processes can be explained this way.

In contemporary cognitive modeling, ANNs have become standard tools (see, for example, (Lewandowsky and Farrell 2011)). They are also prevalent in computational neuroscience, but there are some important hybrid digital/analog systems in the latter discipline that deserve separate treatment.

c. Computational Neuroscience

Computational neuroscience employs many diverse methods, and it is hard to find modeling techniques applicable to a wide range of task domains. Yet it has been argued that, in general, computation in the brain is neither completely analog nor completely digital (Piccinini and Bahar 2013). This is because neurons, on the one hand, seem to be digital, since they spike only when the input signal exceeds a certain threshold (the continuous input value thus becomes discrete), while, on the other, their spiking forms continuous patterns in time. For this reason, it is customary to describe the functioning of spiking neurons both as dynamical systems, in which case they are represented in terms of continuous parameters evolving in time in a multi-dimensional space (the mathematical representation takes the form of differential equations), and as networks of information-processing elements (usually in a way similar to connectionism). Hybrid analog/digital systems are also often postulated as situated in different parts of the brain. For example, the prefrontal cortex is said to manifest bi-stable behavior and gating (O’Reilly 2006), which is typical of digital systems.
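A standard textbook idealization, the leaky integrate-and-fire neuron, shows this hybrid character in miniature: the membrane potential evolves continuously, but the output is an all-or-none spike once a threshold is crossed. The parameter values in the sketch below are arbitrary and purely illustrative.

```python
# Membrane potential integrates the input continuously (analog) and is
# reset whenever it crosses the threshold, emitting a discrete spike.
def integrate_and_fire(current, dt=1.0, tau=10.0, threshold=1.0, steps=60):
    v, spike_times = 0.0, []
    for t in range(steps):
        v += (dt / tau) * (current - v)   # leaky integration
        if v >= threshold:                # discrete, all-or-none event
            spike_times.append(t)
            v = 0.0                       # reset after the spike
    return spike_times

print(integrate_and_fire(1.5))  # a regular spike train for constant input
```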

Unifying frameworks in computational neuroscience are relatively rare. Of special interest are the Bayesian brain theory and the Neural Engineering Framework (Eliasmith and Anderson 2003). The Bayesian brain theory has become one of the major theories of brain functioning: it assumes that the brain’s main function is to predict probable outcomes (for example, causes of sensory stimulation) based on its earlier sensory input. One major theory of this kind is the free-energy theory (Friston, Kilner, and Harrison 2006; Friston and Kiebel 2011). This theory presupposes that the brain uses hierarchical predictive coding, an efficient way to deal with probabilistic reasoning (which is known to be computationally hard; this is one of the major criticisms of the approach—it may even turn out that predictive coding is not Bayesian at all, compare (Blokpoel, Kwisthout, and Van Rooij 2012)). Predictive coding (also called predictive processing) is thought by Andy Clark to be a unifying theory of the brain (Clark 2013): brains predict future (or causes of) sensory input in a top-down fashion and minimize the error of such predictions, either by changing their predictions about sensory input or by acting upon the world. However, as critics of this line of research have noted, predictive coding models tend to lack plausible neural implementation (usually they lack any implementation and remain sketchy; compare (Rasmussen and Eliasmith 2013)). Some suggest that the lack of implementation afflicts Bayesian models in general (Jones and Love 2011).
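The core idea of error-driven prediction can be caricatured in a few lines: an internal estimate of a hidden cause is nudged in proportion to the prediction error it generates. This is a deliberately crude sketch of the general idea, not an implementation of any of the cited models.

```python
# Update an estimate by a fraction of the prediction error it produced.
def update(estimate, observation, learning_rate=0.1):
    prediction_error = observation - estimate
    return estimate + learning_rate * prediction_error

estimate = 0.0
for observation in [1.0] * 20:        # a persistently surprising input
    estimate = update(estimate, observation)
print(round(estimate, 3))             # approaches 1.0 as error is minimized
```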

The Neural Engineering Framework (NEF) differs from the predictive brain approach in two respects: it does not posit a single function for the brain, and it offers detailed, biologically plausible models of cognitive capacities. A recent version (Eliasmith 2013) features the world’s largest functional brain model. The main principles of the NEF are: (1) neural representations are understood as combinations of nonlinear encoding and optimal linear decoding (this includes temporal and population representations); (2) transformations of neural representations are functions of the variables represented by a population; and (3) neural dynamics are described with neural representations as control-theoretic state variables. (‘Transformation’ is the term for what would traditionally be called computation.) NEF models are at once representational, computational, and dynamical, and they use control theory (which is mathematically equivalent to dynamical systems theory). Of special interest is that the NEF enables the building of plausible architectures that tackle symbolic problems. For example, a 2.5-million-neuron model of the brain (called ‘Spaun’) has been built that is able to perform eight diverse tasks (Eliasmith et al. 2012). Spaun features so-called semantic pointers, which can be seen as elements of a compressed neural vector space and which enable the execution of higher-cognition tasks. At the same time, NEF models are usually less idealized than classical CTM models, and they do not presuppose that the brain is as systematic and compositional as Fodor and Pylyshyn claim; they deliver the required performance without positing an architecture that is entirely reducible to a classical production system.
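One way to get a feel for compressed vector representations is through circular convolution, a binding operation common in vector-symbolic approaches related to semantic pointers. The sketch below is not the NEF implementation itself; it merely binds a “role” vector to a “filler” vector into a single fixed-size trace and then approximately recovers the filler.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_pointer(n=512):
    v = rng.standard_normal(n)
    return v / np.linalg.norm(v)

def bind(a, b):      # circular convolution via the FFT
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

def unbind(c, a):    # circular correlation approximately inverts binding
    return np.fft.irfft(np.fft.rfft(c) * np.fft.rfft(a).conj(), n=len(c))

role, filler = random_pointer(), random_pointer()
trace = bind(role, filler)                  # a compressed, fixed-size vector
recovered = unbind(trace, role)
print(round(float(recovered @ filler), 2))  # close to 1: filler recovered
```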

2. Computational Explanation

The main aim of computational modeling in cognitive science is to explain and predict mental phenomena. (In neuroscience and psychiatry, therapeutic intervention is another major aim of inquiry.) There are two main competing theories of computational explanation: functionalism, in particular David Marr’s account, and mechanism. Although some argue for the Deductive-Nomological account in cognitive science, especially proponents of dynamicism (Walmsley 2008), the dynamical models in question are contrasted with computational ones. What is more, the relation between mechanistic and dynamical explanation is a matter of lively debate (Zednik 2011; Kaplan and Craver 2011; Kaplan and Bechtel 2011).

a. Functionalism

One of the most prominent views of functional explanation (for a general overview see Causal Theories of Functional Explanation) was developed by Robert Cummins (Cummins 1975; Cummins 1983; Cummins 2000). Cummins rejects the idea that explanation in psychology is subsumption under a law. For him, psychology and other special sciences are interested in various effects, understood as exercises of various capacities. A given capacity is to be analyzed functionally, by decomposing it into a number of less problematic capacities, or dispositions, that jointly manifest themselves as the effect in question. In cognitive science and psychology, this joint manifestation is best understood in terms of flowcharts or computer programs. Cummins claims that computational explanations are just top-down explanations of a system’s capacity.
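The flavor of such a functional analysis can be given with a deliberately homely sketch: the capacity to add two-digit numbers decomposed into simpler sub-capacities (adding single digits, carrying), whose joint exercise manifests the overall effect. The decomposition below is illustrative only, not a claim about how people actually add.

```python
# Sub-capacity: add two digits plus a carry, yielding a digit and a carry.
def add_digits(a, b, carry_in=0):
    total = a + b + carry_in
    return total % 10, total // 10

# The analyzed capacity: two-digit addition as the joint, flowchart-style
# exercise of the simpler dispositions above.
def add_two_digit(x, y):
    ones, carry = add_digits(x % 10, y % 10)
    tens, carry = add_digits(x // 10, y // 10, carry)
    return carry * 100 + tens * 10 + ones

print(add_two_digit(47, 85))  # 132
```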

A specific problem with Cummins’ account is that an explanation counts as correct if the dispositions are merely sufficient for the joint manifestation of the effect. For example, a computer program that produces the same output as a human subject, given the same input, is held to be explanatory of the subject’s performance. This seems problematic, given that computer simulations have traditionally been evaluated not only at the level of their inputs and outputs (in which case they are merely ‘weakly equivalent’ in Fodor’s terminology, see (Fodor 1968)), but also at the level of the process that transforms the input data into the output data (in which case they are ‘strongly equivalent’ and genuinely explanatory, according to Fodor). Note, for example, that an atomic bomb would have been sufficient to kill U.S. President John F. Kennedy, but this fact does not explain his actual assassination. In short, critics of functional explanation stress that it is too liberal and that it should require causal relevance as well. They argue that functional analyses devoid of causal relevance are at best incomplete, and at worst explanatorily irrelevant (Piccinini and Craver 2011).

One way to make the functional account more robust is to introduce a hierarchy of explanatory levels. In the context of cognitive science, the most influential proposal for such a hierarchy comes from David Marr (1982), who proposes a three-leveled model of explanation. This model introduces several additional constraints that have since been widely accepted in modeling practice. In particular, Marr argued that the complete explanation of a computational system should feature the following levels: (1) The computational level; (2) the level of representation and algorithm; and (3) the level of hardware implementation.

At the computational level, the modeler is supposed to ask what operations the system performs and why it performs them. Interestingly, the term Marr proposed for this level has proved confusing to some, and it is therefore often characterized in semantic terms, such as knowledge or representation—though this, too, can be misleading. At this level, the modeler assumes that a device performs a task by carrying out a series of operations; she needs to identify the task in question and justify her explanatory strategy by ensuring that her specification mirrors the performance of the machine and that the performance is appropriate in the given environment. Marrian “computation” refers to computational tasks and not to the manipulation of particular semantic representations. No wonder that other terms for this level have been put forward to prevent misunderstanding, perhaps the most apt of which is Sterelny’s (1990) “ecological level.” Sterelny makes it clear that the justification of why the task is performed includes the relevant physical conditions of the machine’s environment.

The level of representation and algorithm concerns the following questions: How can the computational task be performed? What is the representation of the input and output? And what is the algorithm for the transformation? The focus is on the formal features of the representation—those required to develop an algorithm in a programming language—rather than on whether the inputs really represent anything. The algorithm is correct when it performs the specified task, given the same input as the computational system in question. The distinction between the computational level and the level of representation and algorithm amounts to the difference between what and how (Marr 1982, 28).
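The what/how distinction can be put concretely with an assumed toy task: at the computational level a device is characterized simply as computing the sorting function; at the algorithmic level one specific procedure over one specific representation is chosen. The example is a sketch, not anything from Marr’s own modeling of vision.

```python
# Computational level: *what* is computed (the sorting function).
# Algorithmic level: *how* it is computed (insertion sort over a list).
def insertion_sort(xs):
    xs = list(xs)
    for i in range(1, len(xs)):
        j = i
        while j > 0 and xs[j - 1] > xs[j]:
            xs[j - 1], xs[j] = xs[j], xs[j - 1]
            j -= 1
    return xs

# Python's built-in sorted() computes the same computational-level function
# by a different algorithm; at Marr's top level the two are indistinguishable.
assert insertion_sort([3, 1, 2]) == sorted([3, 1, 2])
```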

The level of hardware implementation refers to the physical machinery realizing the computation; in neuroscience, of course, this will be the brain. Marr’s methodological account is based on his own modeling in computational neuroscience, but stresses the relative autonomy of the levels, which are also levels of realization. There are multiple realizations of a given task (see Mind and Multiple Realizability), so Marr endorses the classical functionalist claim of relative autonomy of levels, which is supposed to underwrite antireductionism (Fodor 1974). Most functionalists subsequently embraced Marr’s levels as well (for example, Zenon Pylyshyn (1984) and Daniel Dennett (1987)).

Although Marr introduces more constraints than Cummins, since he requires a description of three different levels of realization, his theory suffers from the same problems mentioned above: it does not require the causal relevance of the representation-and-algorithm level; sufficiency is all that is demanded. Moreover, it remains relatively unclear why exactly there are three, and not, say, five levels in a proper explanation (note that some philosophers have proposed introducing intermediary levels). For these reasons, mechanists have criticized Marr’s approach (Miłkowski 2013).

b. Mechanism

According to mechanism, to explain a phenomenon is to explain its underlying mechanism. Mechanistic explanation is a species of causal explanation, and explaining a mechanism involves the discovery of its causal structure. While mechanisms are defined variously, the core idea is that they are organized systems, comprising causally relevant component parts and operations (or activities) thereof (Bechtel 2008; Craver 2007; Glennan 2002; Machamer, Darden, and Craver 2000). Parts of the mechanism interact and their orchestrated operation contributes to the capacity of the mechanism. Mechanistic explanations abound in special sciences, and it is hoped that an adequate description of the principles implied in explanations (those that are generally accepted as sound) will also furnish researchers with normative guidance. The idea that computational explanation is best understood as mechanistic has been defended by (Piccinini 2007b; Piccinini 2008) and (Miłkowski 2013). It is closely linked to causal accounts of computational explanation, too (Chalmers 2011).

Constitutive mechanistic explanation is the dominant form of computational explanation in cognitive science. This kind of explanation includes at least three levels of mechanism: a constitutive (-1) level, which is the lowest level in the given analysis; an isolated (0) level, where the parts of the mechanism are specified, along with their interactions (activities or operations); and the contextual (+1) level, where the function of the mechanism is seen in a broader context (for example, the context for human vision includes lighting conditions). In contrast to how Marr (1982) or Dennett (1987) understand them, levels here are not just levels of abstraction; they are levels of composition. They are tightly integrated, but not entirely reducible to the lowest level.

Computational models explain how the computational capacity of a mechanism is generated by the orchestrated operation of its component parts. To say that a mechanism implements a computation is to claim that the causal organization of the mechanism is such that the input and output information streams are causally linked and that this link, along with the specific structure of information processing, is completely described. Note that the link is sometimes cyclical and can be very complex.

In some respects, the mechanistic account of computational explanation may be viewed as a causally-constrained version of functional explanation. Developments in the theory of mechanistic explanation, which is now one of the most active fields in the philosophy of science, make it, however, much more sensitive to the actual scientific practice of modelers.

3. Implementation

One of the most difficult questions for proponents of CTM is how to determine whether a given physical system is an implementation of a formal computation. Note that computer science does not offer any theory of implementation, and the intuitive view that one can decide whether a system implements a computation by finding a one-to-one correspondence between physical states and the states of a computation may lead to serious problems. In what follows, I will sketch out some objections to the objectivity of the notion of computation, formulated by John Searle and Hilary Putnam, and examine various answers to their objections.

a. Putnam and Searle against CTM

Putnam and Searle’s objection may be summarized as follows: there is nothing objective about physical computation; computation is ascribed to physical systems by human observers merely for convenience. For this reason, there are no genuine computational explanations. Needless to say, such an objection, if sound, would invalidate most research that has been done in cognitive science.

In particular, Putnam (1991, 121–125) constructed a proof that any open physical system implements any finite automaton (a model of computation with lower computational power than a Turing machine; note that the proof can easily be extended to Turing machines as well). The purpose of Putnam’s argument is to demonstrate that functionalism, were it true, would imply behaviorism: if any open system implements any automaton, the internal structure becomes completely irrelevant to deciding which function is actually realized. The idea of the proof is as follows. Any physical system has at least one state, which obtains for some time, and the duration can be measured by an external clock. By appeal to the clock, one can identify as many states as one wishes, especially if states can be constructed by set-theoretic operations (or their logical equivalent, the disjunction operator). For this reason, one can always find as many states in the physical system as the finite automaton requires (it has, after all, a finite number of states), and its evolution in time may easily be mapped onto the physical system thanks to disjunctions and the clock. If so, there is nothing explanatory about the notion of computation.
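The permissiveness at issue can be mimicked in a few lines: given any desired automaton run, one simply pairs successive clock-defined intervals of a physical system with successive automaton states. The state and interval labels below are invented; the point of the sketch is that such a “realization” is stipulated rather than discovered.

```python
# Pair arbitrary clock-defined intervals of any physical system with the
# states of whatever automaton run we please; nothing constrains the choice.
def putnam_style_mapping(automaton_run, time_intervals):
    return dict(zip(time_intervals, automaton_run))

automaton_run = ["s0", "s1", "s0", "s1"]    # any run of any finite automaton
intervals = ["12:00-12:01", "12:01-12:02",
             "12:02-12:03", "12:03-12:04"]  # any system, sliced by any clock
print(putnam_style_mapping(automaton_run, intervals))
```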

Searle’s argument is similar. He argues that being a digital computer is a matter of ascribing 0s and 1s to a physical system, and that for any program and any sufficiently complex object there is a description of the object under which it realizes the program (Searle 1992, 207–208). On this view, even an ordinary wall would be a computer. In essence, both objections make the point that, given enough freedom, one can always map physical states—whose number can be adjusted by logical means or simply by making more measurements—onto the formal system. If we talk of both systems in terms of sets, then all that matters is the cardinality of the two sets (in this respect, the arguments resemble the objection once made against Russell’s structuralism; compare (Newman 1928)). As the arguments are similar, replies to them usually address both at the same time and try to limit the admissible ways of carving up physical reality. The idea is that reality should somehow be carved at its joints, and only then made to correspond with the formal model.

b. Semantic Account

The semantic account of implementation is by far the most popular among philosophers. It simply requires that there be no computation without representation (Fodor 1975). But the semantic account seems to beg the question, given that some computational models, notably in connectionism, require no representation. Besides, other objections to CTM (in particular, the arguments based on the Chinese Room thought experiment) question the assumption that computer programs ever represent anything by themselves. For this reason, at least in this debate, one can only assume that programs represent because they are ascribed meaning by external observers. But in that case, the observer may just as easily ascribe meaning to a wall. Thus, the semantic account has no resources to deal with these objections.

I do not mean to suggest that the semantic account is completely wrong; indeed, the intuitive appeal of CTM is based on its close links with RTM. Yet the assumption that computation always represents has been repeatedly questioned (Fresco 2010; Piccinini 2006; Miłkowski 2013). For example, an ordinary logical gate (the computational entity that corresponds to a logical connective), such as an AND gate, does not seem to represent anything; at least, it does not seem to refer to anything. Yet it is a simple computational device.

c. Causal Account

The causal account requires that the physical states taken to correspond to the mathematical description of the computation be causally linked (Chalmers 2011). This means that there have to be counterfactual dependencies between the states (a requirement proposed by (Copeland 1996), though without requiring that the states be causally relevant), and that the methodological principles of causal explanation have to be followed. These include theoretical parsimony (already used by Fodor among the constraints of his semantic account of computation) and the causal Markov condition. In particular, states that are not causally related, be they in Searle’s wall or in Putnam’s logical constructs, are automatically discarded.

There are two open questions for the causal account, however. First, for any causal system there will be a corresponding computational description. This means that even if it is no longer true that all physical systems implement all possible computations, they still implement at least one computation (and if there are multiple causal models of a given system, the number of corresponding computations grows accordingly). Causal theorists usually bite the bullet, replying that this does not make computational explanation void; it merely allows a weak form of pancomputationalism (the claim that everything is computational (Müller 2009; Piccinini 2007a)). The second question is how the boundaries of causal systems are to be drawn. Should we include a computer’s distal causes (such as the operations at the production site of its electronic components) in the causal model brought into correspondence with the formal model of computation? This seems absurd, but there is no explicit reply to this problem in the causal account.

d. Mechanistic Account

The mechanistic account is a specific version of the causal account, defended by Piccinini and Miłkowski. The first move made by both is to take into account only functional mechanisms, which excludes weak pancomputationalism. (The requirement that computational systems have the function—in some robust sense—of computing has also been defended by other authors; compare (Lycan 1987; Sterelny 1990).) Another move is to argue that computational systems should be understood as multi-level systems, which fits naturally with the mechanistic account of computational explanation. Note that mechanists in the philosophy of science have already faced the difficult question of how to draw a boundary around systems, for example by including only components constitutively relevant to the capacity of the mechanism; compare (Craver 2007). For this reason, the mechanistic account promises a satisfactory approach to delineating computational mechanisms from their environment.

Another specific feature of the mechanistic account of computation is that it makes clear how the formal account of computation corresponds to the physical mechanism. Namely, the isolated level of the mechanism (level 0, see section 2.b above) is supposed to be described by a mechanistically adequate model of computation. The description of the model usually comprises two parts: (1) an abstract specification of the computation, which should include all the causally relevant variables (a formal model of the mechanism); and (2) a complete blueprint of the mechanism at this level of its organization.

Even if one remains skeptical about causation or physical mechanisms, Putnam and Searle’s objections can be rejected in the mechanistic account of implementation, to the extent that these theoretical posits are admissible in special sciences. What is clear from this discussion is that implementation is not a matter of any simple mapping but of satisfying a number of additional constraints usually required by causal modeling in science.

4. Other objections to CTM

The objection discussed in section 3 is by no means the only objection discussed in philosophy, but it is special because of its potential to trivialize CTM completely. Another very influential objection to CTM (and to the very possibility of creating genuine artificial intelligence) stems from Searle’s Chinese Room thought experiment. The debate over this thought experiment is, at best, inconclusive, so it does not show that CTM is doomed (for more discussion of the Chinese Room, see (Preston and Bishop 2002)). Similarly, all arguments that purport to show that artificial intelligence (AI) is in principle impossible seem equally unconvincing, even if at some point in time they seemed cogent with respect to particular domains of human competence (for example, for a long time it was thought that decent machine translation is impossible; it was even argued that funding research into machine speech recognition is morally wrong, compare (Weizenbaum 1976, 176)). The relationship between AI and CTM is complex: even if non-human AI is impossible, it does not follow that CTM is wrong, as it may turn out that only biologically inspired AI is possible.

One group of objections to CTM focuses on its alleged reliance on the claim that cognition should be explained merely in terms of computation. This motivates, for example, claims that CTM ignores emotional or bodily processes (see Embodied Cognition). Such claims are, however, unsubstantiated: proponents of CTM may more often than not neglect emotions (though even early computer simulations focused on motivation and emotion; compare (Tomkins and Messick 1963; Colby and Gilbert 1964; Loehlin 1968)) or embodiment, but such neglect is not at the core of their claims. Furthermore, according to the most successful theories of implementation, both causal and mechanistic, a physical computation always has properties over and above its computational features. It is these physical features that make the computation possible in the first place, and ignoring them (for example, ignoring the physical constitution of neurons) simply leaves the implementation unexplained. For this reason, it seems quite clear that CTM cannot really involve a rejection of all other explanations; the causal relevance of computation implies the causal relevance of other physical features, which means that embodied cognition is implied by CTM rather than excluded by it.

Jerry Fodor has argued that it is central cognition that cannot be explained computationally, in particular in the symbolic way (and that no other explanation is forthcoming). This claim seems to fly in the face of the success of production systems in such domains as reasoning and problem solving. Fodor justifies his claim by pointing out that central cognitive processes are cognitively penetrable: an agent’s knowledge and beliefs may influence any of his other beliefs (which also means that beliefs are strongly holistic). But even if one accepts the claim that there is a substantial (and computational) difference between cognitively penetrable and impenetrable processes, this still would not rule out a scientific account of both (Boden 1988, 172).

Arguments against the possibility of a computational account of common sense (Dreyfus 1972) also appeal to holism. Some also claim that holism leads to the frame problem in AI, though this has been debated, and the import of the frame problem for CTM remains unclear (Pylyshyn 1987; Shanahan 1997; Shanahan and Baars 2005).

A specific group of arguments against CTM is directed at the claim that cognition is digital effective computation: some propose that the mind is hypercomputational and try to prove this with reference to Gödel’s incompleteness results (Lucas 1961; Penrose 1989). These arguments are not satisfactory, because they assume without justification that human beliefs are not contradictory (Putnam 1960; Krajewski 2007). Moreover, as Krajewski has argued, the claim that the mind is not a computational mechanism cannot be proven this way in any case: the proof itself leads to a contradiction.

5. Conclusion

The Computational Theory of Mind (CTM) is the working assumption of the vast majority of modeling efforts in cognitive science, though there are important differences among the various computational accounts of mental processes. With the growing sophistication of modeling and testing techniques, computational neuroscience offers ever more refined versions of CTM, which are more complex than early attempts to model the mind as a single computational device (such as a Turing machine). What is much more plausible, at least biologically, is a complex organization of various computational mechanisms, some permanent and some ephemeral, in a structure that does not form a strict hierarchy. The general agreement in cognitive science, however, is that the generic claim that minds process information, though an empirical hypothesis that might in principle prove wrong, is highly unlikely to turn out false. Yet it is far from clear what kind of processing is involved.

6. References and Further Reading

  • Aizawa, Kenneth. 2003. The Systematicity Arguments. Boston: Kluwer Academic.
  • Anderson, John R. 1983. The Architecture of Cognition. Cambridge, Mass.: Harvard University Press.
  • Apter, Michael. 1970. The Computer Simulation of Behaviour. London: Hutchinson.
  • Arbib, Michael, Carl Lee Baker, Joan Bresnan, Roy G. D’Andrade, Ronald Kaplan, Samuel Jay Keyser, Donald A. Norman, et al. 1978. Cognitive Science, 1978.
  • Bechtel, William. 2008. Mental Mechanisms. New York: Routledge (Taylor & Francis Group).
  • Bechtel, William, and Adele Abrahamsen. 2002. Connectionism and the Mind. Blackwell.
  • Blokpoel, Mark, Johan Kwisthout, and Iris van Rooij. 2012. “When Can Predictive Brains Be Truly Bayesian?” Frontiers in Psychology 3 (November): 1–3.
  • Boden, Margaret A. 1988. Computer Models of Mind: Computational Approaches in Theoretical Psychology. Cambridge [England]; New York: Cambridge University Press.
  • Bowers, Jeffrey S. 2009. “On the Biological Plausibility of Grandmother Cells: Implications for Neural Network Theories in Psychology and Neuroscience.” Psychological Review 116 (1) (January): 220–51.
  • Chalmers, David J. 2011. “A Computational Foundation for the Study of Cognition.” Journal of Cognitive Science (12): 325–359.
  • Clark, Andy. 2013. “Whatever Next? Predictive Brains, Situated Agents, and the Future of Cognitive Science.” The Behavioral and Brain Sciences 36 (3) (June 10): 181–204.
  • Colby, Kenneth Mark, and John P Gilbert. 1964. “Programming a Computer Model of Neurosis.” Journal of Mathematical Psychology 1 (2) (July): 405–417.
  • Copeland, B. Jack. 1996. “What Is Computation?” Synthese 108 (3): 335–359.
  • Copeland, B. Jack. 2004. “Hypercomputation: Philosophical Issues.” Theoretical Computer Science 317 (1-3) (June): 251–267.
  • Craver, Carl F. 2007. Explaining the Brain. Mechanisms and the Mosaic Unity of Neuroscience. Oxford: Oxford University Press.
  • Cummins, Robert. 1975. “Functional Analysis.” The Journal of Philosophy 72 (20): 741–765.
  • Cummins, Robert. 1983. The Nature of Psychological Explanation. Cambridge, Mass.: MIT Press.
  • Cummins, Robert. 2000. “‘How Does It Work’ Versus ‘What Are the Laws?’: Two Conceptions of Psychological Explanation.” In Explanation and Cognition, ed. F Keil and Robert A Wilson, 117–145. Cambridge, Mass.: MIT Press.
  • Dennett, Daniel C. 1983. “Beyond Belief.” In Thought and Object, ed. Andrew Woodfield. Oxford University Press.
  • Dennett, Daniel C. 1987. The Intentional Stance. Cambridge, Mass.: MIT Press.
  • Dreyfus, Hubert. 1972. What Computers Can’t Do: A Critique of Artificial Reason. New York: Harper & Row, Publishers.
  • Eliasmith, Chris. 2013. How to Build the Brain: a Neural Architecture for Biological Cognition. New York: Oxford University Press.
  • Eliasmith, Chris, and Charles H. Anderson. 2003. Neural Engineering. Computation, Representation, and Dynamics in Neurobiological Systems. Cambridge, Mass.: MIT Press.
  • Eliasmith, Chris, Terrence C Stewart, Xuan Choo, Trevor Bekolay, Travis DeWolf, Yichuan Tang, Charlie Tang, and Daniel Rasmussen. 2012. “A Large-scale Model of the Functioning Brain.” Science 338 (6111) (November 30): 1202–5.
  • Fodor, Jerry A. 1968. Psychological Explanation: An Introduction to the Philosophy of Psychology. New York: Random House.
  • Fodor, Jerry A. 1974. “Special Sciences (or: The Disunity of Science as a Working Hypothesis).” Synthese 28 (2) (October): 97–115.
  • Fodor, Jerry A. 1975. The Language of Thought. 1st ed. New York: Thomas Y. Crowell Company.
  • Fodor, Jerry A. 2001. The Mind Doesn’t Work That Way. Cambridge, Mass.: MIT Press.
  • Fodor, Jerry A., and Zenon W. Pylyshyn. 1988. “Connectionism and Cognitive Architecture: a Critical Analysis.” Cognition 28 (1-2) (March): 3–71.
  • Fresco, Nir. 2010. “Explaining Computation Without Semantics: Keeping It Simple.” Minds and Machines 20 (2) (June): 165–181.
  • Friston, Karl, and Stefan Kiebel. 2011. “Predictive Coding: A Free-Energy Formulation.” In Predictions in the Brain: Using Our Past to Generate a Future, ed. Moshe Bar, 231–246. Oxford: Oxford University Press.
  • Friston, Karl, James Kilner, and Lee Harrison. 2006. “A Free Energy Principle for the Brain.” Journal of Physiology, Paris 100 (1-3): 70–87.
  • Glennan, Stuart. 2002. “Rethinking Mechanistic Explanation.” Philosophy of Science 69 (S3) (September): S342–S353.
  • Gross, Charles G. 2002. “Genealogy of the ‘Grandmother Cell’.” The Neuroscientist 8 (5) (October 1): 512–518.
  • Harnish, Robert M. 2002. Minds, Brains, Computers: An Historical Introduction to the Foundations of Cognitive Science. Malden, MA: Blackwell Publishers.
  • Haugeland, John. 1985. Artificial Intelligence: The Very Idea. Cambridge, Mass.: MIT Press.
  • Jones, Matt, and Bradley C. Love. 2011. “Bayesian Fundamentalism or Enlightenment? On the Explanatory Status and Theoretical Contributions of Bayesian Models of Cognition.” Behavioral and Brain Sciences 34 (04) (August 25): 169–188.
  • Kaplan, David Michael, and William Bechtel. 2011. “Dynamical Models: An Alternative or Complement to Mechanistic Explanations?” Topics in Cognitive Science 3 (2) (April 6): 438–444.
  • Kaplan, David Michael, and Carl F Craver. 2011. “The Explanatory Force of Dynamical and Mathematical Models in Neuroscience: A Mechanistic Perspective.” Philosophy of Science 78 (4) (October): 601–627.
  • Konorski, Jerzy. 1967. Integrative Activity of the Brain; an Interdisciplinary Approach. Chicago: University of Chicago Press.
  • Krajewski, Stanisław. 2007. “On Gödel’s Theorem and Mechanism: Inconsistency or Unsoundness Is Unavoidable in Any Attempt to ‘Out-Gödel’ the Mechanist.” Fundamenta Informaticae 81 (1) (January 1): 173–181.
  • Lewandowsky, Stephan, and Simon Farrell. 2011. Computational Modeling in Cognition: Principles and Practice. Thousand Oaks: Sage Publications.
  • Loehlin, John. 1968. Computer Models of Personality. New York: Random House.
  • Lucas, J. R. 1961. “Minds, Machines and Gödel.” Philosophy 36 (137): 112–127.
  • Lycan, William G. 1987. Consciousness. Cambridge, Mass.: MIT Press.
  • Machamer, Peter, Lindley Darden, and Carl F Craver. 2000. “Thinking About Mechanisms.” Philosophy of Science 67 (1): 1–25.
  • Machery, Edouard. 2009. Doing Without Concepts. Oxford: Oxford University Press, USA.
  • Marr, David. 1982. Vision. A Computational Investigation into the Human Representation and Processing of Visual Information. New York: W. H. Freeman and Company.
  • McCulloch, Warren S., and Walter Pitts. 1943. “A Logical Calculus of the Ideas Immanent in Nervous Activity.” Bulletin of Mathematical Biophysics 5: 115–133.
  • Miller, George A., Eugene Galanter, and Karl H. Pribram. 1967. Plans and the Structure of Behavior. New York: Holt.
  • Miłkowski, Marcin. 2013. Explaining the Computational Mind. Cambridge, Mass.: MIT Press.
  • Müller, Vincent C. 2009. “Pancomputationalism: Theory or Metaphor?” In The Relevance of Philosophy for Information Science, ed. Ruth Hagengruber. Berlin: Springer.
  • Von Neumann, John. 1958. The Computer and the Brain. New Haven: Yale University Press.
  • Newell, Allen. 1980. “Physical Symbol Systems.” Cognitive Science: A Multidisciplinary Journal 4 (2): 135–183.
  • Newell, Allen. 1990. Unified Theories of Cognition. Cambridge, Mass. and London: Harvard University Press.
  • Newell, Allen, and Herbert A Simon. 1972. Human Problem Solving. Englewood Cliffs, NJ: Prentice-Hall.
  • Newman, M H A. 1928. “Mr. Russell’s ‘Causal Theory of Perception’.” Mind 37 (146) (April 1): 137–148.
  • O’Brien, Gerard, and Jon Opie. 2006. “How Do Connectionist Networks Compute?” Cognitive Processing 7 (1) (March): 30–41.
  • O’Brien, Gerard, and Jon Opie. 2009. “The Role of Representation in Computation.” Cognitive Processing 10 (1) (February): 53–62.
  • O’Reilly, Randall C. 2006. “Biologically Based Computational Models of High-level Cognition.” Science 314 (5796) (October 6): 91–4.
  • Penrose, Roger. 1989. The Emperor’s New Mind: Concerning Computers, Minds, and the Laws of Physics. Oxford: Oxford University Press.
  • Piccinini, Gualtiero. 2006. “Computation Without Representation.” Philosophical Studies 137 (2) (September): 205–241.
  • Piccinini, Gualtiero. 2007a. “Computational Modelling Vs. Computational Explanation: Is Everything a Turing Machine, and Does It Matter to the Philosophy of Mind?” Australasian Journal of Philosophy 85 (1): 93–115.
  • Piccinini, Gualtiero. 2007b. “Computing Mechanisms.” Philosophy of Science 74 (4) (October): 501–526.
  • Piccinini, Gualtiero. 2008. “Computers.” Pacific Philosophical Quarterly 89 (1) (March): 32–73.
  • Piccinini, Gualtiero, and Sonya Bahar. 2013. “Neural Computation and the Computational Theory of Cognition.” Cognitive Science 37 (3) (April 5): 453–88.
  • Piccinini, Gualtiero, and Carl Craver. 2011. “Integrating Psychology and Neuroscience: Functional Analyses as Mechanism Sketches.” Synthese 183 (3) (March 11): 283–311.
  • Preston, John, and Mark Bishop. 2002. Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford; New York: Clarendon Press.
  • Putnam, Hilary. 1960. “Minds and Machines.” In Dimensions of Mind, ed. Sidney Hook. New York University Press.
  • Putnam, Hilary. 1991. Representation and Reality. Cambridge, Mass.: The MIT Press.
  • Pylyshyn, Zenon W. 1984. Computation and Cognition: Toward a Foundation for Cognitive Science. Cambridge, Mass.: MIT Press.
  • Pylyshyn, Zenon W. 1987. Robot’s Dilemma: The Frame Problem in Artificial Intelligence. Norwood, New Jersey: Ablex Publishing Corporation.
  • Ramsey, William M. 2007. Representation Reconsidered. Cambridge: Cambridge University Press.
  • Rasmussen, Daniel, and Chris Eliasmith. 2013. “God, the Devil, and the Details: Fleshing Out the Predictive Processing Framework.” The Behavioral and Brain Sciences 36 (3) (June 1): 223–4.
  • Rosch, Eleanor, and Carolyn B Mervis. 1975. “Family Resemblances: Studies in the Internal Structure of Categories.” Cognitive Psychology 7 (4) (October): 573–605.
  • Searle, John R. 1992. The Rediscovery of the Mind. Cambridge, Mass.: MIT Press.
  • Shanahan, Murray. 1997. Solving the Frame Problem: a Mathematical Investigation of the Common Sense Law of Inertia. Cambridge, Mass.: MIT Press.
  • Shanahan, Murray, and Bernard Baars. 2005. “Applying Global Workspace Theory to the Frame Problem.” Cognition 98 (2) (December): 157–76.
  • Sloman, A. 1996. “Beyond Turing Equivalence.” In Machines and Thought: The Legacy of Alan Turing, ed. Peter Millican, 1:179–219. New York: Oxford University Press, USA.
  • Steels, Luc. 2008. “The Symbol Grounding Problem Has Been Solved, so What’s Next?” In Symbols and Embodiment: Debates on Meaning and Cognition, ed. Manuel de Vega, Arthur M. Glenberg, and Arthur C. Graesser, 223–244. Oxford: Oxford University Press.
  • Sterelny, Kim. 1990. The Representational Theory of Mind: An Introduction. Oxford, OX, UK; Cambridge, Mass., USA: B. Blackwell.
  • Tomkins, Silvan, and Samuel Messick. 1963. Computer Simulation of Personality: Frontier of Psychological Theory. New York: Wiley.
  • Walmsley, Joel. 2008. “Explanation in Dynamical Cognitive Science.” Minds and Machines 18 (3) (July 2): 331–348.
  • Weizenbaum, Joseph. 1976. Computer Power and Human Reason: From Judgment to Calculation. San Francisco: W.H. Freeman.
  • Wittgenstein, Ludwig. 1953. Philosophical Investigations. New York: Macmillan.
  • Zednik, Carlos. 2011. “The Nature of Dynamical Explanation.” Philosophy of Science 78 (2): 238–263.

 

Author Information

Marcin Milkowski
Email: marcin.milkowski@gmail.com
Institute of Philosophy and Sociology
Polish Academy of Sciences
Poland

Philosophy of Dreaming

According to Owen Flanagan (2000), there are four major philosophical questions about dreaming:

1. How can I be sure I am not always dreaming?

2. Can I be immoral in dreams?

3. Are dreams conscious experiences that occur during sleep?

4. Does dreaming have an evolutionary function?

These interrelated questions cover philosophical domains as diverse as metaphysics, epistemology, ethics, scientific methodology, and the philosophy of biology, mind and language. This article covers the four questions and also looks at some newly emerging philosophical questions about dreams:

5. Is dreaming an ideal scientific model for consciousness research?

6. Is dreaming an instance of hallucinating or imagining?

Section 1 introduces the traditional philosophical question that Descartes asked himself, a question which has become a cornerstone of scepticism about the external world: how can I be sure I am not always dreaming, or dreaming right now? Philosophers have typically looked for features that distinguish dreams from waking life, and one key debate centres on whether it is possible to feel pain in a dream.

Section 2 surveys the ethics of dreaming. The classical view of Augustine is contrasted with more abstract ethical positions, namely, those of the Deontologist, the Consequentialist and the Virtue Ethicist. The notion of lucid dreaming is examined here in light of the question of responsibility during dreaming and how we treat other dream characters.

Section 3 covers the various positions, objections and replies to question 3: the debate about whether dreaming is, or is not, a conscious state. The challenges from Malcolm and Dennett are covered. These challenges question the authority of the common-sense view of dreaming as a consciously experienced state. Malcolm argues that the concept of dreaming is incoherent, while Dennett puts forward a theory of dreaming without appealing to consciousness.

Section 4 covers the evolutionary debate, where empirical work ultimately leaves us uncertain of the extent to which natural selection has shaped dreaming, if at all. Early approaches by Freud and Jung are reviewed, followed by those of Flanagan and Revonsuo. Though Freud, Jung and Revonsuo have argued that dreaming is functional, Flanagan represents a view shared by many neuroscientists that dreaming has no function at all.

Section 5 looks at questions 5 and 6. Question 5 concerns the cutting-edge issue of precisely how dreaming should be integrated into the research programme of consciousness studies. Should dreaming be taken as a scientific model of consciousness? Might dreaming play another role, such as serving as a contrast case for other mental states? Question 6, which concerns the exact qualitative nature of dreaming, has a longer history, though it too is receiving contemporary attention. The section outlines reasons favouring the orthodox view of psychology, that dream imagery is perceptual (hallucinatory), and reasons favouring the philosophical challenge to that orthodoxy, that dreams are ultimately imaginative in nature.

Table of Contents

  1. Dreaming in Epistemology
    1. Descartes’ Dream Argument
    2. Objections and Replies
  2. The Ethics of Dreaming
    1. Saint Augustine on the Morality of Dreaming
    2. Consequentialist vs. Deontological Positions on Dreaming
    3. Virtue Ethics of Dreaming
  3. Are Dreams Consciously Experienced?
    1. The Received View of Dreaming
    2. Malcolm’s Challenge to the Received View
      1. The Impossibility of Verifying Dream Reports
      2. The Conflicting Definitions of “Sleep” and “Dreaming”
      3. The Impossibility of Communicating or Making Judgments during Sleep
      4. Ramifications (contra Descartes)
    3. Possible Objections to Malcolm
      1. Putnam on the Conceptual Analysis of Dreaming
      2. Distinguishing “State” and “Creature” Consciousness
    4. Dennett’s Challenge to the Received View
      1. A New Model of Dreaming: Uploading Unconscious Content
      2. Accounting for New Data on Dreams: “Precognitive” Dreams
    5. Possible Objections to Dennett
      1. Lucid Dreaming
      2. Alternative Explanations for “Precognitive” Dreams
  4. The Function of Dreaming
    1. Early Approaches
      1. Freud: Psychoanalysis
      2. Jung: Analytic Psychology
    2. Contemporary Approaches
      1. Pluralism
      2. Adaptationism
  5. Dreaming in Contemporary Philosophy of Mind and Consciousness
    1. Should Dreaming Be a Scientific Model?
      1. Dreaming as a Model of Consciousness
      2. Dreaming as a Contrast Case for Waking Consciousness
    2. Is Dreaming an Instance of Images or Percepts?
      1. Dreaming as Hallucination
      2. Dreaming as Imagination
  6. References and Further Reading

1. Dreaming in Epistemology

a. Descartes’ Dream Argument

Descartes strove for certainty in the beliefs we hold. In his Meditations on First Philosophy he wanted to find out what we can believe with certainty and thereby claim as knowledge. He begins by stating that he is certain of being seated by the fire in front of him. He then dismisses the idea that this belief could be certain because he has been deceived before in dreams where he has similarly been convinced that he was seated by a fire, only to wake and discover that he was only dreaming. “How can I know that I am not now dreaming?” is the resulting famous question Descartes asked himself. Though Descartes was not the first to ask this question (see Zhuangzi’s eponymous work, Plato’s Theaetetus and Aristotle’s Metaphysics), he was the first philosopher to doggedly pursue and try to answer it. His answer was that, owing to the sensory deception of dreams, we cannot trust our senses in waking life (without invoking a benevolent God who would surely not deceive us).

The phenomenon of dreaming is used as key evidence for the sceptical hypothesis that everything we currently believe to be true could be false and generated by a dream. Descartes holds the common-sense view that dreams, which regularly occur in all people, are a sequence of experiences often similar to those we have in waking life (this has come to be labelled the “received view” of dreaming). A dream makes it feel as though the dreamer is carrying out actions in waking life, for during a dream we do not realize that it is a dream we are experiencing. Descartes claims that the experience of a dream could in principle be indistinguishable from waking life – whatever apparent subjective differences there are between waking life and dreaming, they are insufficient to give us certainty that we are not now dreaming. Descartes is left unsure that the objects in front of him are real – whether he is dreaming of their existence or whether they really are there. Dreaming was the first source motivating Descartes’ method of doubt, which came to threaten perceptual and introspective knowledge. In this method, he would use any means to subject a statement or allegedly true belief to the most critical scrutiny.

Descartes’ dream argument began with the claim that dreams and waking life can have the same content. There is, Descartes alleges, a sufficient similarity between the two experiences for dreamers to be routinely deceived into believing that they are having waking experiences while they are actually asleep and dreaming. The dream argument has similarities to his later evil demon argument. According to this later argument, I cannot be sure of anything I believe, for I may just be being deceived by a malevolent demon. Both arguments have the same structure: nothing can rule out my being duped into believing I am having experience X when I am really in state Y; hence I cannot have knowledge Z about my current state. Even if individuals happen to be right in their belief that they are not being deceived by an evil demon, and even if they really are having a waking experience, they are left unable to distinguish reality from their dream experiences in order to gain certainty in their belief that they are not now dreaming.

b. Objections and Replies

Since the Meditations on First Philosophy was published, Descartes’ argument has attracted many replies. One main target is the claim that there are no certain marks to distinguish waking consciousness from dreaming. Hobbes believed that an absence of the absurd in waking life was a key difference (Hobbes, 1651: Part 1, Chapter 2). Though sleeping individuals are too wrapped up in the absurdity of their dreams to be able to distinguish their states, an individual who is awake can tell, simply because the absurdity is no longer there during wakefulness. Locke compared real pain to dream pain. He asks Descartes to consider the difference between dreaming of being in the fire and actually being in the fire (Locke, 1690: Book 4, Chapter 2, § 2). Locke’s claim is that we cannot have physical pain in dreams as we do in waking life. His claim, if true, undermines Descartes’ premise that there are no certain marks to distinguish dreaming from waking life, and so would allow us to be sure which of the two states we are in.

Descartes thought that dreams are protean (Hill, 2004b). By “protean”, Hill means that dream experience can replicate the panoply of any possible waking life experience; to put it negatively, there is no experience in waking life that could not be realistically simulated (and thereby be phenomenally indistinguishable) in dreams. This protean claim was necessary for Descartes to mount his sceptical argument about the external world. After all, if there were even one experience during waking life which simply could not occur during dreaming, then, in that moment at least, we could be sure we are awake and in contact with the external world, rather than dreaming. Locke alleged that he had found a gap in this protean claim: we do not and cannot feel pain in dreams. The notion of pain occurring in a dream has now been put to the test in a number of scientific studies, through quantitative analysis of the content of dream diaries in the case of ordinary dreams and through reports by participating lucid dreamers. The conclusion reached independently by these various studies is that sharply localized pains can occur in dreams, though they are rare (Zadra and others, 1998; LaBerge & DeGracia, 2000). According to the empirical work, then, Locke is wrong about his claim, though he might still query whether really agonizing and ongoing pain (as in his original example of being in a fire) is possible in dreams. The empirical work supports Descartes’ conviction that dreams can recapitulate any waking state, meaning that there is no essential difference between waking and dreaming, thereby ruling out certainty that this is not now a dream.

Another common attempt to distinguish waking life from dreaming is the “principle of coherence” (Malcolm, 1959: Chapter 17). We are awake, and not asleep dreaming, if we can connect our current experiences to the overall course of our lives. Essentially, in using the principle of coherence, we exploit the fact that we can think more critically in waking life. Hobbes seems to adhere to something like the principle of coherence in his appeal to absurdity as a key feature of dreams. Though dreams do have a tendency to involve a lack of critical thinking, it still seems possible that we could wake with a dream that connects to the overall course of our lives. It is generally accepted that there is no certain way to distinguish dreaming from waking life, though the claim that this ought to undermine our knowledge in any way is controversial.

For an alternative response to Descartes’ sceptical dream argument see Sosa (2007), who says that “in dreaming we do not really believe; we only make-believe.” He argues that in dreaming we actually only ever imagine scenarios, which never involve deceptive beliefs, and so we have no reason to feel our ordinary waking life beliefs can be undermined. Descartes relied on a notion of belief that was the same in both dreaming and waking life. Of course, if I have never believed, in sleep, that I was seated by the fire when I was actually asleep in bed, then none of my dreams challenge the perceptual and introspective beliefs I have during waking life. Ichikawa (2008) agrees with Sosa that in dreams we imagine scenarios (rather than believe we are engaged in scenarios as though awake), but he argues, in contrast to Sosa, that this does not avoid scepticism about the external world. Even if dreams trade in imaginings rather than beliefs, the dreams still create experiences subjectively indistinguishable from waking experience. Due to this similarity, it would be “epistemically irresponsible” to take ourselves to be having waking experiences solely on the grounds that dream experiences are imaginings rather than beliefs: I still cannot really tell the difference between the experiences. The new worry is whether what I take to be a waking belief is really a belief, rather than a dreamt imagining, and so scepticism is not avoided, so Ichikawa claims.

2. The Ethics of Dreaming

Since the late twentieth century, discussion of moral and criminal responsibility in relation to dreaming has centred on sleepwalking, in cases where sleepwalkers have harmed others. The assessment has typically been carried out in practical, rather than theoretical, settings, for example law courts. Setting aside the notion of sleepwalking, philosophers are more concerned with the phenomenology of ordinary dreams. Does the notion of right and wrong apply to dreams themselves, as well as to actions done by sleepwalkers?

a. Saint Augustine on the Morality of Dreaming

Saint Augustine, seeking to live a morally perfect life, was worried about some of the actions he carried out in dreams. For somebody who devoted his life to celibacy, his sexual dreams of fornication worried him. In his Confessions (Book X; Chapter 30), he writes to God. He talks of his success in quelling sexual thoughts and earlier habits from his life before his religious conversion. But he declares that in dreams he seems to have little control over committing the acts that he avoids during the waking day. He rhetorically asks “am I not myself during sleep?”, believing that it really is he who is the central character of his dreams. In trying to solve the problem, Augustine appeals to the apparent experiential difference between waking and dreaming life. He draws a crucial distinction between “happenings” and “actions.” Dreams fall into the former category. Augustine was not carrying out actions but was rather undergoing an experience which happened to him without choice on his part. By effectively removing agency from dreaming, we cannot be responsible for what happens in our dreams. As a result, the notion of sin or moral responsibility cannot be applied to our dreams (Flanagan, 2000: p.18; pp.179-183). According to Augustine, only actions are morally evaluable, so he is committed to the claim that all events that occur in dreams are non-actions. This claim is brought into question by lucid dreams, which seem to involve genuine actions and decision-making processes whereby dreaming individuals can control, affect and alter the course of the dream. The success of Augustine’s argument hinges on there being no actions in dreams, and lucid dreaming is evidence against this premise. We have now seen how Augustine’s argument that moral notions never apply to dreams fails (because dreams can involve actions rather than mere happenings). In the next section we will see what the two main ethical positions might say on the issue of right and wrong in dreams.

b. Consequentialist vs. Deontological Positions on Dreaming

Dreaming is an instance of a more general concern about a subset of thoughts – fantasies – that occur, potentially without affecting behaviour. We seem to carry out actions during dreams in simulated realities involving other characters. So perhaps we ought to consider whether we are morally responsible for actions in dreams. More generally, are we morally obliged not to entertain certain thoughts, even if these thoughts do not affect our later actions and do not harm others? The same issue might be pressed with the use of violent video games, though there the link to later behaviour is more controversial. Some people enjoy playing violent video games, and the more graphic the better. Is that unethical in and of itself? Why should we excuse people’s thoughts when, if they were carried out as actual actions, they would be grossly wrong? Dreaming is perhaps a special instance because in ordinary dreams we believe we are carrying out actions in real life. What might the two main moral theories say about the issue, with the assumption in place that what we do in dreams does not affect our behaviour in waking life?

Consequentialism is a broad family of ethical doctrines which always assesses an action in terms of the consequences it has. There are two separate issues here – one ethical and one empirical. The empirical question asks whether dreams, fantasies and video games really are without behavioural consequence towards others. To be clear, the Consequentialist is not arguing that dreams do not have any consequences, only that if they really do have no consequences then they are not morally evaluable or should be deemed neutral. Consequentialist theories may well argue that, provided dreams really do not affect my behaviour later, it is not morally wrong to “harm” other dream characters, even in lucid dreaming. The more liberal Consequentialists might even see value in these instances of free thought. That is, there might be some intrinsic good in allowing such freedom of the mind, a value that stands so long as it is not outweighed by actual harm to others, so the Consequentialists might claim. If having such lucid dreams makes me nicer to people in waking life, then the Consequentialist will actually endorse such activity during sleep.

Consequentialists will maintain this verdict even though dream content bears an intentional relation to other people. Namely, dreams can often have singular content. Singular content, or singular thought, is to be contrasted with general content (the notion of singular thought is somewhat complex; readers should consult Jeshion, 2010). If I simply form a mental representation of a blond Hollywood actor, the features of the representation might be too vague to pick out any particular individual. My representation could equally be fulfilled by Brad Pitt, Steve McQueen, a fictional movie star or countless other individuals. If I deliberately think of Brad Pitt, or if the images that come to me are detailed enough, then my thought does not have general content but is about that particular individual. Dreams are not always about people with general features (though they can be), but are rather often about people the sleeping individual is actually acquainted with – particular people from that individual’s own life – family, friends, and so forth.

Deontological theories, in stark contrast to Consequentialist theories, hold that we have obligations to act and think, or not act and think, in certain ways regardless of the effects on other people. According to Deontological moral theories, I have a duty never to entertain certain thoughts, because doing so is wrong in itself. Deontological theories see individuals as more important than mere consequences of action. Individuals are “ends-in-themselves” and not means to a desirable state of affairs. Since dreams are often actually about real people, I am not treating that individual as an end if I choose to harm their “dream representative”. The basic Deontological maxim to treat someone as an end rather than as a means to my entertainment can apply to dreams.

As the debate between Deontologists and Consequentialists plays out, nuanced positions will reveal themselves. Perhaps there is room for agreement between the Consequentialist and the Deontologist. Maybe I can carry out otherwise immoral acts on dream characters with general features, where these characters do not represent any particular individuals of the waking world. Some Deontologists might still be unhappy with the notion that in dreams one crucial element of singular content remains – we represent ourselves in dreams. The arch-Deontologist Kant would argue that, in carrying out the acts, one is not treating oneself as an end-in-itself but as a means to other ends; namely, there is something inherently wrong about even pretending to carry out an immoral action because in doing so we depersonalize ourselves. Other Deontologists might want to treat fantasies as different from dreams. Fantasies are actions, where I sit down and decide to indulge my daydreams, whereas dreams might be more passive and therefore might respect the Augustinian distinction between actions and happenings. On this view, I am not using someone as a means to an end if I am just passively dreaming, whereas I am if I start actively thinking about that individual. So maybe the Deontological case only applies to lucid dreaming, where Augustine’s distinction would still be at work. This might exempt a large number of dreams from being wicked, but not all of them.

c. Virtue Ethics of Dreaming

Deontology and Consequentialism are the two main moral positions; a third is Virtue Ethics, which emphasizes the role of character. This moral approach goes beyond actions of right and wrong, avoiding harm and maximizing pleasure, and instead considers an individual’s overall life, how to make it a good one, and how to develop that individual’s character. Where might dreaming fit in with this third moral position – that of the Virtue Ethicist? Virtue Ethics takes the question “what is the right action?” and turns it into the broader question “how should I live?” The question “can we have immoral dreams?” accordingly needs to be opened up to “what can I get out of dreaming to help me acquire virtuousness?”

The Virtue Ethics of dreaming might be pursued in a Freudian or Jungian vein. Dreams arguably put us in touch with our unconscious and indirectly tell us about our motives and habits in life:

“[I]t is in the world of dreaming that the unconscious is working out its powerful dynamics. It is there that the great forces do battle or combine to produce the attitudes, ideals, beliefs, and compulsions that motivate most of our behavior. Once we become sensitive to dreams, we discover that every dynamic in a dream is manifesting itself in some way in our practical lives—in our actions, relationships, decisions, automatic routines, urges, and feelings.” (Johnson, 2009: p.19)

Similarly:

“Studying our own dreams can be valuable in all sorts of ways. They can reveal our inner motivations and hopes, help us face our fears, encourage growing awareness and even be a source of creativity and insight.” (Blackmore, 2004: p.338)

In order to achieve happiness and fulfilment and to develop virtuousness, we owe it to ourselves to recall and pay attention to our dreams. However, this line of argument relies on the claim that dreams really do function in the way that Freud or Jung thought they do, which is controversial: dream analysis of any kind lacks scientific status and is more of an art. But then social dynamics and the development of character are more of an art than a science. Virtue Ethics is perhaps the opposite side of the coin of psychotherapy. The former focuses on positive improvement of character, whereas the latter focuses on avoiding negative setbacks in mind and behaviour. Whether psychotherapy should be used more for positive improvement of character is a question approached in the philosophy of medicine. These considerations touch on a further question of whether dreams should be used in therapy.

Certain changes people make in waking life do eventually “show up” in dreams. Dreams, as unconsciously instantiated, capture patterns of thought from waking life. New modes of thinking can be introduced, and this is the process by which people learn to lucid dream. By periodically asking oneself whether one is awake during the day, every day for some period of time, this pattern of thinking eventually occurs in dreams. By constantly asking “am I awake?” in the day, it becomes more likely that one will ask the question in a dream, realize that one is not awake, and answer in the negative (Blackmore, 1991). With the possibility that dreams capture waking life thinking, and the notion that one can learn to lucid dream, one may ask whether Augustine tried his hardest at stopping the dreams that troubled him, and whether he was really as successful at quelling sexual urges in waking life as he thought he was.

Ordinary dreams are commonly thought not to actually involve choices and corresponding agency. Lucid dreaming invokes our ability to make choices, often to the same extent as in waking life. Lucid dreaming represents an example of being able to live and act in a virtual reality and is especially timely due to the rise in the number of lucid dreamers (popular manuals on how to lucid dream are sold and actively endorsed by some leading psychologists; see LaBerge & Rheingold, 1990; Love, 2013) and the increase of virtual realities on computers. Whereas Deontologists and Consequentialists are likely to be more interested in the content of dreams, the Virtue Ethicist will likely be more interested in dreaming as an overall activity and how it fits in with one’s life. Stephen LaBerge is perhaps an implicit Virtue Ethicist of dreaming. Though humans are thought to be moral agents, we spend a third of our lives asleep, and 11% of our mental experiences are dreams (Love, 2013: p.2). The dreams we experience during sleep are mostly non-agentic, and this amounts to a significant unfulfilled portion of our lives. LaBerge argues that by not cultivating lucid dreams, we miss out on opportunities to explore our own minds and ultimately enrich our waking lives (LaBerge & Rheingold, 1990: p.9). Arguably then, the fulfilled virtuous person will try to develop the skill of lucid dreaming. One could object that the dreamer should just get on with life in the real world. After all, learning to lucid dream takes most people time and practice, requiring the individual to think about their dreams for periods of their waking life. They could instead be spending that time doing voluntary work for charity in real life. In reply, the Virtue Ethicist can show how parallel arguments can be made for meditation: meditating individuals are calmer in situations that threaten their morality and are working on longer-term habits. Similarly, the lucid dreamer is achieving fulfilment and nurturing important long-term traits and habits. By gaining control of dreams, there is the opportunity to examine relationships with people by representing them in dreams. Lucid dreams might aid in getting an individual to carry out a difficult task in real life by allowing them to practice it in life-like settings (settings that go beyond merely imagining the scenario in waking life). Lucid dreams may then help to develop traits that people otherwise would not develop, and act as an outlet for encouraging the “thick moral concepts” of oneself: courage, bravery, wisdom, and so forth. Lucid dreaming helps in developing such traits and so can be seen as a means to the end of virtuousness, or as a supplementary virtue. If human experience is taken to cover any area in which a choice is required, then lucid dreaming at the very least signifies an expansion of agency.

3. Are Dreams Consciously Experienced?

a. The Received View of Dreaming

There is an implicit, unquestioned commitment in both Descartes’ dream argument and Augustine’s argument on the morality of dreaming. This is the received view: the platitudinous claim that a dream is a sequence of experiences that occur during sleep. The received view typically adheres to a number of further claims: that dreams play out approximately in real time and do not happen in a flash; that when a dream is successfully remembered, the content of the dream is to a large extent what is remembered after waking (see Fig. 1 below); and that an individual’s dream report is taken as excellent evidence for that dream having taken place.

[Fig. 1: timeline of a dream on the received view, with contents A – G experienced during sleep and recalled as A* – G* after waking.]

The received view is committed to the claim that we do not wake up with misleading memories. Any failure of memory is one of omission rather than error. I might wake unable to recall the exact details of certain parts of a dream, but I will not wake up and believe I had a dream involving contents which did not occur (I might recall details A – G with D missing, but I will not wake and recall contents X, Y, Z). The received view is not committed to a claim about exactly how long a dream takes to experience relative to how long it takes to remember, but it is committed to the claim that dreams are extended in time during sleep and do not occur instantaneously. The content does not necessarily have to occur just before waking; another graph detailing possible experience and later recollection of dream content on the received view might show A – G occurring much earlier in sleep (with the recalled A* – G* in the same place). A – G can represent any dream that people ordinarily recall.

We can appear to carry out a range of actions in our dreams quite similar to those of waking life. Everything that we can do in waking life, we can also do in dreams. The exact same mental states can occur in dreams just as they do in waking life. We can believe, judge, reason and converse with what we take to be other individuals in our dreams. Since we can be frightened in a dream, we can be frightened during sleep.

The received view is attested by reports of dreams from ordinary people in laboratory and everyday settings. Every dreamer portrays the dream as a mental experience that occurred during sleep. The received view is hence a part of folk psychology, the term given to the beliefs that ordinary people hold on matters concerning psychology, such as the nature of mental states. When it comes to dreaming, the consensus (folk psychology, scientific psychology and philosophy) agrees that dreams are experiences that occur during sleep.

b. Malcolm’s Challenge to the Received View

Malcolm stands in opposition to the received view – the implicit set of claims about dreams that Descartes, Augustine and the majority of philosophers, psychologists and ordinary people are committed to. It is worth separating Malcolm’s challenge to the received view into three arguments: #1, dream reports are unverifiable; #2, sleep and dreaming have conflicting definitions; #3, communication and judgements cannot occur during sleep.

i. The Impossibility of Verifying Dream Reports

According to Malcolm’s first argument, we should not simply take dream reports at face value, as the received view has done; dream reports are insufficient grounds for the metaphysical claim that dreaming consciously takes place during sleep. When we use introspection after sleeping to examine our episodic memories of dreams and put our dream report into words, these are not the dream experiences themselves. Malcolm adds that there is no other way to check the received view’s primary claim that dreams are consciously experienced during sleep. Importantly, Malcolm states that the sole criterion we have for establishing that one has had a dream is that one awakes with the impression of having dreamt (that is, an apparent memory) and that one then goes on to report the dream. Waking with the impression does not entail that there was a conscious experience during sleep that actually corresponds to the report. Malcolm views dream reports as inherently first personal and repeatedly claims that the verbal report of a dream is the only criterion for believing that a dream took place. He adds that dreams cannot be checked in any other way without either showing that the individual is not fully asleep or invoking a new conception of dreaming that relies on behavioural criteria such as patterns of physiology or movement during sleep. Behavioural criteria, too, are insufficient to confirm that an individual is consciously experiencing their dreams, according to Malcolm; the best we can get from them are probabilistic indications of consciousness that will never be decisive. If scientists try to show that one is dreaming during sleep then those scientists have invoked a new conception of dreaming that does not resemble the old one, Malcolm alleges. He believes that where scientists appeal to behavioural criteria they are no longer inquiring into dreaming, because the real conception of dreaming has only ever relied on dream reports. A dream is logically inseparable from the dream report, and it cannot be assumed that the report refers to an experience during sleep. Malcolm thereby undermines the received view’s claim that “I dreamed that I was flying” entails that I had an experience during sleep in which I believed I was flying. Hence there is no way of conclusively confirming the idea that dreaming occurs during sleep at all.

Malcolm’s claim that the received view is unverifiable is inspired by a statement made by Wittgenstein who alludes to the possibility that there is no way of finding out if the memory of a dream corresponds to the dream as it actually occurred (Wittgenstein, 1953: part 2, § vii; p.415). Wittgenstein asks us what we should do about a man who has an especially bad memory. How can we trust his reports of dreams? The received view is committed to a crucial premise that when we recall dreams we recall the same content of the earlier experience. But Wittgenstein’s scenario establishes the possibility that an individual could recall content that did not occur. The question then arises as to why we should believe that somebody with even a good day-to-day memory is in any better position to remember earlier conscious experiences during sleep after waking.

In drawing attention to empirical work on dreams, Malcolm says that psychologists have come to be uncertain whether dreams occur during sleep or during the moment of waking up. The point for Malcolm is that it is “impossible to decide” between the two (Malcolm, 1956: p.29). Hence the question “when, in his sleep, did he dream?” is senseless. There is also a lack of criteria for the duration of dreams, that is, for how long they last in real time. Malcolm states that the concept of the time of occurrence of a dream, and of how long a dream might last, has no application in ordinary conversation about dreams. “In this sense, a dream is not an ‘occurrence’ and, therefore, not an occurrence during sleep” (Malcolm, 1956: p.30). Malcolm’s epistemic claim has a metaphysical result, namely, that dreaming does not take place in time or space. The combination of the waking impression and the use of language has misled us into believing that dreams occur during sleep; “dreams” do not refer to anything over and above the waking report, according to Malcolm. This is why Malcolm thinks that the notion of “dreaming” is an exemplar of Wittgenstein’s idea of prejudices “produced by ‘grammatical illusions’” (Malcolm, 1959: p.75).

ii. The Conflicting Definitions of “Sleep” and “Dreaming”

Malcolm’s second argument accuses the received view of contradicting itself, so that the claim that dreams consciously occur during sleep is incoherent. Sleep is supposed to entail a lack of experiential content, or at least an absence of intended behaviour, whereas dreaming is said to involve conscious experience. Experience implies consciousness; sleep implies a lack of consciousness; therefore the claim that dreams occur during sleep implies both consciousness and a lack of consciousness. So the received view results in a contradiction. This alleged contradiction supports Malcolm’s first argument that dreams are unverifiable, because any attempt to verify the dream report will just show that the individual was not asleep, and so there is no way to verify that dreams could possibly occur during sleep. One might object to Malcolm that the content of a dream report could coincide well with a publicly verifiable event, such as the occurrence of thunder while the individual slept and thunder in the reported dream later. Malcolm claims that in this instance the individual could not be sound asleep if they are aware of their environment in any way. He alleges that instances such as nightmares and sleepwalking also invoke new conceptions of sleep and dreaming. By “sleep” Malcolm thinks that people have meant sound sleep as the paradigmatic example, namely, sleeping whilst showing no awareness of the outside environment and no behaviour.

iii. The Impossibility of Communicating or Making Judgments during Sleep

Malcolm takes communication as a crucial way of verifying that a mental state could be experienced. His third argument rules out the possibility of individuals communicating or making judgements during sleep, essentially closing off dreams as things we can know anything about. This third argument supports the first argument that dreams are unverifiable and anticipates a counter-claim that individuals might be able to report a dream as it occurs, thereby verifying it as a conscious experience. Malcolm claims that a person cannot say and be aware of saying the statement “I am asleep” without it being false. It is true that somebody could talk in his sleep and incidentally say “I am asleep” but he could not assert that he is asleep. If he is actually asleep then he is not aware of saying the statement (and so it is not an assertion), whilst if he is aware of saying the statement then he is not asleep. Since a sleeping individual cannot meaningfully assert that he is asleep, Malcolm concludes that communication between a sleeping individual and individuals who are awake is logically impossible.

As inherently first-personal and retrospective, or so Malcolm alleges, dream reports fail the Wittgensteinian criterion of being potentially verifiable as experiences. Malcolm alleges that there could be no intelligible mental state that occurs during sleep; any talk about mental states that could occur during sleep is meaningless. Malcolm assumes the Wittgensteinian point that talk about experiences gains meaning in virtue of its communicability. Communicability is necessary for the meaningfulness of folk psychological terms. Malcolm appeals to the “no private language argument” to rebut the idea that there could be a mental state which only one individual could privately experience and understand (for more on the private language argument, see Candlish & Wrisley, 2012).

The claim that there is a lack of possible communicability in sleep is key for Malcolm to cash out the further claim that one cannot make judgements during sleep. He does not believe that one could judge what one cannot communicate. According to Malcolm, since people cannot communicate during sleep, they cannot make judgements during sleep. He further adds that being unable to judge that one is asleep underlies the impossibility of having any mental experience during sleep. For we could never observe an individual judge that he was asleep. This point relies on Malcolm’s second argument that the definitions of sleep and dreaming are in contradiction. There is nothing an individual could do to demonstrate he was making a judgement that did not also simultaneously show that he was awake. Of course, it seems possible that we could have an inner experience that we did not communicate to others. Malcolm points out that individuals in everyday waking instances could have communicated their experiences, at least modally. There is no possible world, though, in which a sleeping individual could communicate his experience to us; so one cannot judge that one is asleep and dreaming. If Malcolm’s argument about the impossibility of making judgements in sleep works, then his attack is detrimental to the received view’s premise that in sleep we can judge, reason, and so forth.

iv. Ramifications (contra Descartes)

Malcolm thinks that his challenge to the received view, if successful, undercuts Cartesian scepticism. Descartes’ scepticism got off the ground when he raised the following issue: due to the similarity of dreams and waking experiences, my apparent waking experience might be a dream now, and much of what I took to be knowledge is potentially untrue. A key premise for Descartes is that a dream is a sequence of experiences, the very same kind we can have whilst awake. This premise is undermined if dreams are not experiences at all. If the received view is unintelligible then Descartes cannot coherently compare waking life experiences to dreams: “if one cannot have thoughts while sound asleep, one cannot be deceived while sound asleep” (Malcolm, 1956: p.22). Descartes, championing the received view, failed to notice the incoherence in the notion that we can be asleep and aware of anything. Whenever we are aware of anything, whether it be the fire in front of us or otherwise, this is firm evidence that we are awake and that the world presented to us is as it really is.

c. Possible Objections to Malcolm

i. Putnam on the Conceptual Analysis of Dreaming

Part of Malcolm’s challenge to empirical work was his claim that the researchers have invoked new conceptions of sleep and dreaming (without realizing it) because of the new method of attempted verification. According to Malcolm’s charge, researchers are not really looking into dreaming as the received view understands the concept. This was crucial for his attempt to undermine all empirical work on dreaming. Instead of relying on an individual’s waking report, scientists may now try to infer from rapid eye movements or other physiological criteria that the individual is asleep and dreaming. For Malcolm, these scientists are working from a new conception of “sleep” and “dreaming” which only resembles the old one. Putnam objects to Malcolm’s claim, stating that science updates our concepts rather than replacing them: the received view seeks confirmation in empirical work. In general, concepts are always being updated by new empirical knowledge. Putnam cites the example of Multiple Sclerosis (MS), a disease which is very difficult to diagnose because its symptoms resemble those of other neurological diseases and not all of the symptoms are usually present. Furthermore, some neurologists have come to believe that MS is caused by a certain virus. Suppose a patient has a paradigmatic case of MS. Saying that the virus is the cause of the disease changes the concept, because it involves new knowledge, yet we are still talking about the same disease. On Malcolm’s general account, it would be a new understanding with a new concept, and so the scientists would not be talking about MS at all (Putnam, 1962: p.219). Putnam believes that we should reject Malcolm’s view that future scientists are talking about a different disease. Analogously, we are still talking about the same thing when we talk about new ways of verifying the existence of dreams. If Putnam’s attack is successful then the work that scientists are doing on dreaming is about dreaming as the received view understands the concept, namely, conscious experiences that occur during sleep. If Putnam is right that scientists are not invoking a new conception of sleep and dreaming, then we can find other ways to verify our understanding of dreaming, and the received view is continuous with empirical work.

ii. Distinguishing “State” and “Creature” Consciousness

David Rosenthal develops some conceptual vocabulary (Rosenthal, 2002: p.406) which arguably exposes a flaw in Malcolm’s reasoning. “Creature consciousness” is what any individual or animal displays when awake and responsive to external stimuli. “Creature unconsciousness” is what the individual or animal displays when unresponsive to external stimuli. “State consciousness,” on the other hand, refers to the mental state that occurs when one has an experience, which may be either internally or externally driven; I may have a perception of my environment or an imaginative idea without perceptual input. Malcolm evidently thinks that any form of state consciousness requires some degree of creature consciousness. But such a belief begs the question, so a Rosenthalian opponent of Malcolm might argue. It does not seem to be conceptually confused to believe that one can be responsive to internal stimuli (hence state conscious) without being responsive to external stimuli (hence creature unconscious). If by “sleep” all we have meant is creature unconsciousness, then there is no reason to believe that an individual cannot have state consciousness at the same time. An individual can be creature unconscious whilst having state consciousness, that is to say, an individual can be asleep and dreaming.

There are various reasons to believe that creature consciousness and state consciousness can come apart (and that state consciousness can plausibly occur without creature consciousness): the mental experience of dreaming can be gripping and the individual’s critical reasoning poor enough for him to be deceived into believing his dream is reality; most movement in sleep is not a response to outside stimuli at all but rather a response to internal phenomenology; and the sleeping individual is never directly aware of his own body during sleep. Recall that Malcolm thought sleep scientists could not correlate movement in sleep with a later dream report because such movement detracted from the individual’s being fully asleep. Since creature consciousness and state consciousness can coexist, Malcolm is arguably wrong to think that an individual moving in sleep detracts from their being fully asleep. This may block Malcolm’s appeal to sound sleep as the paradigmatic example of sleep. With the Rosenthalian distinction, we have reason to believe that even if an individual moves around in sleep, they are just as asleep as a sleeping individual lying completely still. The distinction may also count against Malcolm’s third argument, against the possibility of communication in sleep. Of course, if creature consciousness is a necessary condition for communication, then this distinction is not enough to undermine Malcolm’s third argument that communication cannot occur during sleep. A view on which state consciousness alone suffices for communication, on the other hand, will survive Malcolm’s third argument.

The apparent contradiction between sleep and dreaming that Malcolm claims existed will be avoided if the kind of consciousness implied by sleep is different from the kind Malcolm thinks is implied. The distinction might allow us to conclude that corroboration between a waking report and a publicly verifiable sound, for example, can demonstrate that an individual is dreaming and yet asleep. Some dream content, as reported afterwards, seems to incorporate external stimuli that occurred at the same time as the dream. Malcolm calls this faint perception (Malcolm, 1956: p.22) of the environment and says that it detracts from an individual’s being fully asleep. Perhaps an objector to Malcolm can make a further, albeit controversial, claim within the Rosenthalian framework to account for such dreams. For example, if there is thunder outside and an individual is asleep, he might dream of being struck by Thor’s hammer. His experience of the thunder is not the same sort of experience he would have had if he were awake during the thunder; the possible qualia are different. So Malcolm may be wrong in alleging that an individual is faintly aware of the outside environment whenever corroboration is attempted between a report and a verifiable sound. Malcolm argued that such dreams are examples of individuals who are not fully asleep. But we can now see, within the Rosenthalian framework, how an individual could be creature unconscious (or simply “asleep” on the received view) and be taking in external stimuli unconsciously whilst having state consciousness that is not directly responsive to the external environment, because he is not even faintly conscious of the external world.

See Nagel (1959), Yost (1959), Ayer (1960; 1961), Pears (1961), Kramer (1962) and Chappell (1963), for other replies to Malcolm.

d. Dennett’s Challenge to the Received View

i. A New Model of Dreaming: Uploading Unconscious Content

Dennett begins his attack on the received view of dreaming (the set of claims about dreams being consciously experienced during sleep) by questioning its authority. He does this by proposing a new model of dreaming. He is flexible in his approach and considers variations of his model. The crucial difference between his theory and the received view is that consciousness is not present during sleep on what we might call Dennett’s uploading of unconscious content model of dreaming. Dennett does not say much about how this processing of unconscious material works, only that different memories are uploaded and woven together to create new content that will be recalled upon waking as though it had been experienced during sleep, although it never was. Dennett is not repeating Malcolm’s first argument that dreaming is unverifiable. On the contrary, he believes that the issue will be settled empirically, though he claims that there is nothing to favour the received view’s own claim that dreams involve conscious experiences.

On the received view, the memory of an earlier dream is caused by the earlier dream experience and is the second time the content is experienced. On Dennett’s model, dream recall is the first time the content is experienced. Why believe that dreaming involves a lack of consciousness during sleep? One might cite evidence that the directions of rapid eye movements during sleep have been well correlated with the reports of dream content. An individual with predominantly horizontal eye movements might wake up and report that they were watching a tennis match in their dream. Dennett accommodates these (at the time) unconfirmed findings by arguing that even if eye movement matches perfectly with the reported content, it may be that the unconscious is uploading memories and readying the content that will be experienced in the form of a false memory. The memory loading process is not conscious at the time of occurrence. Such findings would almost return us to the received view – that the content of the dream does occur during sleep. It may be that the unconscious content is uploaded sequentially in the same order as the received view believes it is experienced. Still, we have no proof that the individual is aware of the content of the dream during sleep. That is to say, the individual may not be having a conscious experience, even though the brain process involves the scenario which will be consciously experienced later, as though it had been consciously experienced during sleep. Movement and apparent emotion in sleep can be accounted for too: a person twitches in their sleep as a memory with content involving a frightening scenario is uploaded and interwoven into a nightmarish narrative. It does not follow that the individual is conscious of this content being uploaded. This account is even plausible on an evolutionary view of sleep. The mind needs time to be unconscious, and the brain and body need to recalibrate. During sleep, then, the body is like a puppet whose strings are pulled by the memory loading process – although individuals show outward signs of emotion and bodily movement, there is nothing going on inside. Sometimes, though, what is remembered is the content being prepared, much as the received view says, only the individual is not aware of the content during sleep – this is why there can be matches between the dream content reported and the direction of eye movement. Both sides of the debate agree that when dream content is being prepared, some parts of the body move about as though the content were being consciously experienced; Dennett denies that consciousness is present at the time, while the received view maintains that it is.

Dennett also considers possibilities where the content of dream recall does not match the content that is uploaded. The content uploaded during sleep might involve, say, window shopping in a local mall, yet upon waking the individual might recall flying over Paris. Having outlined the two theories – the received view and his own unconscious alternative – Dennett is merely making the sceptical point that the data of dream reports alone will not decide between them. What is there to choose between them? Dennett believes that there is further evidence, of a specific type of dream report, that might decide the issue in favour of his own model.

ii. Accounting for New Data on Dreams: “Precognitive” Dreams

Any scientific theory must be able to account for all of the data. Dennett believes that there exist certain dream reports which the received view has failed to acknowledge and cannot account for. There is anecdotal evidence that seems to suggest that dreams are concocted at the moment of waking, rather than experienced during sleep, and this is therefore a direct challenge to the received view. The most well-known anecdotal example was noted by the French scholar Alfred Maury, who dreamt for some time of taking part in the French Revolution, before being forcibly taken to the guillotine. As his head was about to be cut off in the dream, he woke up with the headboard falling on his neck (Freud, 1900: Chapter 1; Blackmore, 2005: p.103). This type of dream is well documented in films: one dreams of taking a long romantic vacation with a significant other, about to kiss them, only to wake up with the dog licking one’s face. In Dennett’s own anecdotal example, he recalls dreaming for some time of looking for his neighbour’s goat. Eventually, the goat started bleating at the same time as his alarm clock went off, which then woke him up (Dennett, 1976: p.157). The received view is committed to the claim that dreams, that is, conscious experiences, occur whilst an individual is asleep; the individual then awakes with the preserved memory of content from the dream. But the anecdotes pose a potentially fatal problem for the received view because the entire content of the dream seems to be caused by the stimulus that woke the individual up. The anecdotes make dreams look more like spontaneous imaginings on waking than the real-time conscious experiences of the received view. Dennett argues that precognition is the only defence the received view can take against this implication, and given its paranormal connotations, this defence is a non-starter (Dennett, 1976: p.158). Dennett provides a number of other anecdotal examples which imply that the narrative of a dream is triggered retrospectively, after waking. The content of the dream thematically and logically leads up to the end point, which is too similar to the waking stimulus to be a coincidence. The difficulty for the received view is to explain how the content could be working towards simultaneously ending with the sound, or equivalent experience, of something in the outside environment. Figures 2 and 3 depict the attempts to explain the new data on the received view and on Dennett’s model.


[Fig. 2: the received view’s attempt to explain the new data, with the dream experienced during sleep and ending at the waking stimulus.]

[Fig. 3: Dennett’s model’s explanation of the new data, with content selected and threaded together at the moment of waking.]

If Dennett is right that the received view can only explain the anecdotes by appeal to precognition, then we would do well to adopt a more plausible account of dreaming. Any attempt to suggest that individuals have premonitions about the near future from dreams would have little credibility. Dennett’s alternative unconscious uploading account might also allow for the retro-selection of appropriate content at the moment of waking. This theory allows for two ways of dreaming to regularly occur, both without conscious experience during sleep. The first way: dreams play out much as on the received view, only individuals lack consciousness of the content during sleep. Specifically, Dennett argues that during sleep different memories are uploaded by the unconscious and woven together to create the dream content that will eventually be experienced when the individual wakes. During sleep the content of the dream is gathered together by the brain, without conscious awareness, like a programme recorded at night and consciously played for the first time in waking moments. The second way: perhaps when one is woken up dramatically, the brain selects material (relevant to the nature of the waking stimulus) at the moment of waking, which is threaded together as a new story, causing the individual to have a “hallucination of recollection” (Dennett, 1976: p.169). It might be that the unconscious was preparing entirely different content during sleep, which was set to be recalled, but that content is overwritten due to the dramatic interruption.

On Dennett’s unconscious uploading/retro-selection theory, “it is not like anything to dream, although it is like something to have dreamed” (Dennett, 1976: p.161). Consciousness is invoked on waking as one apparently remembers an event which had never consciously occurred before. On the above proposal, Dennett was not experiencing A – G, nor was the content for that dream even being prepared (though it might have been material prepared at an earlier date and selected now). Dennett also alludes to a library in the brain of undreamed dreams with various endings that are selected in the moments of waking to appropriately fit the narrative connotations of the stimuli that wakes the individual (Dennett, 1976: p. 158). His account is open to include variations of his model: perhaps various different endings might be selected – instead of a goat, other content may have competed for uploading through association – it may have been a dream involving going to the barbers and getting a haircut, before the buzzing of the clippers coincided with the alarm clock, or boarding a spaceship before it took off, had the sound of the alarm clock been more readily associated with these themes. Alternatively, the same ending might get fixed with alternative story-lines leading up to that ending. Dennett could have had a dream about going to his local job centre and being employed as a farmer, rather than searching for his neighbour’s goat, though the same ending of finding a bleating goat will stay in place.

As Dennett notes, if any of these possibilities turns out to be true then the received view is false, and so they are serious rivals indeed. For all the evidence available in 1976, Dennett believes his unconscious uploading model is better placed than the received view to explain the data, because the anecdotes suggest that at least sometimes the conscious experience only occurs after sleep – an idea alien to the received view. Moreover, the received view should have no immediate advantage over the other models. Dennett separates the memory from the experience: the memory of a dream is first experienced consciously when it is recalled. The result is the same as Malcolm's – the received view is epistemologically and metaphysically flawed. If there is nothing it is like to have a dream during sleep, then the recall of dreams does not refer to any experience that occurred during sleep.

e. Possible Objections to Dennett

i. Lucid Dreaming

It might seem that lucid dreaming is an immediate objection to Malcolm and Dennett's arguments against the received view. Lucid dreaming occurs when an individual is aware, during a dream, that it is a dream. Lucid dreaming would therefore be an example of experiencing a dream whilst one is asleep, in which case dreams must be experiences that occur during sleep. In reply to this objection, Dennett argues that lucid dreaming does not really occur: the waking impression might merely contain "the literary conceit of a dream within a dream" (Dennett, 1976: p. 161). That is, the recalled dream might just have content in which it seems as though the individual is aware they are having a dream, and even this content might have been uploaded unconsciously during sleep. It might seem that we have no obvious way of testing whether an individual who reports having had a lucid dream really was aware of the dream at the time. Dennett's reply is, however, undermined by a series of experiments carried out by Stephen LaBerge. If lucid dreaming really occurs, and the direction of rapid eye movement (REM) is linked to the content of the dream as it occurs, then one could pre-arrange some way of looking around in a lucid dream (one that stands out from random eye movement) in order to test both claims. LaBerge carried out exactly this experiment, with positive results (LaBerge, 1990): he asked individuals who could lucid dream to communicate with him voluntarily from within their lucid dreams by making pre-arranged and agreed eye movements, exploiting the match between dream content and the direction of eye movement, thereby challenging Dennett's claim that individuals are not conscious during sleep. A pattern of intended, previously arranged eye movements, for example left-right-left-right, can be a sleeping individual's way of expressing that they are having a dream and are aware that they are having a dream (see figure below).

Fig 4

In figure 4 above, the graph of eye movements correlates with the reported dream content: the participant wakes and gives a dream report which matches the eye movements. The participant claims to have made five eye signals as agreed. The written report confirms the eye movements, and the participant was not shown any of the physiological data until his report was given. He reports having gained lucidity at the first eye movement (1, LRLR). He then flew about in his dream until 2, where he signalled that he thought he had awakened (2, LRx4). 2 was a false awakening, which he then realized, going on to signal lucidity once again at 3. Here he made too many eye signals and went on to correct this, which corroborates the participant's report. In the fifth set of eye movements the participant accurately signals that he is awake, which coincides with the other physiological indications that he is awake (participants were instructed to keep their eyes closed when they thought they had awoken).
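The logic of the signalling protocol can be made vivid with a minimal sketch (hypothetical code for illustration only: the event coding, the data and the function name are invented here, and LaBerge's actual records were polygraph traces scored by human judges):

    # Hypothetical sketch: scan a sequence of scored horizontal eye-movement
    # events for the pre-arranged LRLR signature. The coding scheme and the
    # data below are invented for illustration.

    def find_signals(events, signature=("L", "R", "L", "R")):
        """Return the start indices at which the agreed signature occurs."""
        k = len(signature)
        return [i for i in range(len(events) - k + 1)
                if tuple(events[i:i + k]) == signature]

    # Example: eye-movement events scored from a made-up night's record.
    events = ["R", "L", "R", "L", "R", "L", "L", "R", "L", "R"]
    print(find_signals(events))  # [1, 6] -> candidate lucidity signals

The sketch makes concrete the point driving the experiment: a fixed, pre-arranged pattern such as LRLR is extremely unlikely to be produced by random eye movement at just the agreed moments, so its presence in the record is evidence of voluntary action during sleep.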

The important implication of LaBerge's experiment for Malcolm's and Dennett's arguments is that they can no longer dismiss the correlation between dream content and eye movement as merely "apparent" or "occasional" (as they characterized the content-relative-to-eye-movement claim at the time they wrote). Predictability is a hallmark of good science, and these findings indicate that sleep science can achieve such status. The content-relativity thesis is confirmed by the findings because LaBerge's result is exactly what that thesis predicts. The opposing thesis – that eye movement is not relative to content – cannot explain why the predicted result occurred. If eye movement can match the content of the dream (and lucid dreaming really can occur, contra Dennett), then when sleep scientists ask participants to do something in their dream, upon the onset of lucidity, that would show they are aware, this is just the result we would expect.

The received view gains further confirmation because, in order to make sense of the communication as communication, one has to grant that the direction of eye movement was being voluntarily expressed. If one is convinced by LaBerge's results, then the notion of communicating during sleep undercuts Malcolm's privileging of the dream report and his claim that individuals could never communicate during sleep. We can empirically demonstrate, then, that the waking impression and the dream itself are logically separable. We need an account of why a sleeping individual would exhibit such systematic eye movements in LaBerge's experiments. One reasonable conclusion is that the participants are mentally alert (have state consciousness) even though they are not awake (are not displaying creature consciousness). The most important implication of LaBerge's study is that communication can arguably occur from within a dream without waking the individual up in any way. This supports the claim of the received view that we can be asleep and yet having a sequence of conscious experiences at the same time. The content of the dream occurs during sleep because the content is matched, beyond coincidence, to the expected systematic eye movements, which occur exactly as the received view predicts. Although Dennett could account for content matching eye movement, he could not account for what seems to be voluntary communication, which requires that an individual is conscious. This arguably leaves us in a position to treat dreaming experience as being as verifiable as any waking state.

The "content relative to eye movement" thesis can thus be accepted uncontroversially: much dream content really does occur during sleep, and this can be exploited by having participants communicate and thereby demonstrate conscious awareness. If the communication between LaBerge's participants and the sleep scientists convinces, then the received view is right to think that the content of dreams occurs during sleep and that the dream content during sleep matches the memory of that content. The findings are significant insofar as the participants influenced the dream content, and thereby the eye movements, through an act of agency within the dream.

The most important, though more controversial, potential implication of LaBerge's findings is that communication can occur from within the dream without the individual waking up in any way, thereby confirming the received view. The content of the dream occurs during sleep because the study confirms that eye movement during sleep really does match up with the content of the dream as reported after sleep. LaBerge chose agreed eye movements rather than speech because in this way the individual would remain asleep. The experiment also challenges Malcolm's claim that dreams cannot be communicated and are therefore logically unverifiable.

There is a possible objection to the received view's use of the results from LaBerge's experiments, which can be raised in line with Dennett's unconscious memory-loading process. The objector might argue that the unconscious takes care of the pending task of looking around in the dream in the pre-arranged manner. If, according to Dennett's retro-selection model, the unconscious uploads memories from the day, then one of the memories it might upload could be of the participant discussing with LaBerge the specific eye movements to be made. The unconscious might then carry out the eye movements, misleading scientists into believing the sleeping individual is conscious. But once we start to credit the unconscious with being able to negotiate with waking memories during sleep and to make judgements, we either have to change our picture of the unconscious or conclude that these individuals are consciously aware during sleep.

LaBerge carried out a further experiment in which the timing of dreams was measured from within the dream. Lucid dreamers were again used: they made the agreed eye signal, counted to ten in their dream, and then made the eye signal again. They were also asked to estimate the passing of ten seconds without counting (see fig. 5 below). LaBerge concluded from these experiments that experience in dreams happens at roughly the same rate as in waking life. This second experiment further blunts the Dennettian objection that the task is carried out by unconscious uploading, for it seems to require agency, rather than unconscious processing, to gauge how much time is passing before carrying out an action.

Fig 5

There is still room for scepticism towards dreams being consciously experienced during sleep. The sceptic could bite the bullet and say that lucid dreams are a special, anomalous case that does not apply to ordinary dreaming. Indeed, there is evidence that different parts of the brain are accessed, or more strongly activated, during lucid dreaming (prefrontal regions, the precuneus and frontopolar regions). So there remains the possibility that lucid dreaming is an example of consciously "waking up within a dream", whilst ordinary dreams are taken care of entirely by the unconscious (so that there is nothing it is like to have an ordinary dream occur during sleep). It might be useful to look at one report of the memory of a lucid dream:

In a dangerous part of San Francisco, for some reason I start crawling on the sidewalk. I start to reflect: this is strange; why can't I walk? Can other people walk upright here? Is it just me who has to crawl? I see a man in a suit walking under the streetlight. Now my curiosity is replaced by fear. I think, crawling around like this may be interesting but it is not safe. Then I think, I never do this – I always walk around San Francisco upright! This only happens in dreams. Finally, it dawns on me: I must be dreaming! (LaBerge & Rheingold, 1990: p.35)

Notice that this is not just the memory-report of a lucid dream; rather, it is the memory-report of an ordinary dream turning into a lucid dream. The report demonstrates an epistemic transition within the dream. With this in mind, it is difficult to maintain that lucid dreams are anomalous, or that the onset of lucidity is what first brings about conscious experience, for there is a gradual process of realization. We might thus also want to accept that the preceding ordinary dream is conscious too, because whenever individuals gain lucidity in their dreams there is a prior process of gradual realization present in the dream report. When an individual acquires lucidity in a dream, they are arguably already conscious; they simply begin to think more critically, as the above example demonstrates. Different parts of the brain are activated during lucidity, but these areas do not implicate consciousness itself; they are better correlated with the individual's thinking more critically.

The gradual transition from ordinary dream to lucid dream can be more fully emphasized. It is possible to question oneself in a dream whilst failing to arrive at the right answer: one might doubt that one is awake, ask another dream character to hit one, apparently feel pain, and so conclude that one is awake after all. It is important here to highlight the existence of partial-lucid dreams, in order to show that dreams often involve irrationality and that there are fine-grained transitions towards the fuller lucidity of critical thinking in dreams. A lucid dream, unlike an ordinary dream, is defined strictly in terms of the advanced epistemic status of the dreamer: the individual is having a lucid dream if they are aware that they are dreaming (Green, 1968: p.15). In the weak sense, lucid dreaming is defined simply as awareness that one is dreaming; defined more strongly, lucid dreams involve controllability and a level of clarity akin to waking life. As stated, individuals can come close to realizing they are dreaming and yet miss out: they might dream about the nature of dreaming, or about lucid dreaming, without realizing that they are currently in a dream themselves, for the norms of logical inference do not apply to ordinary dreams. One might look around in one's dream and think "I must remember the people in this dream for my dream journal" without inferring that one is in a dream and thereby acquiring lucidity. In another dream, an individual might intend to tell another person in real life, who features in the dream, something they have learned, just as soon as they wake up: the dreamer could be accused by dream character Y of carrying out an action A and think "I must wake up to tell Y that I did not carry out this action." Implicitly, the dreamer knows it is a dream, since this is needed to make sense of the belief that they must wake up; but they have not realized that the real Y will not share the beliefs of their dream representative. Such a dream involves awareness of the dream state without controllability over the dream, for the dreamer still goes along with the content as though it were real. There is another type of common, partial-lucid dream in which people wake up from a dream and, on returning to sleep, are able to re-enter it and change its course. This type of dream seems to involve controllability without awareness: the dreamer treats the dream as real rather than as an acknowledged dream, yet has much more control over the content than usual. Lucid dreamers used in experimental settings are much more experienced and have lucid dreams in the strongest sense: they are aware they are dreaming (and can maintain this awareness for a significant duration of time), have control over the content, and have a level of clarity of thinking akin to waking life.

There are a number of further implications of LaBerge's findings for various philosophical claims. The existence of communication during lucid dreaming challenges Augustine's idea that dreams are not actions. If we can gain the same level of agency we have in waking life during lucid dreaming, then it might be that even ordinary dreams carry some, albeit reduced, form of agency. We might want to accept this further claim if we hold that agency is not suddenly invoked during lucidity but is rather enhanced. The findings also open up the possibility of testing the claim that somebody who is dreaming cannot tell that they are not awake, further undermining Descartes' claim that dreaming and waking experiences are inherently indistinguishable. Previously, philosophers who wanted to resist the sceptical argument would allow that someone who is dreaming might be unable to distinguish the state they are in, while denying that someone awake is so unable – for example, Locke's point that if we are in pain then we must be awake. Had Descartes been a lucid dreamer, then when he was seated by the fire his phrase might have come out as: "I am now seated by the fire, but I have also been deceived in dreams into believing I was seated by the fire … though on occasion I have realized that I was just dreaming when apparently seated by the fire, and so was not deceived at all!" It is surely a contingent fact that people rarely have lucid dreams. If people lucidly dreamt much more often, then Descartes' sceptical dream argument would have had little to motivate it. Work on communicative lucid dreaming might also open up the possibility of testing further phenomenally distinguishing features between the two states, via individuals communicating statements about those features.

ii. Alternative Explanations for “Precognitive” Dreams

Dennett had cited an interesting type of dream report in which the ending of the dream is strongly implied by the stimulus of awakening; for example, a dream of getting married ends with the sound of church bells ringing, which coincides with the sound of the alarm clock. These seemingly precognitive dreams are sometimes referred to in the literature as "the anecdotes" because of their generally non-experimental form. Though they are remote from scientific investigation, the mere existence of the anecdotes causes trouble for the received view and requires explanation, and they provide extra evidence for Dennett's proposed paradigm shift in our thinking about dreams. On the other hand, if there is enough evidence to claim that dreams are consciously experienced during sleep, then the anecdotal data will not be a powerful enough counterexample and will not warrant such a paradigm shift. The biggest challenge the anecdotes pose is that, on occasion, the memory can significantly deviate from the actual experience. On this view, false memories do sometimes override the actual content of the dream, but such cases are the exception rather than the rule.

Though LaBerge's experiments suggest that the content of dreams consciously occurs during sleep, the findings on their own are insufficient to show that all dreams occur during sleep, and the existence of the anecdotes blocks one from drawing that conclusion. But the anecdotes can be explained on the received view. It is already known that the human species has specific bodily rhythms for sleep; further, it is a noted phenomenon that people often wake at their usual time even when their alarm clocks are off. Dennett himself says that he had got out his old alarm clock, which he had not used in months, and set the alarm himself (Dennett, 1976: p.157) – presumably for a time he usually gets up at anyway. If the subconscious can be credited either with creating a dream world (on the received view) or a dream memory (on the Dennettian view), and the personal body clock works with some degree of automaticity during sleep, one may well ask why the dream's anticipation (and symbolic representation) of the alarm need be precognitive in any paranormal sense. Had Dennett woken up earlier, he might have lain in bed realizing that his alarm clock was about to go off, and this would not be considered an act of precognition. Had he thought this during sleep, the received view would expect the thought to be covered symbolically via associative imagery. Thus, perhaps Dennett is not being impartial in his treatment of dreams: his argument begs the question, since he treats the received view's version of dreaming as inferior to his own theory only by assuming that thought in dreaming must be completely oblivious to the banalities of the future.

Arguably, the other anecdotes can be explained away too. Recall the most famous anecdote, in which Maury was dragged to the guillotine and the headboard fell on his neck, waking him up (Freud discusses the case of Maury in the first chapter of his Interpretation of Dreams). Maury may have had some ongoing, subconscious awareness of the wobbliness of his headboard before it fell, an awareness left out of the overall account; presumably the headboard did not fall without movement on his part, possibly movement in resistance to being beheaded, in which case Maury's would be a type of dream involving self-fulfilling prophecy. Dennett's account agrees that any unconscious awareness of the outside environment is represented differently in the dream. Maury would have had enough time to form the simple association of his headboard being like a guillotine from the French Revolution. If he had some awareness of the looseness of his headboard, then the thought "if this headboard falls on me it will be like being beheaded" would be appropriately synchronized into the dream.

Though it is possible to provide alternative explanations for some of the anecdotes, it is worth dividing Dennett's anecdotes into hard and soft varieties. A hard anecdote is outlined by Dennett: a car backfires outside, and an individual wakes with the memory of a dream logically leading up to being shot. The soft anecdotes, those already mentioned (such as Maury's and Dennett's own examples), can at least be alternatively explained: they might involve outside stimuli being unconsciously incorporated into the dream, or anticipation of the waking stimulus. The hard anecdotes, by contrast, cannot simply be explained by appeal to body clocks and anticipation during sleep, for in them the dreamer can have no idea what will wake them up, if anything (the alarm might not actually go off; the car backfiring could occur at any point). Notably, the only hard anecdote mentioned in Dennett's paper is made up by Dennett himself to outline his theory, rather than being a genuine example of a dream. Notice, too, that a hard anecdote could favour the received view: if experimenters switched off an alarm that a participant had set, and that individual woke with a dream which seemed to anticipate the alarm going off at the set time (for example, had Dennett's goat dream occurred despite his alarm not going off), then we might do best to conclude that the dream consciously occurred during sleep in a sequential order leading up to the awakening. Dennett is right that the issue is worth empirically investigating. If a survey found that hard anecdote-like dreams (such as waking to a car backfiring with a thematically similar dream) occur often, then the received view is either disconfirmed or must find a way to make room for such dreams.

Dennett might reply that he does not need to wait for empirical evidence to show that the hard anecdotes are common; he might simply deny that the alternative explanations of the soft anecdotes are credible. Even if those explanations are not credible (as a Dennettian might object), they at least draw attention to the extraneous variables in each of the anecdotes. Dennett's sole reason for preferring his retro-selection theory over the received view is that it can account for the anecdotal data. But Dennett used his own alarm clock, which he himself set the night before; Maury used his own bed; and Dennett of course admits that the evidence is anecdotal rather than experimental. In one of the few actual experiments carried out, water was slowly dripped onto a sleeping participant's back to wake them, but this was not timed and so is akin to a soft anecdote. More empirical work needs to be done to clarify the issue and for the debate to move forward.

The anecdotes are a specific subclass of prophetic dreams, and it is worth noting a related issue about dreams alleged to have a more strongly prophetic nature. From the subjective point of view, a dream about a plane crash on the night before (or morning of) September 11th, 2001 would seem premonitory; likewise, one might dream of a relative or celebrity dying and then wake to find that this has actually happened. But the probability of having a dream loosely (or even exactly) about an unforeseen future event is greatly increased by the fact that a single individual has many dreams over a lifetime – a lifetime in which to generate one or two coincidences. Meanwhile, the dreams which are not premonitory are under-reported, and the apparently prophetic dreams are over-reported. Big events are witnessed and acknowledged by large numbers of people, which increases the probability that someone will have had a dream with similar content (Dawkins, 1998: p.158). No doubt when the Twin Towers were attacked, some people just so happened to have had dreams about plane crashes, and a few might even have had dreams more or less related to planes crashing into towers, or even into the Twin Towers. Billions of dreams would have been generated the night prior, as they are every night around the world, and so some would invariably have "hit the mark." Dennett's anecdotes are somewhat different, but they too may be over-reported in this way. At the same time, they may also be under-reported, because they are not always obviously related to the stimulus of awakening, particularly where someone attends to the content of the dream and forgets how they awakened. Hence the extent to which we experience anecdote-like dreams is currently uncertain, though they can be explained on the received view.
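The arithmetic behind this point can be made explicit with a toy calculation (a minimal sketch; both figures below are assumptions chosen for illustration, not empirical estimates of anything):

    # Toy calculation: how often "prophetic" dreams would arise by chance alone.
    # p and the population size are illustrative assumptions, not measured values.
    p = 1 / 10_000        # assumed chance that one night's dream happens to
                          # match some salient future event
    nights = 365 * 50     # dream-nights over roughly fifty adult years

    # Chance of at least one coincidental "hit" in a single lifetime:
    print(1 - (1 - p) ** nights)          # ~0.84

    # Chance that someone, among many dreamers, "hits" on a given night:
    dreamers = 100_000_000
    print(1 - (1 - p) ** dreamers)        # ~1.0

On these invented figures, most individuals would have at least one apparently prophetic dream in a lifetime, and on any given night some member of a large population is all but guaranteed one – which is just what the point about under- and over-reporting predicts.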

4. The Function of Dreaming

The function of dreaming – exactly why we dream and what purpose it could fulfil in helping us to survive and reproduce – is standardly approached in the evolutionary terms of natural selection. Natural selection is best understood as operating through three principles: variation, heredity and selection. The living creatures within a species vary in their traits, those traits are passed on to their offspring, and there is competition for survival and reproduction amongst these creatures. The result is that creatures with traits better suited to surviving and reproducing will be precisely the ones to survive, reproduce and pass those successful traits on. Most traits of any organism are implicated in helping it to survive and thereby typically serve some purpose. Although the question of what dreaming might do for us remains to this day a mystery, there has never been a shortage of proposed theories. The first sustained attempt to account for why we dream comes from Freud (1900), whose theory was countered by his friend and later adversary Carl Jung. An outline of these two early approaches will be followed by a leading theory in the philosophical and neurobiological literature: that dreaming is an evolutionary by-product. On this view, dreaming has no function of its own but comes as a side effect of other useful traits, namely cognition and sleep. A contemporary theory opposing this view holds, by contrast, that dreaming is a highly advantageous state in which the content of the dream aids the organism in later survival-enhancing waking behaviour, by rehearsing the perception and avoidance of threat.

a. Early Approaches

i. Freud: Psychoanalysis

The psychoanalytic approach, initiated by Freud, gives high esteem to dreams as a key source of insight into, and the main method of opening up, the unconscious (Freud, 1900: §VII, E, p.381). Psychoanalysis is a type of therapy aimed at helping people overcome mental problems, and therapy often involves in-depth analyses of patients' dreams. In order to analyse dreams, though, Freudian psychoanalysis is committed to the assumption that dreams fulfil a certain function, and Freud explicitly put forward a theory of that function. He believed that dreams have not been culturally generated by humans, as Malcolm thought, but are rather a mental activity that our ancestors also experienced. During sleep, the mind is disconnected from the external world but remains instinctual. The psychoanalytic take on dreaming is better understood in terms of Freud's overall picture of the mind, which he split into id, ego and super-ego. The id is an entirely unconscious part of the mind, something we cannot gain control of but can only systematically suppress. It is present at birth, does not understand the differences between opposites, and seeks to satisfy its continually generated libidinal instinctual impulses (Storr, 1989: p.61). The id precedes any sense of self; it is chaotic, aggressive, unorganised, produces no collective will and is incapable of making value judgements. As the child develops and grows up, its instinctual infant needs are repressed and covered up (Storr, 1989: p.63). If humans were guided only by the id they would behave like babies, trying to fulfil their every need instantly and without being able to wait; the id has no conception of time. It is the id that will get an individual into social trouble. The super-ego is the opposing counterbalance to the id, containing all of our social norms, such as morality. Though we grow up and develop egos and super-egos, the id constantly generates new desires that put pressure on our overall psychologies and social relations; the work of repression is constant for as long as we are alive. The super-ego is mostly unconscious, and in the case of dreams the work of censorship is carried out by the super-ego. The ego was initially conceived by Freud as our sense of self; he later thought of it more as the seat of planning, delayed gratification and other types of thinking that developed late in evolution. The ego is mostly conscious and has to balance the struggle between the id and the super-ego while also navigating the external and internal worlds. This means that we experience a continual power struggle between the super-ego and the id. The ego has to meet a quota of fulfilling some of the id's desires, but only where this will affect the individual only marginally.

Freud's first topographical division of the mind comprised the conscious, the pre-conscious and the unconscious (the pre-conscious being that which is not quite conscious but could easily be brought into conscious awareness). His second division of the mind, into id, ego and super-ego, shows how consciousness becomes complicated when the two topographies are superimposed upon one another. Mental content from the unconscious constantly struggles to become conscious. Dreams are an example of unconscious content rising to consciousness and being partially confronted in awareness, albeit in distorted form. Dreaming is an opportunity for the desires of the id to be satisfied without causing the individual too much trouble. We cannot experience the desires of the id in naked form, for they would be too disturbing, so the super-ego finds various ways of censoring a dream and fulfilling the desires latently. Our conscious attention is effectively misdirected. What we recall of dreams is further distorted, as repression and censorship continue into attempted recollections of the dream. The resulting dream is always a compromise between disguise and the direct expression of the id's desires. The censorship is carried out in various ways (Hopkins, 1991: pp. 112-113): images which are associated with each other are condensed (two people might become one; characters may morph based on fleeting similarities), and emotions are displaced onto other objects. Dreams are woven together into a story-like narrative to further absorb the dreamer in the manifest content.

It is better that these wishes come disguised in apparently nonsensical stories, in order to stop the dreamer from awakening in horror (Flanagan, 2000: p.43). The content of the dream, even with some censorship in place, might still shock an individual on waking reflection, and it is therefore further distorted by the time it reaches memory, for the censor is still at work. Malcolm approvingly cited a psychoanalyst who had claimed that the psychoanalyst is really interested in what the patient thought the dream was about (that is, the memory of the dream) rather than the actual experience (Malcolm, 1959: pp. 121-123, Appendix).

Though Freud was not an evolutionary biologist, his theory of dreams can easily be recast in the evolutionary terms of natural selection: we dream in the way we do because dreaming aids individuals in surviving, and so the trait is passed on and positively selected. Individuals who dreamt in a significantly different way would not have survived and reproduced, and their way of dreaming would have died with them. But how does dreaming help individuals to survive? Freud postulates that dreaming simultaneously serves two functions. The primary function at the psychological level is wish fulfilment (Storr, 1989: p.44). During the day humans generate far more desires than they could possibly satisfy – the desires continually produced by the id. Desire often provides the impetus for action, and if acted upon, some of these desires might get the individual killed or into social trouble, such as isolation or ostracism, potentially resulting in a failure to reproduce. Since wish fulfilment during sleep is a mechanism that could keep such desires in check, the mechanism would be selected for. Freud claims that, having had the multifarious desires of the id fulfilled in sleep, the individual can keep them suppressed during the following days: the individual no longer needs to carry the action out in waking life, thereby potentially avoiding being killed or having their fitness severely reduced. This provides a reason why we would need to sleep with conscious imagery. Freud only applied his theory to humans, but it could be extrapolated to other animals that also have desires – for example, mammals and other vertebrates – though the desires of most members of the animal kingdom will surely be less complex than human desires.

The primary function at the physiological level is to keep the individual asleep whilst the unconscious desires are satisfied (Freud, 1900: §V, C, p.234). It is also clear that keeping an individual asleep, and stopping him or her from actually carrying out the desires during sleep, is beneficial to survival. So though the individual has their desires satisfied during sleep, this is done in a disguised manner. Here Freud separates the dream into manifest and latent content. The manifest content is the content actually experienced and recalled at the surface level (a dream of travelling on a train as it goes through a tunnel). The latent content is the underlying desire being fulfilled by the manifest content (a desire for sexual intercourse with somebody, which would land the dreaming individual in trouble). The individual is kept asleep by the unconscious disguising of the wishes.

What about dreams whose manifest content seems emotionally painful or distressing – anxiety dreams – rather than wish-fulfilling? How can our desires be satisfied when our experiences do not seem to involve what we wanted? Freud suggests that these can be cases where the dream fails to properly disguise its content; indeed, such dreams usually wake the individual up. Freud was aware of an early study suggesting that dream content is biased towards the negative (Freud, 1900: pp. 193-194), a finding which has subsequently been confirmed. Freud's distinction between manifest and latent content explains away this objection: the underlying, latent content carries the wish, and dreams are distorted by a psychological censor because, if they are too disturbing, they will wake the individual and the desire will remain unsuppressed. Perhaps dreams operate with an emotion which is the opposite of gratification precisely to distract the sleeping individual from the true nature of the desire; wish fulfilment might be carried out in the form of an anxiety dream where the desire is especially disturbing if realized. Other anxiety dreams can actually fulfil wishes through the displacement of emotions. Freud uses an example of one of his own dreams, in which a patient is infected but the dream fulfils his wish of alleviating the guilt he felt at being unable to cure her (the dream of Irma's injection). The dream allowed the blame to be lifted from himself and projected onto a fellow doctor who was responsible for the injection. Despite the displeasure of the dream, the wish for the alleviation of guilt was fulfilled.

The dream, for Freudians, relies on a distinction between indicative and imperative content. An indicative state of mind is one in which my representational systems show the world to be a certain way; the paradigm instance of indicative representation is belief. I might accurately perceive and believe that it is raining, and I thus have an indicative representational mental state. Imperative states of mind, on the other hand, are ones in which I desire the world to be a certain way – a way different from the way it currently is. I might desire that it will snow. A dream is an instance of an indicative representation replacing an imperative one in order to suppress a desire: we see something in a dream and believe it is there in front of us. This is what we desired, so once we believe we have it, the desire is vanquished, making it unlikely that we will try to satisfy it in waking life.

ii. Jung: Analytic Psychology

Trying to provide even a simple exposition of Freud's or Jung's theories of dreams will not please all scholars and practitioners who adhere to these thinkers' traditions. Drawing out the differences or similarities between their views is an exegetical task, and the resulting statements about their theories are thus always debatable. Many working in the traditions of psychoanalysis or analytical psychology opt for a synthesis of the two views. At the risk of caricaturing Jung's theory of dreams, the differences will be emphasized here in order to contrast his account with Freud's.

Like Freud, Jung believed that dream analysis is a major way of gaining knowledge about the unconscious – a deep facet of the mind that has been present in our ancestors throughout evolutionary history. But Jung's evolutionary story of dreaming differs from Freud's. Whereas Freud understood dreams as drawing on memory from the days preceding the dream (particularly the "day residue" of the day immediately before it) and on earlier childhood experiences, Jung thought the dream also worked with far more distant material: the collective unconscious. The collective unconscious is where the ancestral memories of the species are stored, common to all people. Some philosophers, such as Locke, had believed the mind to be a blank slate; Jung, by contrast, believed that the collective unconscious underlies the psychology of all humans and is even identical across some different species. The collective unconscious is especially pronounced in dreaming, where universal symbols, known as the archetypes, are processed. The distinction between signs and symbols is an important one here (Jung, 1968: p.3). Signs refer to what is already known, whereas symbols contain a multiplicity of meanings – more, indeed, than can ever be captured, thereby always leaving an unknowable aspect, always requiring further work in thinking about the dream, and hence hinting at future perspectives one may take toward oneself and one's dreams (Mathers, 2001: p.116).

Jung saw dreaming as playing an integral role in the overall architecture of the mind and as conducive to survival in the world, for during the process of dreaming the day's experiences are connected to previous experiences and to our whole personal history. Whereas Freud thought that the adaptive advantage of dreams was to distract us and thereby keep us asleep, Jung thought the reverse: we need to sleep in order to dream, and dreaming serves multiple functions. What does dreaming do for us, according to Jung? Dreams compensate for imbalances in the conscious attitudes of the dreamer, since psychology depends upon the interplay of opposites (Jung, 1968: p.47). By dreaming of possibilities opposite to waking life – such as a logical individual (with a strong thinking function) having dreams that are much more feeling-based – the balance is restored (Stevens, 1994: p.106). "Dreams perform some homeostatic or self-regulatory function … they obey the biological imperative of adaptation in the interests of personal adjustment, growth, and survival" (Stevens, 1994: p.104). They play a role in keeping the individual appropriately adapted to their social setting. Dreaming also carries out a more general type of compensation, concerning the psychology of gender: the psyche is essentially androgynous, and so dreams provide an opportunity to balance the overall self with the traits of an individual's opposite gender that also make up their personality and mind. Hence dreams are not the mere "guardians of sleep", as Freud thought, but rather a necessary part of maintaining psychological well-being and development. When the conscious is not put in touch with the unconscious, homeostasis is lost and psychological disturbance results (Jung, 1968: p.37). Dreams serve up unconscious content that has been repressed, ignored or undervalued in waking life. Although he emphasized an objective element in dreaming (the unconscious often makes use of universal and culturally shared symbols), Jung was opposed to the possibility of a fixed dream dictionary, because the meaning of symbols changes depending on the dreamer and over time, as dreamers come to associate images with different meanings.

Jung agrees with Freud that there is a wealth of symbols and allegorical imagery that can stand in for the sexual act, from breaking down a door to placing a sword in a sheath. In the course of his analysis, Freud would gradually move from the manifest content to the latent content, though he did encourage the dreamer to give their own interpretations through free association. Jung believed that the unconscious choice of symbol is itself just as important and can tell us something about the individual (Jung, 1968: p.14). Alternatively, apparent phallic symbols might symbolize other notions: a key in a lock might symbolize hope or security rather than anything sexual. The dream imagery – what Freud called the manifest content – is itself what reveals the meaning of the dream. Where Freud had asked "What is the dream trying to hide?", Jung asks "What is the dream trying to express or communicate?", because dreams occur without a censor: they are "undistorted and purposeful" (Whitmont & Perera, 1989: p.1).

Another function of dreaming that distinguishes Jung's account from Freud's is that dreams provide images of the possibilities the future may have in store for the sleeping individual. Not that dreams are precognitive in a paranormal sense, Jung emphasized; but the unconscious clearly does entertain counterfactual situations during sleep. The possibilities entertained are usually general aspects of human character common to us all (Johnson, 2009: p.46). Dreams sometimes warn us of dangerous upcoming events, but all the same they do not always do so (Jung, 1968: p.36). The connection of present events to past experience is where dreams are especially functional (Mathers, 2001: p.126), and here Freud's and Jung's theories clearly overlap. But whereas Freud assessed dreams by looking back into the past, especially the antagonisms of childhood experience, dreaming for Jung importantly also lays out the possibilities of the future. This clearly has survival value, since it is in the future that the individual will eventually develop and try to survive and reproduce. Dreaming for Jung is like "dreaming" in the other sense of the word – aspiring, wishing and hoping: dreams point towards our future development and individuation. Note, however, that personal development brought about by dreaming might depend upon regularly recalling one's dreams, something to which Freud's claim of wish fulfilment need not be committed, and it is not clear how such an account deals with animals' dreaming.

Dreams are a special instance of the collective unconscious at work, in which much more ancient symbolism can be traced. The collective unconscious gives rise to the archetypes: "the universal patterns or tendencies in the human unconscious that find their way into our individual psyches and form us. They are actually the psychological building blocks of energy that combine together to create the individual psyche" (Johnson, 2009: p.46). Dreams demonstrate how the unconscious processes thought in "the language of symbolism" (Johnson, 2009: p.4). The most important function of dreams, on this view, is teaching us how to think symbolically and how to deal with communication from the unconscious. Dreams always have a multiplicity of meanings; they can be re-interpreted and new meanings discovered. Jung believed that "meaning making enhances survival" (Mathers, 2001: p.117). According to Flanagan, it is much harder to construe Jung's theory of dreams than Freud's as adhering to the basic tenets of evolutionary biology, for two fundamental reasons. Firstly, why would the collective unconscious's expression of symbols relevant to the species bring any gains in reproductive success? Secondly, Jung's theory is more often interpreted in Lamarckian, not Darwinian, terms, which leaves it out in the cold with regard to accepted evolutionary biology (Flanagan, 2000: p.44). On Lamarck's conception of evolution, traits developed during a lifetime can be passed on to the next generation: Lamarck believed that the giraffe got its long neck because individuals stretched their necks during their lifetimes to reach high-up leaves, making the neck longer, a trait which was then passed on to their offspring. On the widely favoured Darwinian view, by contrast, those giraffes that just so happened to have longer necks had access to a source of food unavailable to those with shorter necks, and so were selected. Jung's theory is interpreted as Lamarckian rather than Darwinian because symbols learned during an individual's particular historical period are supposed to be genetically encoded and passed on (Flanagan, 2000: p.64). Perhaps one could reply in Jungian-Darwinian terms that those individuals who just so happened to be born with a brain receptive to certain symbols having certain meanings (rather than having to learn them) survived over those who did not. But then Flanagan's first objection remains: what advantage would this have in terms of survival? Others have argued that the collective unconscious is much more scientifically credible than critics have taken it to be. On this line of argument, the collective unconscious essentially coincides with current views about innate behaviours in fields such as socio-biology and ethology (Stevens, 1994: p.51). Appropriate environmental (or cognitive) stimuli trigger inherited patterns of behaviour or thought, such as hunting, fighting and mothering. In humans there are, for example, the universal expressions of emotion (anger, disgust, fear, happiness, sadness, surprise), and humans and other mammals have an inbuilt fear of snakes that we do not need to learn. Such patterns can be found in dreams. On this view the dream images are generated by homologous neural structures shared amongst animals, not merely passed on to the next generation as images. The Jungian can thus dodge the accusation of Lamarckism by arguing that dreams involve inherited patterns of behaviour based on epigenetic rules (the genetically inherited rules upon which development then proceeds for an individual) and by arguing that epigenetics need not endorse Lamarckism (epigenetics is a hot topic in the philosophy of biology; for a cogent introduction, readers can consult Jablonka & Lamb, 2005). Alternatively, in light of epigenetics, the Jungian can defend a Lamarckist view of evolution – a re-emerging but still controversial position.

To rehearse the differences: Freud believed that dreams are the result of a power struggle between the id's constantly generated desires and the super-ego's censorship of the exact nature of those desires; various techniques are employed by the censor, including emotional displacement, the weaving together of a narrative to absorb attention, and the reworking of latent desires into the manifest content we experience and occasionally recollect; dreams are a distraction to keep us asleep; and dreaming is a mechanism that has been selected because it provides social stability amongst individuals. Dreams, for Freud, essentially deal with one's relationship to one's past. Jung, by contrast, thought dreams point towards the future development of the individual: the experiences which process symbols shared amongst the species are a form of compensation that keeps the individual in psychological homeostasis. Dreams do not especially deal with sexuality but represent a more general attempt by the individual to understand themselves and the world in which they exist.

b. Contemporary Approaches

i. Pluralism

Flanagan represents a nuanced position on which dreaming has no fitness-enhancing effects for an organism that dreams, but neither does it detract from fitness: dreams are by-products of sleep. The notion of an evolutionary by-product, or spandrel (see figures 6 and 7), was introduced by the evolutionary Pluralists Gould and Lewontin in "The Spandrels of San Marco and the Panglossian Paradigm", where the authors borrowed the term from the field of architecture. Evolutionary Pluralism claims that traits in the natural world are not always the result of natural selection but may exist for a plurality of other reasons. The debate between Adaptationists and Pluralists centres on the pervasiveness of natural selection in shaping traits. Pluralists look to factors other than natural selection in the shaping of a trait, such as genetic drift and structural constraints on development. One important example is the spandrel: some traits are simply necessary by-products of the design of the overall organism.

Fig 6

Fig 7

The two architectural examples, figures 6 and 7, display spandrels as the roughly triangular spaces between arches. The point Gould and Lewontin make by introducing the notion of the spandrel is that some aspects of the design of an object inevitably come as side-effects: perhaps the architect only wanted archways and a dome, but in producing such a work of architecture, spandrels cannot be avoided. If architecture can tell us something about metaphysical truths, then might dreams be such spandrels, lodged between thought and sleep? Flanagan, an evolutionary Pluralist, argues that there is no fitness-enhancing function of dreaming; though, as a matter of structure, we cannot avoid dreaming, for dreams sit between the functioning of the mind and sleep, like spandrels between two architecturally desired pillars in an archway. Hence Flanagan's argument is that, in evolutionary biological terms, dreaming is not for anything: it just comes as a side effect of the general architecture of the mind. He states more strongly that "so long as a spandrel does not come to detract from fitness, it can sit there forever as a side effect or free rider without acquiring any use whatsoever" (Flanagan, 2000: p.108). Dreaming serves neither the function of wish-fulfilment nor that of psychological homeostasis. Flanagan emphasizes the disorganized nature of dreaming, something to which any individual's experience of dreams can attest. To be sure, wish-fulfilment and apparent psychological compensation sometimes appear in dreams, but this is because we are simply thinking during sleep, and so a myriad of human cognition will take place. Another reason Flanagan puts forward for the view that dreams are spandrels is that, unlike for sleep, there is nothing close to an ideal adaptation explanation for dreaming (Flanagan, 2000: p.113). An ideal explanation lays out the evidence that selection has operated on the trait, shows that the trait is heritable, explains why some types of the trait are better adapted than others in different environments and amongst other species, and offers evidence of how later versions of the trait are derived from earlier ones. The spandrel thesis about dreaming is more plausible, then, because any attempt to argue that dreams have a function ends up as nothing more than a "just-so" story – the result of speculation and guesswork that cannot meet the standards of an ideal adaptation explanation.

ii. Adaptationism

Antti Revonsuo stands in opposition to Flanagan in arguing that dreaming is an adaptation. He also stands in opposition to Freud and Jung, in that for him the function of dreams is neither to deliver wish fulfilment whilst keeping the individual asleep nor to connect the individual to the symbolism of the collective unconscious. Revonsuo strives to deliver an account that meets the stringent criteria of a scientific explanation of dreaming as an adaptation. Though there have been many attempts to explain dreaming as an adaptation, few come close to the ideal adaptation explanation, and those functional theories favoured by neurocognitive scientists (for example, that dreams consolidate memories or purge useless information) cannot clearly distinguish their account of the function of dreams from the function of sleep – that is, from the spandrel thesis. The Threat Simulation Theory can be clearly distinguished from the spandrel thesis. According to Revonsuo, the actual content of dreams is helpful to the survival of an organism because dreaming enhances survival-relevant behaviours in waking life, such as perceiving and avoiding threats. Revonsuo's Threat Simulation Theory presents dreams as specializing in the recreation of life-like threatening scenarios. His six claims are as follows (Revonsuo, 2000):

Claim 1: Dream experience is an organized, rather than disorganized, state of mind.
Claim 2: Dreaming is tailored to, and biased toward, simulating threatening events of all the kinds found in waking life.
Claim 3: Genuine threats experienced in waking life have a profound effect on subsequent dreaming.
Claim 4: Dreams provide realistic simulacra of waking-life threatening scenarios and of waking consciousness generally.
Claim 5: Simulation of perceptual and motor activities leads to enhanced performance, even when the rehearsal is not later recalled.
Claim 6: Dreaming has been selected for.

The Threat Simulation Theory is committed to a conception of dreams as a realistic and organized state of consciousness, as implied by Claim 1. This claim motivates the challenge to any spandrel thesis of dreams: why would dreams show the level of organization that they do – the construction of, and engagement with, virtual realities – if they were mere "mental noise", as a spandrel thesis might imply or be committed to? Dreaming is allegedly similar to waking life and is indeed experienced as waking life, at least at the time of the dream (Valli & Revonsuo, 2009: p.18). These features of dreams are essential for setting in motion the same sort of reaction to threat that will occur in waking life and aid survival; hence the dream self should be able to react to the dream situation with reasonable courses of action to combat the perceived threat, in ways that would also be appropriate in real life. Revonsuo appeals to phenomenological data showing that a significant proportion of dreams involve situations in which the dreamer comes under attack. We do indeed generally have more negative emotions in REM-related dreams. The claim is well supported by experiential examples of dreaming: anxiety is the most frequent emotion in dreaming, joy/elation second and anger third (Hobson, 1994: p.157), making two thirds of dream emotion negative. That dreams process negative emotions likely reflects the fact that the amygdala is highly activated during dreaming. The amygdala is also the key brain structure implicated in the "fight or flight" sympathetic nervous system response to especially intense, life-threatening situations; during waking hours it is engaged in handling unpleasant emotions like anxiety, intense fear and anger. This is well explained by the Threat Simulation Theory. We experience more threats in dreams (and especially demanding ones) than in waking life because the simulation is selected to be especially difficult, leaving the individual bestowed with a surplus of successful threat-avoidance strategies, coping skills and abilities to anticipate, detect and out-manoeuvre the subtleties of particular threats.

Though Revonsuo claims that dreams specialize in the full panoply of threatening scenarios that affect overall survival, the most obvious example is the "fight or flight" response. All instances of stress in waking life occur when an individual feels threatened, and this feeds back into the system that registers what is dangerous and what is not. In waking life, the fight or flight response essentially involves making a snap decision, in a life or death situation, to fight a predatory enemy or flee the scene. The activation of the sympathetic nervous system – the stress response implicated in fight or flight – is an involuntary, unconsciously initiated process. One might object to Claims 1 and 4 that many dreams simply do not seem to be realistic representations of threatening scenarios of the kind requiring the fight or flight response; one study found, for example, that many recurrent dreams are unrealistic (Zadra et al, 2006). Revonsuo has some room for manoeuvre here, for he believes that dreams may no longer be adaptive owing to dramatic environmental change: the likely period of adaptation for human dream consciousness was when humans were hunter-gatherers in the Pleistocene environment, over a period of hundreds of thousands of years, so dream content may no longer appear to serve the same function now that we live in a radically different environment. Dreams are then comparable to the human appendix – useful and adaptive in times when our diet was radically different, but now an essentially redundant and occasionally maladaptive vestigial trait.

Dreams may also have become more lax in representing threatening content in the Western world, since life-threatening situations are no longer anywhere near as common as in the evolutionary past. This will be the case because a lack of exposure to threat in waking life does not activate the threat simulation system of dreaming as it did in earlier times. Revonsuo uses evidence of ordinary dream reports from the population, but he also cites cases of psychopathology such as the dreams of individuals with post-traumatic stress disorder (PTSD), where, crucially, their traumatic and threatening experiences dramatically affect what they dream about. Thus the relationship between dreaming and experiencing threats in waking life is bi-directional: dreams try to anticipate the possible threats of waking life and improve the speed of perceiving and ways of reacting to them, while any threats actually experienced in waking life alter the course of later dreaming to re-simulate those threats. This feedback element dovetails with Claim 4, since an accurate picture is built up as information is drawn from the real world.

Perhaps the updating element of the Threat Simulation Theory (Claim 4 – that individuals learn from what is threatening and this gets passed on to offspring) suffers from the same accusation of Lamarckism that faces Jung’s theory. Revonsuo believes that dreaming during sleep allows an individual to repetitively rehearse the neurocognitive mechanisms that are indispensable to waking life threat perception and avoidance. Flanagan asks why behaviour that is instinctual would need to be repetitively rehearsed, but Revonsuo’s answer amounts to an explanation of how instinct is actually preserved in animals. Not all of our dreams are threatening, but this fact arguably helps the Threat Simulation Theory by showing that there is variation in the trait and that the threatening type came to dominate. The neural mechanisms, or hardware, underlying the ability to dream were transferred genetically and became common in the population. Those individuals with an insufficient number of threatening dreams were not able to survive to pass on the trait because they were left ill prepared for the trials and tribulations of the evolutionary environment. One might still object that survival – the avoidance of threat – is only one half of propagating a trait. Individuals also have to reproduce for the trait to be passed on, so dreams ought also to specialize in enhancing behaviours that help individuals to find mates. Animals and humans have many and varied courting rituals, requiring complicated behaviours. If dreaming can, and does, make any difference to behaviour in waking life, as Revonsuo must claim, then why do mate selection behaviours not figure more prominently in dreams? In humans, no more than 6% of adult dreams contain direct sexual themes (Flanagan, 2000: p.149). There is also a slight gender difference: males tend to dream more of male characters than female characters. Given that many threats would have come from same-sex individuals within one’s own species, this fares well for the claim that dreams are for threat perception and rehearsal, but not for courtship rituals that could help pass on the trait. There is a way around this objection, however. Threatening and sexual encounters are such polar opposites that different parts of the nervous system deal with them – the sympathetic and parasympathetic, respectively – and dramatically alternating between the two could potentially disrupt sleep. On this reply, simply surviving is prioritized over reproducing.

5. Dreaming in Contemporary Philosophy of Mind and Consciousness

a. Should Dreaming Be a Scientific Model?

Visual awareness has been used as the model system in consciousness research. It is easy to manipulate and investigate, and since humans are predominantly visual creatures, visual awareness is an excellent paradigm of conscious experience, at least for humans. A good example of the virtues of using paradigm cases, from biological science, is Drosophila melanogaster (the common fruit fly). This organism is often used in experiments and routinely cited in papers detailing experiments or suggesting further research. A scientific model is a practical necessity, and the fruit fly has such a fast reproduction rate that it allows geneticists to see the effects of genes over many generations – effects they could not observe within their own lifetimes in, say, humans. The fruit fly also has enough genetic similarity to humans that the findings can be extrapolated. Hence, a model system has something special about it – some ideal set of features for empirical investigation. Model systems are good because they ideally also display the phenomena being investigated in an “exceptionally prominent form” (Revonsuo, 2006: pp. 73-74).

i. Dreaming as a Model of Consciousness

Revonsuo (2006) argues that dreaming should also have a place alongside visual awareness, as a special instance of consciousness and therefore a worthy model to be studied. Revonsuo argues that the dreaming brain also captures consciousness in a “theoretically interesting form” (Revonsuo, 2006: p.73). The claim is that dreaming is an unusually rare example of “pure” consciousness, being as it is devoid of ongoing perceptual input, and might therefore deserve special status in scientific investigation. The system is entirely dependent on internal resources and is isolated from the external world (Metzinger, 2003: p.255). Representational content changes much more quickly than in the waking state, which is why Metzinger claims that dreams are more dynamic (Metzinger, 2003: p.255). Whereas Malcolm and Dennett had argued that dreams are not even plausibly conscious states, Revonsuo and Metzinger argue that dreams may reveal the very essence of consciousness because of the conditions under which dream consciousness takes place. Crucially, there is a blockade of sensory input. Only very rarely will sensory input contribute to information processing during dreaming (Metzinger, 2003: p.257), for example in dreams where the sound of the alarm clock is interwoven into a narrative involving wedding bells. This means that most other dreams are “epistemically empty” with regard to the external environment. At no other time is consciousness completely out of sync with the environment and, so to speak, left to its own devices. Dreaming is especially interesting, and fundamentally similar to waking consciousness, because it entails consciousness of a world which we take to be the real one, just as we do during waking consciousness. “Only rarely do we realize during the dream that it is only a dream. The estimated frequency of lucid dreams varies in different studies. Approximately 1 to 10% of dream reports include the lucidity and about 60% of the population have experienced a lucid dream at least once in their lifetime (Farthing, 1992; Snyder & Gackenbach, 1988). Thus, as many as 90 to 99% of dreams are completely non-lucid, so that the dream world is taken for real by the dreamer” (Revonsuo, 2006: p.83).

Revonsuo does not so much argue for the displacement of visual awareness as a model system as argue that dreaming is another exemplary token of consciousness. Dreaming, unlike visual awareness, is untainted by both the external world and behavioural activity (Revonsuo, 2006: p.75). Dreams also reveal the especially subjective nature of consciousness: the creation of a “world-for-me”.

Another reason that might motivate modelling dreaming is that it might turn out to be a good instance for looking at the problem of localization in consciousness research. The phenomenology of dreaming is demonstrably not ontologically dependent on any neural process that is inactive during dreaming, so any parts of the brain not used in dreaming can be ruled out as unnecessary for phenomenal consciousness (Revonsuo, 2006: p.87).

During dreams there is also an output blockade (Revonsuo, 2006: p.87; Metzinger, 2003: p.257), making dreaming an especially pure and isolated system. Malcolm had argued that dreaming was worthy of no further empirical work because the notion was simply incoherent, and Dennett was sceptical that dreams would turn out to even involve consciousness. The radical proposal now is that dreaming ought to be championed as an exemplar of conscious experience, a mascot for scientific investigation in consciousness studies. It is alleged that dreams can recapitulate any experience from waking life, and for this reason Revonsuo concludes that the same physical or neural realization of consciousness is instantiated in both dreaming and waking experience (Revonsuo, 2006: p.86).

ii. Dreaming as a Contrast Case for Waking Consciousness

Windt & Noreika (2011) first lay out the many alleged problems with taking dreaming as a scientific model in consciousness research, and then propose positive suggestions for the role dreaming can play in consciousness studies. They reject dreaming as a model system but suggest it will work better as a contrast system to wakefulness. The first major problem with using dreaming as a model of consciousness is that, whilst in biology everybody knows what the fruit fly is, there are a number of different conceptions of, and debates surrounding, the key features of dreaming. Hence there is no accepted definition of dreaming (Windt, 2010: p.296). Taking dreaming as a model system depends upon exactly how dreaming is characterized. Revonsuo simply assumes his conception of dreaming is correct: he believes that dreaming can be a model of waking consciousness because dreams can be identical replicas of waking consciousness involving all possible experiences. Windt & Noreika believe that dreams tend to differ from waking life in important ways.

There are further problems with modelling dreams. Collecting dream reports in the laboratory might yield about five reported dreams a night. This is not at all practical compared to visual awareness, where hundreds of reportable experiences can occur in minutes without interference (Windt & Noreika, 2011: p.1099). Scientists do not even directly work with dreams themselves, but rather with descriptions of dreams. There is also the added possibility of narrative fabrication (the difference between dream experience and dream report). It is known that during the dream itself we do not have the clarity of awareness of waking life, and that our memory during the dream experience is poor. Though lucid dreaming is an exception to this rule, it differs from ordinary dreaming, and lucid dreaming has not been proposed as a model. The laboratory is needed to control and measure the phenomenon properly, but this can itself influence the dream content and report, introducing the observer effect and interviewer biases.

It is not clear that these are insurmountable methodological problems. It might be very difficult to investigate dreams, but this comes with the territory of trying to investigate isolated consciousness. Revonsuo is also not suggesting removing visual awareness as the paradigm model, only that dreaming ought to stand alongside it. The fact remains, however, that despite suggestions from Revonsuo and others, dreaming has simply not yet been used as a model (Windt & Noreika, 2011: p.1091), regardless of the theoretical incentives.

The problems with the modelling approach point us toward the more modest contrast analysis approach. Windt & Noreika argue that the contrast analysis of dreaming with other wake states should at least be the first step in scientific investigation, even if we wanted to establish what ought to be a model in consciousness research (Windt & Noreika, 2011: p.1102). What are the reasons for endorsing the positive proposal of using dreams in a contrast analysis with waking life? Though waking consciousness is the default mode through which individuals experience the world, dreaming is the second global state of consciousness. As displayed in reports, dreams involve features which are markedly different from those of waking life – bizarreness and confabulation being key hallmarks of dreaming but not of waking consciousness. The two states of waking and dreaming are mediated by radically different neurochemical systems: during wakefulness the aminergic system is predominantly in control, and during dreaming the cholinergic system takes over (Hobson, 1994: pp. 14-15; Hobson, 2005: p.143). This fundamental difference offers an opportunity to examine the neurology and neurochemistry that underpin both states of consciousness.

The contrast analysis does not ignore dreaming, but proposes a more modest approach. With research divided between waking consciousness, dreaming and a comparison of the two states, this more practical approach will yield better results, so Windt and Noreika argue. By using the proposed method, we can see how consciousness works both with and without environmental input. Surely both are equally important, rather than there being reason to privilege one over the other; after all, both are genuine examples of consciousness. This approach also means that the outcome will be mutually informative as regards the two types of consciousness, with insights gained in both directions. Dreaming is a salient example of consciousness operating with radically changed neural processing, and so it is worth comparing with waking consciousness (Windt & Noreika, 2011: p.1101). With the contrastive analysis there is the prospect of comparing dream consciousness to both pathological and non-pathological waking states, and thereby the promise of better understanding how waking consciousness works and how it can malfunction. We spend about a tenth of our conscious lives dreaming, and yet it is one of the most difficult mental states to scientifically investigate. The contrast analysis is put forward as a possible solution to the problem of how to integrate dreams into consciousness studies. Windt and Noreika add the further proposal that dreams can be more specifically contrasted with pathological, non-pathological and altered states of consciousness. Unlike the modelling option, the contrast analysis seems to be how dreams have hitherto been investigated, and so it has already proven to be a viable option. This should not definitively preclude the modelling option, however. Perhaps modelling dreams really would be ideal simply because it involves isolated consciousness, and the practicality concerns may be overcome in the future.

It remains the case that “one of the central desiderata in the field of empirical dream research is a commonly accepted definition of dreaming” (Windt, 2010: p.296). There are also examples of consciousness at the periphery of sleep that make it difficult to delineate the boundaries of what does and does not count as a dream (Mavromatis, 1987: p.3). ‘Hypnagogic’ experiences are the thoughts, images or quasi-hallucinations that occur prior to and during sleep onset, whilst ‘hypnopompic’ experiences are thoughts, images or quasi-hallucinations that occur during or just after waking; these states are now known collectively as hypnagogia. This is a problem that both the modelling and the contrast analysis approaches must confront.

b. Is Dreaming an Instance of Images or Percepts?

Some believe that dreaming involves mental imagery of both a hallucinatory and an imagistic nature (Symons, 1993: p.185; Seligman & Yellen, 1987). However, other philosophers, such as Colin McGinn, believe that dreams should be thought of only in terms of images (the imagination) or percepts (perceptual experience). It is better not to inflate our ontology by invoking a third category on which dreams are sui generis (of their own kind) if we do not need to. There are reasons why we should not believe dreams are a unique mental state. It would be strange, McGinn claims, if “the faculties recruited in dreaming were not already exploited during waking life” (McGinn, 2004: p.75). So he believes that dreaming is an instance of a psychological state we are already familiar with from waking life: perception (hallucination) or the imagination. There is a separate epistemic question of whether dreams involve beliefs or imaginings (Ichikawa, 2009). This debate comes apart from the psychological one as to whether the phenomenology of dreaming is perceptual or imagistic, because all four possible combinations can be held: dreams might be hallucinations which involve belief; they might be hallucinations from which we do not generate belief, but which we rather entertain as imagined possibilities; they might involve the psychological state of imagination and yet we happen to believe in our imaginings as though they were real; finally, they might involve imaginings which we recognize as such – imaginings and not beliefs. Though the two debates come apart, in waking life the psychological state of imagining is usually accompanied by the propositional attitude of imagining that something is the case, rather than believing that the thing psychologically imagined is the case. Perception, or hallucination, usually triggers belief that what is perceived is the case, rather than the mere imagining that it is. So if dreams are psychologically defined in terms of percepts, we can expect that dreams likely also involve belief, because percepts usually trigger belief. This is not always the case, however: a schizophrenic might come to realize that his hallucinations do not really express a way the world is, and so no longer believe what he perceives. If dreams are psychologically imaginings, then we would usually expect them to be recognized as not being real, and therefore to trigger the propositional attitude of imagining rather than believing that what we are imagining is the case. There are more nuanced views than the four possible combinations. McGinn claims that dreaming involves the imagination and quasi-belief: when we dream we are immersed in the fictional plot, as we are in creative writing or film.

i. Dreaming as Hallucination

It is worth noting that the psychological literature assumes that dreams are hallucinations that occur during sleep. It is the typically unquestioned psychological orthodoxy that dreams are perceptual/hallucinatory experiences. Evidence for dreams as percepts can be cited from neuroscience:

The offline world simulation engages the same brain mechanisms as perceptual consciousness and seems real to us because we are unaware that it is nothing but a hallucination (except very rarely in lucid dreams). (Valli & Revonsuo, 2009: p.19)

In REM sleep, when we are lying still due to muscle paralysis, the motor programs of the brain are nonetheless active. Phenomenologically, our dream selves are also highly active during dreams. As the content of a dream reveals, we are always on the move. Apart from the bodily paralysis, the body physiologically acts as though it perceives a real world, continually reacting to events in that apparently real world. It is known that individuals will carry out their dream actions if the nerve cells that suppress movement are surgically removed or have deteriorated due to age, as demonstrated in people with REM Sleep Behavior Disorder. This suggests that dreaming involves the ordinary notion of belief, because it is tied to action in the usual way, and it is only because of an additional action-suppressing part of the brain that these actions are not carried out. We know that percepts ordinarily trigger belief and corresponding action, whereas the imagination does not.

The claim that dreams are hallucinations can find support in the further claim that dreaming replicates waking consciousness. Many philosophers and psychologists note the realistic and organized nature of dreams, and this has been couched in terms of a virtual reality, involving a realistic representation of the bodily self which we can feel. Consider false awakenings, where individuals believe they have woken up in the very place they went to sleep, yet are actually still asleep and dreaming. False awakenings can arguably be used in support of the view that dreaming is hallucinatory, because such dreams detail a realistic depiction of one’s surroundings. Dreams fit the philosophical concept of hallucination as an experience intrinsically similar to legitimate perceptual states, with the difference that the apparent stimulus being perceived is non-existent (Windt, 2010: pp. 298-299). It is the job of perceptual states to display the self in a world.

Empirical evidence suggests that pain can be experienced in dreams; pain is perceptual in nature, and the imagination arguably cannot replicate it. So dreams must be hallucinatory, according to this line of reasoning. It is not clear, though, whether this rules out dreams being mainly imaginative with occasional perceptual elements introduced. Pain is, after all, a rarity in dreaming.

We can plausibly speculate that human ancestors in the past fell prey to a major metaphysical confusion and thought that their dreams involved genuine experiences, including visitations from the dead and entry into a different realm. Though few believe this today, we can sympathize with our ancestors’ mistake, which finds an echo in the occasional everyday experience of being hesitant to categorize a memory as that of a dream or of a waking event. We usually decide, not based on introspection, but on logical discrepancies between the memory and other factors. These considerations suggest that we cannot always easily distinguish between dream experience and waking experience, because dreams are another instance of percepts.

Finally, we seem to have real emotions during dreams, as the natural reaction to our perceptions. According to the percept view of dreams, we dream that we are carrying out actions in an environment, but our accompanying emotions are not dreamed; they play out alongside the rest of the dream content. The intensity of the emotions actually felt is what the percept theorist will take as support for the claim that the content of the dream is not merely imagined but is the natural response to realistic, perception-like experience.

ii. Dreaming as Imagination

A number of philosophers believe that dreaming is just the imagination at work during sleep (Ichikawa, 2008; Sosa, 2007; McGinn, 2004, 2005). On this view, any conscious experiences during sleep are imagistic rather than perceptual. McGinn puts forward some reasons in favour of believing that dreams are imaginings. Firstly, he introduces the Observational Attitude: if we are perceiving (or hallucinating), say, two individuals having a conversation, then we might need to strain our senses to hear or see what they are discussing. During dreams, of course, the body is completely relaxed and the sleeping individual shows no interest in his or her surroundings. When imagining in waking life, I try to minimize my sensory awareness of the surrounding environment in order to get a better and more vivid picture of what it is I am imagining. For example, if I want to imagine a new musical tune, I do best to switch the radio off or cover my ears. Dreaming is the natural instance of shutting out all of our sensory awareness of the outside world, arguably in order to entirely engage the imagination. This suggests that dreamers are hearing with their mind’s ear and seeing with their mind’s eye: they are entertaining images, not percepts. Secondly, McGinn claims that percepts and images can coexist in the mind in waking life: we can perceive at the same time as imagining. The novel presupposes that the reader can perceive the text at the same time as imagining its content. If dreams are hallucinatory, McGinn argues, it ought likewise to be possible to imagine while dreaming; if I am surfing in a dream, I should be able to imagine the Eiffel Tower at the same time. In dreams we cannot do this, McGinn claims: there is just the dream, with no further ability to simultaneously imagine other content. Related to the Observational Attitude is the notion of Recognition in dreams. In dreams we seem already to know who all of the characters are, without making any effort to find out (without using any of our senses). This might suggest that in dreams we are partly in control of the content (even if we fail to realize it), because we allegedly summon up the characters that we want to. We recognize who dream characters are, such as relatives, even when they look drastically different. It is not clear, on the other hand, that we really are in control of other dream characters or that we accurately recognize them. For example, Gerrans (2012) has claimed that the same mechanism of misidentification is present in dreams as in certain delusional states where the feeling of familiarity of a person is over-active and aimed at the wrong individuals (as in Fregoli delusion). On this view, we do not accurately bring to mind certain dream characters; we try to identify them and make mistakes. This would be an alternative way of accommodating the evidence.

In the 1940s and 1950s, a survey found that a majority of Americans thought that their own dreams, and dreams more generally, occurred in black and white. Crucially, people thought both prior to and after this period that dreams occur in colour. The period coincided with the advent of black and white television. According to Schwitzgebel, the most reasonable conclusion to draw is that dreams are more like imagining written fiction, sketchy and without any colour; I can read a novel without summoning any definite imagery to mind (Schwitzgebel, 2002: p.656). Perhaps this is akin to reading a novel quickly and forming vague and indefinite imagery. Ichikawa also argues that dreaming involves only images and is indeterminate with respect to colour, because images can be indeterminate in colour, whereas to hallucinate would require some colour determinacy, whether black and white or full colour. Crucially, even outside of Schwitzgebel’s findings, people have raised the question whether we dream in colour or black and white – something needs to explain the very possibility of such a dispute; “the imagination model may provide the best explanation for disagreement about colour sensation in dreams” (Ichikawa, 2009: p.109). McGinn also believes that dreams can be indeterminate with respect to colour, and that they can be coloured in during the waking report, which can be affected by the current media.

6. References and Further Reading

  • Adams, R. (1985) “Involuntary Sins,” The Philosophical Review, Vol. 94, No. 1 (Jan., 1985), pp. 3 – 31.
  • Antrobus, J. S., Antrobus, J. S. & Fisher, C. (1965) “Discrimination of Dreaming and Nondreaming Sleep,” Archives of General Psychiatry 12: pp. 395 – 401.
  • Antrobus, J. (2000) “How Does the Dreaming Brain Explain the Dreaming Mind?” Behavioral and Brain Sciences, 23 (6): pp. 904 – 907.
  • Arkin, A., Antrobus, J. & Ellman, S. (1978) The Mind in Sleep: Psychology and Physiology New Jersey: Lawrence Erlbaum.
  • St. Augustine (398) Confessions in Great Books of the Western World | 16 translated by R. S. Pine-Coffin, Chicago: Britannica, 1994.
    • In this classic of the historical canon, Augustine discusses dreams and concludes that we are not immoral in our dreams, though it appears as though we sin.
  • Ayer, A. (1960) “Professor Malcolm on Dreams,” The Journal of Philosophy, Vol. 57, No. 16 (Aug. 4, 1960), pp. 517 - 535.
    • Ayer’s first reply to Malcolm’s thesis against the received view – part of a heated back and forth exchange that borders on the ad hominem.
  • Ayer, A. (1961) “Rejoinder to Professor Malcolm,” The Journal of Philosophy, Vol. 58, No. 11 (May 25, 1961), pp. 297 - 299.
  • Baghdoyan, H. A., Rodrigo-Angulo, M. L., McCarley, R. W. & Hobson, J. A. (1987) “A Neuroanatomical Gradient in the Pontine Tegmentum for the Cholinoceptive Induction of Desynchronized Sleep Signs,” Brain Research 414: pp. 245 – 61.
  • Bakeland, F. (1971) “Effects of Pre-sleep Procedures and Cognitive Style on Dream Content,” Perceptual and Motor Skills 32:63–69.
  • Bakeland, F., Resch, R. & Katz, D. D. (1968) “Pre-sleep Mentation and Dream Reports,” Archives of General Psychiatry 19: pp. 300 – 11.
  • Ballantyne, N. & Evans, E. (2010) “Sosa’s Dream,” Philos Stud (2010) 148: pp. 249 – 252.
  • Barrett, D. (1992) “Just How Lucid are Lucid Dreams?” Dreaming 2: pp. 221 – 28.
  • Berger, R. J. (1967) “When is a Dream is a Dream is a Dream?” Experimental Neurology (Supplement) 4:15–27.
  • Blackmore, S. (1991) “Lucid Dreaming: Awake in your Sleep?” Skeptical Inquirer, 15, pp. 362 – 370.
  • Blackmore, S. (2004) Consciousness: An Introduction Oxford: OUP.
  • Blackmore, S. (2005) Consciousness: A Very Short Introduction Oxford: OUP.
    • Chapter 7, “Altered States of Consciousness,” looks at dreams and sleep and considers a “retro-selection theory of dreams” in reference to Dennett’s model of unconscious dreaming.
  • Blaustein, J. (Ed.) Handbook of Neurochemistry and Molecular Neurobiology, 3rd Edition: Behavioral Neurochemistry and Neuroendocrinology New York: Springer Science, 2007.
    • Contains useful, but advanced, information on the neurochemistry involved during processes of sleep and waking.
  • Brown, J. (2009) “Sosa on Skepticism,” Philosophical Studies.
  • Canfield, J. (1961) “Judgements in Sleep,” The Philosophical Review, Vol. 70, No. 2 (Apr., 1961), pp. 224 - 230.
  • Candlish, S. & Wrisley, G. (2012) “Private Language,” Stanford Encyclopedia of Philosophy.
  • Chappell, V. (1963) “The Concept of Dreaming,” Philosophical Quarterly, 13 (July): pp. 193 - 213.
  • Child, W. (2007) “Dreaming, Calculating, Thinking: Wittgenstein and Anti-Realism about the Past” The Philosophical Quarterly, Vol. 57, No. 227 (Apr., 2007), pp. 252-272.
  • Child, W. (2009) “Wittgenstein, Dreaming and Anti-Realism: A Reply to Richard Scheer,” Philosophical Investigations 32:4 October 2009, pp. 229 - 337.
  • Cicogna, P. & Bosinelli, M. (2001) “Consciousness during Dreams,” Consciousness and Cognition 10, pp. 26 – 41.
  • Cioffi, F. (2009) “Making the Unconscious Conscious: Wittgenstein versus Freud,” Philosophia, 37: pp. 565 – 588.
  • Cohen, D., “Sweet Dreams,” New Scientist, Issue 2390, p. 50.
  • Dawkins, R. (1998) Unweaving the Rainbow: Science, Delusion and the Appetite for Wonder London: Penguin, p. 158.
    • Dawkins briefly considers the phenomenon of purported dream prophecy and the relationship between coincidence and reporting, something he also discusses in The Magic of Reality (2011).
  • Dennett, D. (1976) “Are Dreams Experiences?” The Philosophical Review, Vol. 85, No. 2 (Apr., 1976), pp. 151-171.
    • This is the seminal paper by Dennett in which he advances the claim that dreams might not be experiences that occur during sleep.
  • Dennett, D. (1979) “The Onus Re Experiences: A Reply to Emmett,” Philosophical Studies 35: pp. 315 - 318.
  • Dennett, D. (1991) Consciousness Explained London: Penguin.
  • Descartes, R. (1641) Meditations on First Philosophy in Great Books of the Western World | 28 Chicago: Britannica, 1994.
  • Domhoff, G. & Kamiya, J. (1964) “Problems in Dream Content Study with Objective Indicators: I. A Comparison of Home and Laboratory Dream Reports,” Archives of General Psychiatry 11: pp. 519 – 24.
  • Doricchi, F. et al (2007) “The “Ways” We Look at Dreams: Evidence from Unilateral Spatial Neglect (with an evolutionary account of dream bizarreness)” Exp Brain Res (2007) 178: pp. 450 – 461.
  • Driver, J. (2007) “Dream Immorality,” Philosophy, Vol. 82, No. 319 (Jan., 2007), pp. 5 – 22.
  • Emmett, K. (1978) “Oneiric Experiences,” Philosophical Studies, 34 (1978): pp. 445 - 450.
  • Empson, J. (1989) Sleep and Dreaming Hempstead: Harvester Wheatsheaf.
  • Farthing, W. (1992) The Psychology of Consciousness New York: Prentice Hall.
  • Flanagan, O. (1995) “Deconstructing Dreams: The Spandrels of Sleep,” Journal of Philosophy, 92, no. 1 (1995): pp. 5 – 27.
  • Flanagan, O. (2000) Dreaming Souls: Sleep, Dreams and the Evolution of the Conscious Mind Oxford: OUP.
    • This rare book-length treatment of philosophical issues about dreaming is also devoted to arguing that dreams are by-products of sleep.
  • Flowers, L. & Delaney, G. (2003) “Dream Therapy,” in Encyclopedia of Neurological Sciences, Elsevier Science USA: pp. 40 - 44.
  • Foulkes, D. (1999) Children’s Dreaming and the Development of Consciousness Cambridge, MA: Harvard University Press.
  • Freud, S. (1900) The Interpretation of Dreams in Great Books of the Western World | 54 Chicago: Britannica, 1994.
  • Gendler, T. (2011) “Imagination,” Stanford Encyclopedia of Philosophy.
  • Gerrans, P. (2012) “Dream Experience and a Revisionist Account of Delusions of Misidentification,” Consciousness and Cognition, 21 (2012) pp. 217 – 227.
    • This paper is primarily about delusions that involve misidentifying individuals, such as Fregoli delusion, and alleges that dreams contain similar delusional features.
  • Ghosh, P. (2010) “Dream Recording Device ‘Possible’ Researcher Claims,” BBC News.
  • Gould, S. & Lewontin, R. (1979) “The Spandrels of San Marco and the Panglossian Paradigm: a Critique of the Adaptationist Programme” Proceedings of the Royal Society of London, B 205, pp. 581 – 598.
  • Green, C. (1968) Lucid Dreams Oxford: Institute of Psychophysical Research.
  • Green, C. & McCreery, C. (1994) Lucid Dreaming: The Paradox of Consciousness During Sleep Philadelphia: Routledge, 2001.
  • Greenberg, M. & Farah, M. (1986) “The Laterality of Dreaming,” Brain and Cognition, 5, pp. 307 – 321.
  • Gunderson, K. (2000) “The Dramaturgy of Dreams in Pleistocene Minds and Our Own,” Behavioral and Brain Sciences 23(6), pp. 946 – 947.
  • Hacking, I. (1975) “Norman Malcolm’s Dreams,” in Why Does Language Matter to Philosophy? Cambridge: Cambridge University Press.
  • Hacking, I. (2001) “Dreams in Place,” The Journal of Aesthetics and Art Criticism, Vol. 59, No. 3 (Summer, 2001): pp. 245 – 260.
  • Hill, J. (2004a) “The Philosophy of Sleep: The Views of Descartes, Locke and Leibniz,” Richmond Journal of Philosophy, Spring.
  • Hill, J. (2004b) “Descartes’ Dreaming Argument and Why We Might Be Skeptical of It,” Richmond Journal of Philosophy, Winter.
  • Hill, J. (2006) “Meditating with Descartes,” Richmond Journal of Philosophy, Spring.
  • Hobbes, T. (1651) Leviathan in Great Books of the Western World | 21 Chicago: Britannica, 1994.
  • Hobson, J. (1988) The Dreaming Brain New York: Basic Books.
    • J. Allan Hobson is an authoritative source on the psychology of sleep and dreaming.
  • Hobson, J. (1989) Sleep USA: Scientific American Library Press, 1995.
    • A rich textbook on the history of the science of sleep.
  • Hobson, J. (1994) The Chemistry of Conscious States: How the Brain Changes Its Mind New York: Little, Brown & Company.
  • Hobson, J. (2001) Dream Drugstore: Chemically Altered States of Consciousness Cambridge, MA: MIT Press.
  • Hobson, J. (2005) Dreaming: A Very Short Introduction Oxford: OUP.
  • Hobson, J. (2009) “The Neurobiology of Consciousness: Lucid Dreaming Wakes Up,” International Journal of Dream Research, Volume 2, No. 2 (2009)
  • Hobson, J. (2012) Dream Life: An Experimental Memoir Cambridge, MA: MIT Press.
  • Hodges, M. & Carter, W. (1969) “Nelson on Dreaming a Pain,” Philosophical Studies, 20 (April): pp. 43 - 46.
  • Hopkins, J. (1991) “The Interpretation of Dreams,” In Neu, J. (ed.), The Cambridge Companion to Freud Cambridge: Cambridge University Press.
  • Horton, C. et al (2009) “The Self and Dreams During a Period of Transition,” Consciousness and Cognition, 18 (2009) pp. 710 – 717.
  • Hunter, J. (1971) “Some Questions about Dreaming,” Mind, 80 (January): pp. 70 - 92.
  • Ichikawa, J. (2008) “Skepticism and the Imagination Model of Dreaming” The Philosophical Quarterly, Vol. 58, No. 232.
  • Ichikawa, J. (2009) “Dreaming and Imagination” Mind & Language, Vol. 24 No. 1 February 2009, pp. 103 - 121.
  • Jablonka, E. & Lamb, M. (2005) Evolution in Four Dimensions: Genetic, Epigenetic, Behavioral, and Symbolic Variation in the History of Life Cambridge, MA: MIT Press.
    • This is the go-to introduction for epigenetics (though the content of the book does not include the subject of dreaming).
  • Jeshion, R. (2010) New Essays On Singular Thought Oxford: OUP.
    • This is a good guide to the notion of singular thought (though the content of the book does not include the subject of dreaming).
  • Johnson, R. (2009) Inner Work: Using Dreams and Active Imagination for Personal Growth Harper & Row.
    • A manual for applying the principles of Jungian analysis, working specifically with the imagination and dreaming.
  • Jouvet, M. (1993) The Paradox of Sleep Cambridge, MA: MIT Press, 1999.
  • Jung, C. (Ed.) (1968) Man and His Symbols USA: Dell Publishing.
  • Jung, C. (1985) Dreams London: Ark.
    • A collection of Jung’s papers on dreams.
  • Kahan, T. & LaBerge, S. (2011) “Dreaming and Waking: Similarities and Differences Revisited,” Consciousness and Cognition, 20, pp. 494–514.
  • Kamitani, Y. et al (2008) “Visual Image Reconstruction from Human Brain Activity using a Combination of Multiscale Local Image Decoders,” Neuron 60, 915–929, Dec. 11.
    • In this controversial article Kamitani and colleagues outline some research into decoding (purely from brain scanning) what individuals can consciously see. The speculative idea is that eventually the method discussed might be applied to dreaming and scientists will thereby be able to “record” dreams.
  • Knoth, I. & Schredl, M. (2011) “Physical Pain, Mental Pain and Malaise in Dreams,” International Journal of Dream Research, Volume 4, No. 1.
  • Kramer, M. (1962) “Malcolm on Dreaming,” Mind, New Series, Vol. 71, No. 281 (Jan., 1962).
  • LaBerge, S. & Rheingold, H. (1990) Exploring the World of Lucid Dreaming New York: Ballantine Books.
  • LaBerge, S. (1990) “Lucid Dreaming: Psychophysiological Studies of Consciousness During REM Sleep,” in Bootzin, R., Kihlstrom, J., & Schacter, D. (Eds.) Sleep and Cognition  (pp. 109 – 126) Washington, D.C.: American Psychological Association.
  • LaBerge, S. (2000) “Lucid Dreaming: Evidence that REM Sleep Can Support Unimpaired Cognitive Function and a Methodology for Studying the Psychophysiology of Dreaming,” The Lucidity Institute Website.
  • LaBerge, S. & DeGracia, D.J. (2000) “Varieties of Lucid Dreaming Experience,” in Kunzendorf, R.G. & Wallace, B. (Eds.), Individual Differences in Conscious Experience (pp. 269 - 307) Amsterdam: John Benjamins.
  • Lakoff, G. (1993) “How Metaphor Structures Dreams: The Theory of Conceptual Metaphor Applied to Dream Analysis,” Dreaming, 3 (2): pp. 77 – 98.
  • Lansky, M. (eds) (1992) Essential Papers on Dreams New York: New York University Press.
  • Lavie, P. (1996) The Enchanted World of Sleep New Haven: Yale University Press.
  • Llinas, R. & Ribary, U. (1993) “Coherent 40-Hz Oscillation Characterizes Dream State in Humans,” Proceedings of the National Academy of Sciences USA 90: pp. 2078 – 81.
  • Locke, J. (1690) An Essay Concerning Human Understanding in Great Books of the Western World | 33 Chicago: Britannica, 1994.
  • Lockley, S. & Foster, R. (2012) Sleep: A Very Short Introduction Oxford: OUP.
  • Love, D. (2013) Are You Dreaming? Exploring Lucid Dreams: A Comprehensive Guide Enchanted Loom.
  • Lynch, G., Colgin, L. & Palmer, L. (2000) “Spandrels of the Night?” Behavioral and Brain Sciences, 23 (6), pp. 966 – 967.
  • MacDonald, M. (1953) “Sleeping and Waking,” Mind, New Series, Vol. 62, No. 246 (Apr., 1953), pp. 202 - 215.
  • Malcolm, N. (1956) “Dreaming and Skepticism,” The Philosophical Review, Vol. 65, No. 1 (Jan., 1956), pp. 14 - 37.
    • Malcolm’s first articulation of the argument that “dreaming”, as the received view understands the concept, is flawed.
  • Malcolm, N. (1959) Dreaming London: Routledge & Kegan Paul, 2nd Impression 1962.
    • Malcolm’s controversial book length treatment of the topic.
  • Malcolm, N. (1961) “Professor Ayer on Dreaming,” The Journal of Philosophy, Vol. 58, No. 11 (May), pp. 297 - 299.
  • Mann, W. (1983) “Dreams of Immorality,” Philosophy, Vol. 58, No. 225 (Jul., 1983), pp. 378 – 385.
  • Mathers, D. (2001) An Introduction to Meaning and Purpose in Analytical Psychology East Sussex: Routledge.
    • Contains a chapter on dreams and meaning from the perspective of analytical psychology.
  • Matthews, G. (1981) “On Being Immoral in a Dream,” Philosophy, Vol. 56, No. 215 (Jan., 1981), pp. 47 – 54.
  • Mavromatis, A. (1987) Hypnagogia: The Unique State of Consciousness Between Wakefulness and Sleep London: Routledge & Kegan Paul.
  • McFee, G. (1994) “The Surface Grammar of Dreaming,” Proceedings of the Aristotelian Society, New Series, Vol. 94 (1994), pp. 95 - 115.
  • McGinn, C. (2004) Mindsight: Image, Dream, Meaning USA: Harvard University Press.
    • The argument that dreaming is a type of imagining.
  • McGinn, C. (2005) The Power of Movies: How Screen and Mind Interact Toronto: Vintage Books, 2007.
    • McGinn argues that dreaming preconditions humans to be susceptible to film (including emotional reaction) because dreaming makes use of the imagination via fictional immersion – this is why we can easily become absorbed in stories and appreciate works of art.
  • Metzinger, T. (2003) Being No-One Cambridge, MA: MIT Press.
  • Metzinger, T. (2009) The Ego Tunnel: The Science of the Mind and the Myth of the Self New York: Basic Books.
    • Contains substantial sections on dreaming.
  • Möller, H. (1999) “Zhuangzi's ‘Dream of the Butterfly’: A Daoist Interpretation,” Philosophy East and West, Vol. 49, No. 4 (Oct., 1999), pp. 439 – 450.
  • Nagel, T. (1959) “Dreaming,” Analysis, Vol. 19, No. 5 (Apr., 1959), pp. 112 - 116.
  • Nelson, J. (1966) “Can One Tell That He Is Awake By Pinching Himself?” Philosophical Studies, Vol. XVII.
  • Newman, L. (2010) “Descartes’ Epistemology,” Stanford Encyclopedia of Philosophy.
  • Nielsen, T. (2012) “Variations in Dream Recall Frequency and Dream Theme Diversity by Age and Sex,” July 2012, Volume 3, Article 106, p. 1.
  • Nielsen, T. & Levin, R. (2007) “Nightmares: A New Neurocognitive Model,” Sleep Medicine Reviews (2007) 11: pp. 295 – 310.
  • Nir, Y. & Tononi, G. (2009) “Dreaming and the Brain: from Phenomenology to Neurophysiology,” Trends in Cognitive Sciences, Vol.14, No.2.
  • O’Shaughnessy, B. (2002) “Dreaming,” Inquiry, 45, pp. 399 – 432.
  • Occhionero, M. & Cicogna, P. (2011) “Autoscopic Phenomena and One’s Own Body Representation in Dreams,” Consciousness and Cognition, 20 (2011) pp. 1009 – 1015.
  • Oswald, I. & Evans, J. (1985) “On Serious Violence During Sleep-walking,” British Journal of Psychiatry, 147: pp. 688 – 91.
  • Pears, D. (1961) “Professor Norman Malcolm: Dreaming,” Mind, New Series, Vol. 70, No. 278 (Apr., 1961), pp. 145 -163.
  • Pesant, N. & Zadra, A. (2004) “Working with Dreams in Therapy: What do We Know and What Should we Do?” Clinical Psychology Review, 24: pp. 489–512
  • Putnam, H. (1962) “Dreaming and ‘Depth Grammar’,” in Butler (Eds.) Analytical Philosophy Oxford: Basil & Blackwell, 1962.
  • Rachlin, H. (1985) “Pain and Behavior,” Behavioral and Brain Sciences, 8: 1, pp. 43 – 83.
  • Revonsuo, A. (1995) “Consciousness, Dreams, and Virtual Realities,” Philosophical Psychology 8: pp. 35 – 58.
  • Revonsuo, A. (2000) “The Reinterpretation of Dreams: An Evolutionary Hypothesis of the Function of Dreaming,” Behavioral and Brain Sciences 23, pp. 793 - 1121.
    • An exposition and defence of the theory that dreaming is an escape rehearsal and so confers an adaptive advantage over creatures that do not dream, contra Flanagan.
  • Revonsuo, A. (2006) Inner Presence Cambridge, MA: MIT Press.
  • Rosenthal, D. (2002) “Explaining Consciousness” in Philosophy of Mind: Classical and Contemporary Readings, (Ed.) David Chalmers, Oxford: OUP.
    • In this paper, Rosenthal asks in passing whether dreams are conscious experiences that occur during sleep, before he goes on to highlight his higher-order-thought theory of consciousness. He also introduces important terminology in the philosophy of mind (creature consciousness and state consciousness).
  • Salzarulo, P. & Cipolli, C. (1974) “Spontaneously Recalled Verbal Material and its Linguistic Organization in Relation to Different Stages of Sleep,” Biological Psychology 2: pp. 47 – 57.
  • Schultz, H. & Salzarulo, P. (1993) “Methodological Approaches to Problems in Experimental Dream Research: An Introduction,” Journal of Sleep Research 2: pp. 1 – 3.
  • Schroeder, S. (1997) “The Concept of Dreaming: On Three Theses by Malcolm,” Philosophical Investigations 20:1 January 1997, pp. 15 – 38.
  • Schwitzgebel, E. (2002) “Why Did We Think We Dreamed in Black and White?” Stud. Hist. Phil. Sci. 33 (2002) pp. 649 – 660.
    • This is a more specific case of Schwitzgebel’s general attack on the reliability of introspection. This paper is particularly interesting in examining the relationship between dream experience itself and episodically remembering those experiences. Schwitzgebel treats dreams as a type of imagination.
  • Schwitzgebel, E. (2003) “Do People Still Report Dreaming in Black and White? An Attempt to Replicate a Questionnaire from 1942,” Perceptual and Motor Skills, 96: pp. 25 – 29.
  • Schwitzgebel, E. Huang, C. & Zhou, Y. (2006) “Do We Dream In Color? Cultural Variations and Skepticism,” Dreaming, Vol. 16, No. 1: pp. 36 - 42.
  • Seligman, M. & Yellen, A. (1987) “What is a Dream?” Behaviour Research and Therapy, Vol. 25, No. 1, pp. 1 - 24.
  • Shaffer, J. (1984) “Dreaming,” American Philosophical Quarterly, Vol. 21, No. 2 (Apr., 1984), pp. 135 - 146.
  • Sharma, K. (2001) “Dreamless Sleep and Some Related Philosophical Issues,” Philosophy East and West, Vol. 51, No. 2 (Apr., 2001), pp. 210 – 231.
  • Schredl, M. & Erlacher, D. (2004) “Lucid Dreaming Frequency and Personality,” Personality and Individual Differences, 37 (2004) pp. 1463 – 1473.
  • Snyder, T. & Gackenbach, J. (1988) “Individual Differences Associated with Lucid Dreaming” in Gackenbach, J. & LaBerge, S. (Eds.) Conscious Mind, Sleeping Brain (pp. 221-259) New York: Plenum.
  • Solms, M. (1997) The Neuropsychology of Dreams Mahwah: Lawrence Erlbaum Associates.
    • This book brings together research on the counter-intuitive idea of abnormalities in dreaming. Some individuals with brain lesions have reported significant changes in the phenomenology of their dreams.
  • Sosa, E. (2007) A Virtue Epistemology: Apt Belief and Reflective Knowledge Oxford: OUP.
    • Chapter One contains Sosa’s brief sketch of the imagination model of dreams, with the consequent attack on Descartes’ dream argument.
  • Squires, R. (1995) “Dream Time,” Proceedings of the Aristotelian Society, New Series, Vol. 95 (1995), pp. 83 - 91.
  • Stevens, A. (1990) On Jung London: Routledge.
  • Stevens, A. (1994) Jung: A Very Short Introduction New York: OUP
  • Stewart, C. (2002) “Erotic Dreams and Nightmares from Antiquity to the Present,” The Journal of the Royal Anthropological Institute, Vol. 8, No. 2 (Jun., 2002), pp. 279 – 309.
  • Stickgold, R., Rittenhouse, C. & Hobson, J. A. (1994) “Dream splicing: A new technique for assessing thematic coherence in subjective reports of mental activity” Consciousness and Cognition 3: pp. 114 – 28.
  • Storr, A. (1989) Freud: A Very Short Introduction Oxford: OUP, 2001.
  • Sutton, J. (2009) “Dreaming” In Calvo, P., & Symons, J. (Eds.), Routledge Companion to the Philosophy of Psychology, pp. 522 - 542.
  • Symons, D. (1993) “The Stuff That Dreams Aren’t Made Of: Why Wake-State and Dream-State Sensory Experiences Differ,” Cognition, 47 (1993), pp. 181 - 217.
  • Valli, K & Revonsuo, A. (2009) “The Threat Simulation Theory in the Light of Recent Empirical Evidence—A Review,” The American Journal of Psychology, 122: pp. 17 - 38.
  • Valli, K. (2011) “Dreaming in the Multilevel Framework,” Consciousness and Cognition, 20 (2011): pp. 1084 – 1090.
  • Whitmont, E. & Perera, S. (1989) Dreams, A Portal to the Source London: Routledge, 1999.
  • Windt, J. M. & Metzinger, T. (2007) “The Philosophy of Dreaming and Self-consciousness: What Happens to the Experiential Subject During the Dream State?” In Barrett, D. & McNamara, P. (Eds.), The New Science of Dreaming, Vol 3: Cultural and Theoretical Perspectives, 193–247. Westport, CT and London: Praeger Perspectives/Greenwood Press.
  • Windt, J.M. (2010) “The Immersive Spatiotemporal Hallucination Model of Dreaming,” Phenomenology and the Cognitive Sciences, 9, pp. 295 - 316.
  • Windt, J. M., & Noreika, V. (2011). “How to Integrate Dreaming into a General Theory of Consciousness—A Critical Review of Existing Positions and Suggestions for Future Research,” Consciousness and Cognition, 20(4), pp. 1091 – 1107.
  • Wittgenstein, L. (1953) Philosophical Investigations, In Great Books of the Western World | 55 USA: Britannica, 1994.
  • Wollheim, R. (1991) Freud 2nd Edition London: Fontana.
  • Wollheim, R. & Hopkins, J. (1982) Philosophical Essays on Freud Cambridge: Cambridge University Press.
  • Wolman, R. & Kozmova, M. (2007) “Last Night I Had the Strangest Dream: Varieties of Rational Thought Processes in Dream Reports,” Consciousness and Cognition, 16 (2007): pp. 838 – 849.
  • Yost, R. & Kalish, D. (1955) “Miss MacDonald on Sleeping and Waking,” The Philosophical Quarterly, Vol. 5, No. 19 (Apr., 1955), pp. 109 - 124.
  • Yost, R. (1959) “Professor Malcolm on Dreaming and Skepticism-I,” The Philosophical Quarterly, Vol. 9, No. 35 (Apr.), pp. 142 - 151.
  • Zadra, A. (1998) “The Nature and Prevalence of Pain in Dreams,” Pain Research and Management, 3, pp. 155 -161.
  • Zadra, A. et al (2006) “Evolutionary Function of Dreams: A Test of the Threat Simulation Theory in Recurrent Dreams,” Consciousness and Cognition, 15 (2006) pp. 450 – 463.

 

Author Information

Ben Springett
Email: bs1844@my.bristol.ac.uk
University of Bristol
United Kingdom

Internalism and Externalism in the Philosophy of Mind and Language

This article addresses how our beliefs, our intentions, and other contents of our attitudes are individuated, that is, what makes those contents what they are. Content externalism (henceforth externalism) is the position that our contents depend in a constitutive manner on items in the external world, that they can be individuated by our causal interaction with the natural and social world. In the 20th century, Hilary Putnam, Tyler Burge and others offered Twin Earth thought experiments to argue for externalism. Content internalism (henceforth internalism) is the position that our contents depend only on properties of our bodies, such as our brains. Internalists typically hold that our contents are narrow, insofar as they locally supervene on the properties of our bodies or brains.

Although externalism is the more popular position, internalists such as David Chalmers, Gabriel Segal, and others have developed versions of narrow content that may not be vulnerable to typical externalist objections. This article explains the variety of positions on the issues and explores the arguments for and against the main positions. For example, externalism incurs problems of privileged access to our contents as well as problems about mental causation and psychological explanation, but externalists have offered responses to these objections.

Table of Contents

  1. Hilary Putnam and Natural Kind Externalism
  2. Tyler Burge and Social Externalism
  3. Initial Problems with the Twin Earth Thought Experiments
  4. Two Problems for Content Externalism
    1. Privileged Access
    2. Mental Causation and Psychological Explanation
  5. Different Kinds of Content Externalism
  6. Content Internalism and Narrow Content
    1. Jerry Fodor and Phenomenological Content
    2. Brian Loar and David Chalmers on Conceptual Roles and Two-Dimensional Semantics
    3. Radical Internalism
  7. Conclusion
  8. References and Further Reading

1. Hilary Putnam and Natural Kind Externalism

In “The Meaning of ‘Meaning’” and elsewhere, Putnam argues for what might be called natural kind externalism. Natural kind externalism is the position that our natural kind terms (such as “water” and “gold”) mean what they do because we interact causally with the natural kinds that these terms are about. Interacting with natural kinds is a necessary (but not sufficient) condition for meaning (Putnam 1975, 246; Putnam 1981, 66; Sawyer 1998, 529; Nuccetelli 2003, 172; Norris 2003, 153; Korman 2006, 507).

Putnam asserts that the traditional Fregean theory of meaning leaves a subject “as much in the dark as it ever was” (Putnam 1975, 215) and that it, and any descriptivist heirs to it, are mistaken in all their forms. To show this, Putnam insists that Frege accounts for the sense, intension, or meaning of our terms by making two assumptions:

  1. The meaning of our terms (for example, natural kind terms) is constituted by our being in a certain psychological state.
  2. The meaning of such terms determines their extension (Putnam 1975, 219).

Putnam concedes that assumptions one and two may seem appealing, but he argues that they are “…not jointly satisfied by any notion, let alone the traditional notion of meaning.” Putnam suggests abandoning the first while retaining a modified form of the second.

Putnam imagines that somewhere there is a Twin Earth. Earthlings and Twin Earthlings are exact duplicates (down to the last molecule) and have the same behavioral histories. Even so, there is one difference: the substance that we call water does not, on Twin Earth, consist of H2O, but of XYZ. This difference has not affected how the inhabitants of either planet use their respective liquids, but the liquids have different “hidden structures” (Putnam 1975, 235, 241; Putnam 1986, 288; Putnam 1988, 36). Putnam then imagines that it is the year 1750, before the development of chemistry on either planet. At this time, no experts knew of the hidden structure of water or of its twin, twater. Still, Putnam says, when the inhabitants of Earth use the term “water,” they refer to the liquid made of H2O; when the inhabitants of Twin Earth do the same, they refer to their liquid made of XYZ (Putnam 1975, 270).

The reason this is so, according to Putnam, is that natural kind terms like “water” have an “unnoticed indexical component.” Water, he says, receives its meaning by our originally pointing at water, such that water is always water “around here.” Putnam admits this same-liquid relation “…is a theoretical relation, and may take an indefinite amount of investigation to determine” (Putnam 1975, 234). Given this, earthlings and their twin counterparts use their natural kind terms (for example, “water,” “gold,” “cat,” and so forth) to refer to the natural kinds of their respective worlds (for example, on Earth, “water” refers to H2O but on Twin Earth it refers to XYZ). In other words, on each planet, twins refer to the “hidden structure” of the liquid of that planet. Thus, Putnam concludes that the first Fregean assumption is false, for while twins across planets are in the same psychological state, when they say “water,” they mean different things. However, understood correctly, the second assumption is true: when the twins say “water,” their term refers to the natural kind (that is, the kind with the proper structural and chemical properties) of their world, and only to that kind.

Natural kind externalism, then, is the position that the meanings of natural kind terms are not determined by psychological states, but are determined by causal interactions with the natural kind itself (that is, a kind with certain structural properties). “Cut the pie any way you like,” Putnam says, “Meaning just ain’t in the head” (Putnam 1975, 227). However, soon after Putnam published “The Meaning of ‘Meaning,’” Colin McGinn and Tyler Burge noted that what is true of linguistic meaning extends to the contents of our propositional attitudes. As Burge says, the twins on Earth and Twin Earth “…are in no sense duplicates in their thoughts,” and this is revealed by how we ascribe attitudes to our fellows (Burge 1982b, 102; Putnam 1996, viii). Propositional attitude contents, it seems, are also not in our heads.

2. Tyler Burge and Social Externalism

In “Individualism and the Mental,” “Two Thought Experiments Reviewed,” “Other Bodies,” and elsewhere Tyler Burge argues for social externalism (Burge, 1979, 1982a, 1982b). Social externalism is the position that our attitude contents (for example, beliefs, intentions, and so forth) depend essentially on the norms of our social environments.

To argue for social externalism, Burge introduces a subject named Bert. Bert has a number of beliefs about arthritis, many of which are true, but he also falsely believes that he has arthritis in his thigh (falsely, because in our community “arthritis” applies only to inflammations of the joints). While visiting his doctor, Bert says, “I have arthritis in my thigh.” After hearing Bert’s complaint, the doctor informs his patient that he is not suffering from arthritis. Burge then imagines a counterfactual subject – let us call him Ernie. Ernie is physiologically and behaviorally the same as Bert (non-intentionally described), but was raised in a different linguistic community. In Ernie’s community, “arthritis” refers to a disease that occurs in the joints and muscles. When Ernie visits his doctor and mentions arthritis, he is not corrected.

Burge notes that although Ernie is physically and behaviorally the same as Bert, he “…lacks some – perhaps all – of the attitudes commonly attributed with content clauses containing the word ‘arthritis…’” Given that this use of “arthritis” belongs to our linguistic community and not to Ernie’s, “it is hard to see how the patient could have picked up the notion of ‘arthritis’” (Burge 1979, 79; Burge 1982a, 286; Burge 1982b, 109). Although Bert and Ernie have always been physically and behaviorally the same, they differ in their contents. However, these differences “…stem from differences outside the patient.” As a result, Burge concludes: “it is metaphysically or constitutively necessary that in order to have a thought about arthritis one must be in relations to others who are in a better position to specify the disease” (Burge 1988, 350; Burge 2003b, 683; Burge 2007, 154).

Unlike Putnam, Burge does not restrict his externalism to natural kinds, or even to obvious social kinds. Even when our fellows incompletely understand their terms, we typically “take discourse literally” (Burge 1979, 88). In such situations, speakers have still “…assumed responsibility for communal conventions governing language symbols,” and, given this, maintain the contents of their home communities (Burge 1979, 114). Burge states that this argument

…does not depend, for example, on the kind of word “arthritis” is. We could have used an artifact term, an ordinary natural kind word, a color adjective, a social role term, a term for a historical style, an abstract noun, an action verb, a physical movement verb, or any of various other sorts of words (Burge 1979, 79).

In other papers, Burge launches similar thought experiments to extend his externalism to cases where we have formed odd theories about artifacts (for example, where we believe sofas to be religious artifacts), to the contents of our percepts (for example, to our perceptions of, say, cracks or shadows on a sidewalk), and even to our memory contents (Burge 1985, 1986, 1998). Other externalists—William Lycan, Michael Tye, and Jonathan Ellis among them—rely on similar thought experiments to extend externalism to the contents of our phenomenal states, or seemings (Lycan 2001; Tye 2008; Ellis 2010). Externalism, then, can seem to generalize to all contents.

3. Initial Problems with the Twin Earth Thought Experiments

Initially, some philosophers found Putnam’s Twin Earth thought experiments misleading. In an early article, Eddy Zemach pointed out that Putnam had only addressed the meaning of water in 1750, “…before the development of modern chemistry” (Zemach 1976, 117). Zemach questions whether twins pointing to water “around here” would isolate the right liquids (that is, H2O on Earth and XYZ on Twin Earth). He argues that such pointing would not have done this, especially so long ago; Putnam simply stipulates that “water” on Earth meant H2O and on Twin Earth meant XYZ, and that “water” means the same now (Zemach 1976, 123). D. H. Mellor adds that, in point of fact, “specimens are causally downwind of the usage they are supposed to constrain. They are chosen to fit botanical and genetic knowledge, not the other way around” (Mellor 1977, 305; Jackson 1998, 216; Jackson 2003, 61).

In response to this criticism, externalists have since developed more nuanced formulations of the causal theory of reference, downplaying supposed acts of semantic baptism (for example, “this is water”). These adjustments can account for how the meanings of natural kind terms change, or can diminish the role of referents altogether (Sainsbury 2007). Zemach, Mellor, Jackson, and others infer that Putnam should have cleaved to the first Fregean principle (that our being in a certain psychological state constitutes meaning) as well as to the second. If both Fregean principles are retained, earthlings and their twins mean the same thing after all.

Similarly, Kent Bach, Tim Crane, Gabriel Segal, and others argue that since Bert incompletely understands “arthritis,” he does not have the contents that his linguistic community does (that is, contents on which the disease occurs only in the joints). Given his misunderstanding of “arthritis,” Bert would make odd inferences about the disease, but “these very inferences constitute the evidence for attributing to the patient some notion other than arthritis” (Bach 1987, 267). Crane adds that since Bert and Ernie would have “…all the same dispositions to behavior in the actual and counterfactual situations,” they have the same contents (Crane 1991, 19; Segal 2000, 81). Moreover, Burge never clarifies how much understanding Bert needs in order to have the concept of arthritis (Burge 1986, 702; Burge 2003a, 276). Burge admits, “when a person incompletely understands an attitude, he has some other content that more or less captures his understanding” (Burge 1979, 95). If Bert does have some other content – perhaps “tharthritis” – there is little motive to insist that he also has arthritis contents. Bert and Ernie, it seems, have the same contents.

According to these criticisms, the Twin Earth thought experiments seem to show that twins across planets, or twins raised in different linguistic communities, have differing contents only by importing questionable assumptions: that the causal theory of reference is true, that we refer to hidden structural properties, or that certain norms for ascribing contents are correct. Without these assumptions, the Twin Earth thought experiments can just as easily evoke internalist intuitions. Indeed, when experimental philosophers tested such thought experiments, they elicited different intuitions in different subjects. At the very least, these philosophers say, “intuitions about cases continue to underdetermine the selection of the correct theory of reference” (Machery, Mallon, Nichols, and Stich 2009, 341).

4. Two Problems for Content Externalism

a. Privileged Access

René Descartes famously insisted that he could not be certain he had a body: “…if I mean only to talk of my sensation, or my consciously seeming to see or to walk, it becomes quite true because my assertion refers only to my mind.” On this view, privileged access (that is, a priori knowledge) to our contents would seem to be an obvious truth.

However, as Paul Boghossian has argued, externalism may not be able to honor this apparent truism. Boghossian imagines that we earthlings have been slowly switched, unbeknownst to us, between Earth and Twin Earth. If externalism is true, he proposes, we could not discern whether our contents refer to water or twater, to arthritis or tharthritis (Boghossian 1989, 172; Boghossian 1994, 39). As Burge sees the challenge, the problem is that “…[a] person would have different thoughts under the switches, but the person would not be able to compare the situations and note when and where the differences occurred” (Burge 1988, 653; Burge 1998, 354; Burge 2003a, 278).

Boghossian argues that since externalism seems to imply that we cannot discern whether our contents refer to water or twater, arthritis or tharthritis, then we do not have privileged access to the proper contents. Either the approach for understanding privileged access is in need of revision (for example, restricting the scope of our claims to certain a priori truths), or externalism is itself false.

In response, Burge and others claim that the “enabling conditions” (for example, having lived on Earth or Twin Earth) that allow us to have water or twater contents, or arthritis or tharthritis contents, need not themselves be known. As long as the enabling conditions are satisfied, first-order contents can be known (for example, “that is water” or “I have arthritis”); this first-order knowledge gives rise to our second-order reflective contents about them. Indeed, our first-order thoughts (that is, about objects) completely determine the second-order ones (Burge 1988, 659). However, as Max de Gaynesford and others have argued, when considering switching cases, one cannot assume that such enabling conditions are satisfied. Since we may be incorrect about such enabling conditions, and may be incorrect about our first-order attitude contents as well, we may be incorrect about our reflective contents too (de Gaynesford 1996, 391).

Kevin Falvey and Joseph Owens have responded that although we cannot discern among first-order contents, doing so is unnecessary. Although the details are “large and complex,” our contents are always determined by the environment where we happen to be, eliminating the possibility of error (Falvey and Owens 1994, 118). By contrast, Burge and others admit that contents are individuated gradually by “how the individual is committed” to objects, and that this commitment is constituted by other factors (Burge 1988, 652; Burge 1998, 352; Burge 2003a, 252). If we were suddenly switched from Earth to Twin Earth, or back again, our contents would not typically change immediately. Externalism, Burge concedes, is consistent with our having “wildly mistaken beliefs” about our enabling conditions (for example, about where we are) and so allows for errors about our first-order contents, as well as about our reflections upon them (Burge 2003c, 344). Because there is such a potential to be mistaken about enabling conditions, perhaps we do need to know them after all.

Other externalists have claimed that switching between Earth and Twin Earth is not a “relevant alternative.” Since such a switch is not relevant, they say, our enabling conditions (that is, conditions over time) are what we take them to be, such that we have privileged access to our first-order contents, and to our reflections on these (Warfield 1997, 283; Sawyer 1999, 371). However, as Peter Ludlow has argued, externalists cannot so easily declare switching cases to be irrelevant (Ludlow 1995, 48; Ludlow 1997, 286). As many externalists concede, relevant switching cases are easy to devise. Moreover, externalists cannot both rely on fictional Twin Earth thought experiments to generate externalist intuitions and object to equally fictional switching cases launched to evoke internalist intuitions. Either all such cases of logical possibility are relevant or none of them are (McCulloch 1995, 174).

Michael McKinsey, Jessica Brown, and others offer another set of privileged access objections to externalism. According to externalism, they say, we do have privileged access to our contents (for example, to those expressed by water or twater, or arthritis or tharthritis). Given such access, we should be able to infer, by conceptual implication, that we are living in a particular environment (for example, on a planet with water and arthritis, or on a planet with twater and tharthritis, and so forth). Externalism, then, should afford us a priori knowledge of our actual environments (McKinsey 1991, 15; Brown 1995, 192; Boghossian 1998, 208). Since privileged access to our contents does not afford us a priori knowledge of our environments (the latter always remaining a posteriori), externalism implies that we possess knowledge we do not have.
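
The McKinsey argument is often given a schematic reconstruction along the following lines. This is an illustrative sketch, not McKinsey’s own notation: read K_a(p) as “p is knowable a priori,” T as the proposition that I am thinking that water is wet, and E as the proposition that water exists in my environment:

1. \; K_a(T) \quad \text{(privileged access)}
2. \; K_a(T \rightarrow E) \quad \text{(conceptual implication, given externalism)}
3. \; \therefore K_a(E) \quad \text{(closure of a priori knowledge under known implication)}

Since (3) is unacceptable – whether water exists in one’s environment is an a posteriori matter – one of the premises must be rejected, and McKinsey argues that the externalist premise is the culprit.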

Externalists such as Burge, Brian McLaughlin and Michael Tye, and Anthony Brueckner have responded that privileged access to our contents does not afford any a priori knowledge of our environments. Since we can be mistaken about natural kinds in our actual environment (for example, we may even discover that there are no kinds), the facts must be discovered empirically. Indeed, as long as objects exist somewhere, we can form contents about them just by hearing others theorize about them (Burge 1982b, 114; Burge 2003c, 344; Brueckner 1992, 641; Gallois and O’Leary-Hawthorne 1996, 7; Ball 2007, 469). Although external items individuate contents, our privileged access to those contents does not afford us a priori knowledge of our environments (such details always remaining a posteriori). However, if we can form contents independent of the facts of our environment (for example, if we can have “x,” “y,” or “z” contents about x, y, or z objects when these objects have never existed around us), then externalism seems to be false in principle. Externalists must explain how this possibility does not compromise their position, or they must abandon externalism (Boghossian 1998, 210; Segal 2000, 32, 55; Besson 2012, 420).

Some externalists, Sarah Sawyer and Bill Brewer among them, have responded to this privileged access objection by suggesting that, assuming externalism is true, we can only think about objects through the causal interaction we have with them. Since this is so, privileged access to our contents allows us to infer, by conceptual implication, that those very objects exist in our actual environments. In other words, externalism affords us a priori knowledge of our environments after all (Sawyer 1998, 529-531; Brewer 2000, 416). Perhaps this response is consistent, but it requires that privileged access to our contents by itself deliver a priori knowledge of our environments.

b. Mental Causation and Psychological Explanation

Jerry Fodor, Colin McGinn, and others offer a second objection to externalism, based in scientific psychology. When explaining behavior, psychologists typically cite local contents as causes of our behavior. Psychologists typically ascribe to twins the same mental causes and describe twins as performing the same actions (Fodor 1987, 30; McGinn 1982, 76-77; Jacob 1992, 208, 211). However, according to externalism, contents are individuated in relation to different distal objects. Since twins across planets relate to different objects, they have different mental causes, and so should receive different explanations. Yet as Fodor and McGinn note, since the only differences between twins across planets are the relations they bear to such objects, those objects make no difference to the twins’ causal powers. As Jacob says, the difference between twin contents seems to drop out of the explanation of behavior as irrelevant (Jacob 1992, 211). Externalism, then, seems to violate principles of scientific psychology, rendering mental causation and psychological explanation mysterious (Fodor 1987, 39; McGinn 1982, 77).

Externalists such as Burge, Robert van Gulick, and Robert Wilson have responded that although local causes are important to mental causation and psychological explanation, there is still good reason to individuate contents by the relations twins bear to their distal (and different) objects (Burge 1989b, 309; Burge 1995, 231; van Gulick 1990, 156; Wilson 1995, 141). Externalists have also noted that any relations we bear to such objects may yet be causally relevant to behavior, albeit indirectly (Adams, Drebushenko, Fuller, and Stecker 1990, 221-223; Peacocke 1995, 224; Williamson 2000, 61-64). However, as Fodor, Chalmers, and others have argued, our relations to such distal and different objects are typically not causal but only conceptual (Fodor 1991b, 21; Chalmers 2002a, 621). Assuming this is so, psychologists do not yet have any reason to prefer externalist recommendations for the individuation of content, since these recommendations do not distinguish the causal from the conceptual aspects of our contents.

5. Different Kinds of Content Externalism

Historically, philosophers as diverse as Georg Wilhelm Friedrich Hegel, Martin Heidegger, and Ludwig Wittgenstein have all held versions of externalism. Putnam and Burge utilize Twin Earth thought experiments to argue for natural kind and social externalism, respectively, which they extend to externalism about artifacts, percepts, memories, and phenomenal properties. More recently, Gareth Evans and John McDowell, Ruth Millikan, Donald Davidson, and Andy Clark and David Chalmers have developed four further types of externalism, which do not primarily rely upon Twin Earth thought experiments, and which differ in their motivations, scopes, and strengths.

Gareth Evans and John McDowell adapt neo-Fregean semantics to launch a version of externalism for demonstrative thoughts (for example, “this is my cat”). In particular, we bear a “continuing informational link” to objects (for example, cats, tables, and so forth) whereby we can keep track of them, a link manifested in a set of abilities (Evans 1982, 174). Without this continuing informational link and set of abilities, Evans says, we can only have “mock thoughts” about such objects. Such failed demonstrative thoughts appear to have contents, but in fact do not (Evans 1981, 299, 302; McDowell 1986, 146; McCulloch 1989, 211-215). Evans and McDowell conclude that when we express demonstrative thoughts, our contents are dependent on the very objects they represent. In response, some philosophers have questioned whether there is any real difference between our successful and failed demonstrative thoughts (that is, whether anything renders the latter mere mock thoughts). Demonstrative thoughts, successful or not, seem to have the same contents, and we seem to behave the same regardless (Segal 1989; Noonan 1991).

Other externalists – notably David Papineau and Ruth Garrett Millikan – employ teleosemantics to argue that contents are determined by proper causes, or those causes that best aid in our survival. Millikan notes that when thoughts are “developed in a normal way,” they are “a kind of cognitive response to” certain substances, and it is this process that determines content (Millikan 2004, 234). Moreover, we can “make the same perceptual judgment from different perspectives, using different sensory modalities, under different mediating circumstances” (Millikan 2000, 103). Millikan claims that improperly caused thoughts may feel the same as genuine thoughts, but have no place in this system. Therefore, contents are, one and all, dependent on proper causes alone. However, some philosophers have responded that the prescribed contents can be produced in aberrant ways, and aberrant contents can be produced by proper causes. Contents and their proper causes may be only contingently related to one another (Fodor 1990).

Donald Davidson offers a third form of externalism that relies on our basic linguistic contact with the world, on the causal history of the contents we derive from language, and on the interpretation of those contents by others. Davidson insists that

…in the simplest and most basic cases, words and sentences derive their meaning from the objects and circumstances in whose presence they were learned (Davidson 1989, 44; Davidson 1991, 197).

In such cases, we derive contents from the world directly; this begins a causal history through which we can manifest the contents. An instantly created swampman – a being with no causal history – could not have the contents we have, or even have contents at all (Davidson 1987, 19). Moreover, since our contents are holistically related to one another, as interpreters we must find each other to be correct in most things. Interpretations, however, change what our contents are, and do so essentially (Davidson 1991, 211). Davidson concludes that our contents are essentially dependent on our basic linguistic contact with the world, on our history of possessing such contents, and on how others interpret us. In response, some philosophers have argued that simple, basic linguistic contact with the world, from which we derive specific contents directly, may not be possible. A causal history may not be relevant to our having contents, and the interpretations of others may have nothing to do with content (Hacker 1998; Leclerc 2005).

Andy Clark and David Chalmers have developed what has been called “active externalism,” which addresses the vehicles of content – paradigmatically, our brains. When certain kinds of objects (for example, notebooks, mobile phones, and so forth) are engaged to perform complex tasks, those objects change the vehicles of our contents as well as how we process information. When processing information, we come to rely upon such objects (for example, phones), such that they literally become parts of our minds, parts of our selves (Clark and Chalmers 1998; Clark 2010). Clark and Chalmers conclude that our minds and selves are “extended” insofar as there is no firm boundary between our bodies, certain objects, and our social worlds. Indeed, other externalists have broadened active or vehicle externalism to claim that there are no isolated persons at all (Noë 2006; Wilson 2010). In response, philosophers have suggested various ways to draw a boundary between the mind and objects, against active or vehicle externalism (Adams and Aizawa 2010). Other philosophers have responded that because active or vehicle externalism implies that the mind expands to absorb objects, it is not externalism at all (Bartlett 2008).

These newer forms of externalism do not rely on Twin Earth thought experiments. Moreover, they do not depend on the causal theory of reference, on reference to hidden structural properties, on incomplete understanding, or on dictated norms for ascribing content. Consequently, these newer forms of externalism avoid various controversial assumptions, as well as the problems associated with them. However, even if one or more of these externalist approaches could overcome its particular objections, all of them, with the possible exception of active or vehicle externalism (which is not universally regarded as a form of externalism), must still face the various problems associated with privileged access, mental causation, and psychological explanation.

6. Content Internalism and Narrow Content

Internalism proposes that our contents are individuated by the properties of our bodies (for example, our brains), and by these alone. According to this view, our contents locally supervene on the properties of our bodies. In the history of philosophy, René Descartes, David Hume, and Gottlob Frege have defended versions of internalism. Contemporary internalists typically respond to the Twin Earth thought experiments that seem to evoke externalist intuitions by offering their own thought experiments (for example, brain-in-a-vat experiments) to evoke internalist intuitions. For example, as David Chalmers notes, clever scientists could arrange a brain such that it has

...the same sort of inputs that a normal embodied brain receives…the brain is connected to a giant computer simulation of the world. The simulation determines which inputs the brain receives. When the brain produces outputs, these are fed back into the simulation. The internal state of the brain is just like that of a normal brain (Chalmers 2005, 33).

Internalists argue that such brain-in-a-vat scenarios are coherent; they contend that envatted brains would have the same contents as we do, despite lacking any causal interaction with external objects. It follows that contents, whether ours or the envatted brain’s, are not essentially dependent on the environment (Horgan, Tienson, and Graham 2005, 299; Farkas 2008, 285). Moderate internalists, such as Jerry Fodor, Brian Loar, and David Chalmers, accept Twin Earth inspired externalist intuitions insofar as they agree that some contents are broad (that is, individuated by our causal interactions with objects). However, these philosophers also argue that some contents are narrow (that is, dependent only on our bodies). Radical internalists, such as Gabriel Segal, reject externalist intuitions altogether and argue that all content is narrow.
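
The local supervenience claim at the heart of internalism can be put more explicitly. The following is a minimal rendering in which the predicate letters are merely illustrative: read B(x) as the total intrinsic bodily state of x, and C(x) as the contents of x:

\forall x \, \forall y \, [\, B(x) = B(y) \rightarrow C(x) = C(y) \,]

That is, no two thinkers can differ in their contents without differing somewhere in their bodily properties. Externalists hold that twins (and envatted brains) satisfy the antecedent while failing the consequent; internalists deny that this is possible.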

Moderate and radical internalists are classed as internalists because they all develop accounts of narrow content, or attempt to show how contents are individuated by the properties of our bodies. Although these accounts of narrow content differ significantly, they are all designed to have particular features. First, narrow content is intended to mirror our perspective, which is the unique set of contents in each person, regardless of how aberrant these turn out to be. Second, narrow content is intended to mirror our patterns of reasoning – again regardless of aberrations. Third, narrow content is intended to be relevant to mental causation and to explaining our behavior, rendering broad content superfluous for that purpose (Chalmers 2002a, 624, 631). As internalists understand narrow content, then, our unique perspective, reasoning, and mental causes are all important – especially when we hope to understand the precise details of ourselves and our fellows.

a. Jerry Fodor and Phenomenological Content

In his early work, Jerry Fodor developed a version of moderate internalism. Fodor concedes that the Twin Earth thought experiments show that some of our contents (for example, water or twater) are broad. Still, for the purposes of scientific psychology, he argues, we must construct some notion of content that respects “methodological solipsism,” which in turn explains why the twins are psychologically the same. Consequently, Fodor holds that narrow content respects syntax, or is specified “without reference to the semantic properties of truth and reference” (Fodor 1981, 240; Fodor 1982, 100). Phenomenologically, the content of “water is wet” is something like “…a universally quantified belief with the content that all the potable, transparent, sailable-on…and so forth, kind of stuff is wet” (Fodor 1982, 111). However, soon after Fodor offered this phenomenological version of narrow content, many philosophers noted that if such content by itself has nothing to do with truth or reference, it is hard to see how it could be content at all, or how it could explain our behavior (Rudder-Baker 1985, 144; Adams, Drebushenko, Fuller, and Stecker 1990, 217).

Fodor responded by slightly changing his position. According to the revised position, although narrow content respects syntax, contents are functions, or “mappings of thoughts and contexts onto truth conditions” (for example, the different truth conditions that obtain on either planet). In other words, before we think any thought in context, we have functions that determine that when we are on one planet we have certain contents, and when on the other we have different ones. If we were switched with our twin, our thoughts would still match across worlds (Fodor 1987, 48). However, Fodor concedes, independent of context, we cannot say what narrow contents the twins share (that is, across planets). Still, he insists that we can approach these contents, albeit indirectly (Fodor 1987, 53). In other words, we can use broad contents, understood in terms of the “primacy of veridical tokening of symbols,” and then infer narrow contents from them (Fodor 1991a, 268). Fodor insists that while twins across worlds have different broad contents, they have the same narrow functions that determine these contents.
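
Fodor’s revised proposal can be pictured schematically. The rendering below is illustrative rather than Fodor’s own notation: let N be the narrow content of the thought “water is wet,” treated as a function from contexts of thought to broad truth conditions:

N(\text{Earth}) = \langle \text{H}_2\text{O is wet} \rangle \qquad N(\text{Twin Earth}) = \langle \text{XYZ is wet} \rangle

The twins share the function N itself, which is why they are psychologically alike; they differ only in the broad truth conditions that N delivers in their respective contexts.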

But as Robert Stalnaker, Ned Block, and others have argued, if narrow contents are but functions that map contexts of thoughts onto truth conditions, they do not themselves have truth conditions – arguably, they are not contents. Narrow content, on this view, is just syntax (Stalnaker 1989, 580; Block 1991, 39, 51). Even if there were a way to define narrow content so that it would not reduce to syntax, it would still be difficult to say what in the phenomenological descriptions must be held constant for them to perform the function of determining different broad contents on either planet (Block 1991, 53). Lastly, as Paul Bernier has noted, even if we could settle on some phenomenological descriptions to be held constant across planets, we could not know that any tokening of symbols really did determine their broad contents correctly (Bernier 1993, 335).

Ernest Lepore, Barry Loewer, and others have added that even if narrow contents could be defined as functions that map thoughts and contexts onto truth conditions, and even if these were short phenomenological descriptions, parts of the descriptions (for example, water or twater “…is wet,” or arthritis or tharthritis “…is painful” and so forth) might themselves be susceptible to new externalist thought experiments. If we could run new thought experiments on terms like “wet” or “painful,” those terms may not be narrow (Lepore and Loewer 1986, 609; Rowlands 2003, 113). Assuming that this is possible, there can be no way to pick out the truly narrow component in such descriptions, rendering this entire account of narrow content hopeless.

In the early 21st century, proponents of “phenomenal intentionality” defended this phenomenological version of narrow content. Experience, these philosophers say, must be characterized as the complex relation “x presents y to z,” where z is an agent. In other words, agents are presented with various apparent properties, relations, and so forth, all with unique feels. Moreover, these presentations can be characterized merely by using “logical, predicative, and indexical expressions” (Horgan, Tienson, and Graham 2005, 301). These philosophers insist that the phenomenology of such presentations is essential to them, such that all who share the presentations also share contents. Phenomenal intentionality, then, is narrow (Horgan, Tienson, and Graham 2005, 302; Horgan and Kriegel 2008, 360). Narrow contents, furthermore, have narrow truth conditions. The narrow truth conditions of “that picture is crooked” may be characterized as “…there is a unique object x, located directly in front of me and visible by me, such that x is a picture and x is hanging crooked” (Horgan, Tienson, and Graham 2005, 313).

Defenders of phenomenal intentionality insist that although it is difficult to formulate narrow truth conditions precisely, we still share them with twins and brains in vats – although envatted brains will not have many of their conditions satisfied (Horgan, Tienson, and Graham 2005, 315; Horgan 2007, 5). Concerning the earlier objections to phenomenological narrow content: if phenomenology is essential to content, there is little worry that content will reduce to syntax, that nothing can be held constant across worlds, or that new externalist thought experiments will succeed. Externalists, of course, argue that phenomenology is not essential to content and that phenomenology is not narrow (Lycan 2001; Tye 2008; Ellis 2010). Because proponents of phenomenal intentionality hold that phenomenology is essential to content, they can respond that such arguments beg the question against them.

b. Brian Loar and David Chalmers on Conceptual Roles and Two-Dimensional Semantics

Brian Loar, David Chalmers, and others have developed another version of moderate internalism. Loar and Chalmers concede that the Twin Earth thought experiments show that some contents are broad (for example, water, twater or arthritis, tharthritis, and so forth), but also argue that our conceptual roles are narrow. Conceptual roles, these philosophers say, mirror our perspective and reasoning, and are all that is relevant to psychological explanations.

Conceptual roles, Colin McGinn notes, can be characterized by our “subjective conditional probability function” on particular attitudes. In other words, roles are determined by our propensity to “assign probability values” to attitudes, regardless of how aberrant these attitudes turn out to be (McGinn 1982, 234). In a famous series of articles, Loar adds that we experience such roles subjectively as beliefs, desires, and so forth. Such subjective states, he says, are too specific to be captured in public discourse and only have “psychological content,” or the content that is mirrored in how we conceive things (Loar 1988a, 101; Loar 1988b, 127). When we conceive things in the same way (for example, as twins across worlds do), we have the same contents. When we conceive things differently (for example, when we have different evaluations of a place), we have different contents. However, Loar admits that psychological contents do not themselves have truth conditions but only project what he calls “realization conditions,” or the set of possible worlds where contents “…would be true,” or satisfied (Loar 1988a, 108; Loar 1988b, 123). Loar concedes that without truth conditions psychological contents may seem odd, but they are narrow. In other words, psychological contents project realization conditions, and do so regardless of whether or not these conditions are realized (Loar 1988b, 135).

David Chalmers builds on this conceptual role account of narrow content, but defines content in terms of our understanding of epistemic possibilities. Considering certain hypotheses, he says, requires us to accept some things a priori and to reject others. Given enough information about the world, agents will be “…in a position to make rational judgments about what their expressions refer to” (Chalmers 2006, 591). Chalmers defines scenarios as “maximally specific sets of epistemic possibilities,” such that the details are set (Chalmers 2002a, 610). By dividing up various epistemic possibilities into scenarios, he says, we assume a “centered world” with ourselves at the center. When we do this, we consider our world as actual and describe our “epistemic intensions” for that place, such that these intensions amount to “…functions from scenarios to truth values” (Chalmers 2002a, 613). Epistemic intensions, though, have epistemic contents that mirror our understanding of certain qualitative terms, or those terms that refer to “certain superficial characteristics of objects…in any world” and reflect our ideal reasoning (for example, our best reflective judgments) about them (Chalmers 2002a, 609; Chalmers 2002b, 147; Chalmers 2006, 586). Given this, our “water” contents are such that “If the world turns out one way, it will turn out that water is H2O; if the world turns out another way, it will turn out that water is XYZ” (Chalmers 2002b, 159).
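
The notion of an epistemic intension can likewise be pictured schematically. What follows is a rough sketch, not Chalmers’ own formalism: let W be the epistemic intension of the thought “water is H2O,” a function from scenarios (considered as actual) to truth values:

W(s_{\text{Earth-like}}) = \text{True} \qquad W(s_{\text{Twin-Earth-like}}) = \text{False}

Here s_{Earth-like} is a scenario in which the dominant clear, drinkable liquid is H2O, and s_{Twin-Earth-like} is one in which it is XYZ. Because twins (and brains in vats) evaluate the same function over the same scenarios, their epistemic contents coincide even though their broad contents differ.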

Since our “water” contents constitute our a priori ideal inferences, the hypothesis that water is XYZ may remain epistemically possible (that is, we cannot rule it out a priori). Chalmers insists that, given this background, epistemic contents really are narrow, because we evaluate them prior to whichever thoughts turn out to be true, such that twins (and brains in vats) reason the same (Chalmers 2002a, 616; Chalmers 2006, 596). Lastly, since epistemic contents do not merely possess realization conditions, or conditions that address “…what would have been” in certain worlds, but are assessed in the actual world, they deliver “epistemic truth conditions” for that place (Chalmers 2002a, 618; Chalmers 2002b, 165; Chalmers 2006, 594). Epistemic contents, then, are not odd but are as valuable as any other kind of content. In sum, Chalmers admits that while twins across worlds have different broad contents, twins (and brains) still reason exactly the same, and must be explained in the same way. Indeed, since cognitive psychology “…is mostly concerned with epistemic contents,” philosophy should also recognize their importance (Chalmers 2002a, 631).

However, as Robert Stalnaker and others have argued, if conceptual roles only have psychological contents that project realization conditions, we can determine those contents only by examining broad contents and then extracting narrow contents from them (Stalnaker 1990, 136). If this is how we determine narrow contents, they are merely derivative, and our access to them is limited. Moreover, Stalnaker says, if psychological contents have actual truth conditions, then they are broad (Stalnaker 1990, 141). Chalmers, though, has responded that on his version of the position, our a priori ideal evaluations of epistemic possibilities in scenarios yield epistemic intensions with epistemic contents, and these contents have epistemic truth conditions of their own. Given these changes, he says, the problems of narrow content do not arise for him. Still, as Christian Nimtz and others have argued, epistemic contents seem merely to be qualitative descriptions of kinds, and so fall victim to the typical modal and epistemological objections to descriptivist theories of reference (Nimtz 2004, 136). Similarly, as Alex Byrne and James Pryor argue, because such descriptions do not yield “substantive identifying knowledge” of kinds, it is doubtful we can know what these contents are, or even that they exist (Byrne and Pryor 2006, 45, 50). Lastly, other externalists have responded that the qualitative terms we use to capture epistemic contents must be semantically neutral. However, since such neutral terms are not available, such contents are beyond our ken (Sawyer 2008, 27-28).

Chalmers has responded that although epistemic contents can be phrased descriptively, “there is no reason to think that grasping an epistemic intension requires any sort of descriptive articulation by a subject” (Chalmers 2002b, 148). Since this is so, he says, his theory is not committed to the description theory of reference. Similarly, Chalmers insists that as long as we have enough information about the world, we do not require substantive identifying knowledge of kinds, but only “…a conditional ability” to identify them (Chalmers 2006, 600). Some philosophers have suggested that repeated contact with actual or hypothetical referents may change and refine this ability, and others have suggested that it may include abductive reasoning (Brogaard 2007, 320). However, Chalmers denies that the descriptivist objection applies to his position, because the conditional ability to identify kinds concerns only our ideal reasoning (for example, our best reflective judgments). Chalmers concedes that our qualitative language must be semantically neutral, but he insists that once we have enough information about the world, “…a qualitative characterization… will suffice” (Chalmers 2006, 603). Such a characterization, he says, will guarantee that twins (and brains in vats) have contents with the same epistemic truth conditions.

c. Radical Internalism

Unlike moderate internalists such as Fodor, Loar, and Chalmers, Gabriel Segal makes the apparently radical move of rejecting Twin Earth inspired externalist intuitions altogether (Segal 2000, 24). Radical internalism offers the virtues of not needing to accommodate externalism, and of avoiding the typical criticisms of narrow content. Internalists, Segal says, are not required to concede that some of our contents are broad and then to develop versions of narrow content as phenomenology, epistemic contents, or conceptual roles beyond this. Indeed, he insists that this move has always been an “unworkable compromise” at best (Segal 2000, 114). Rather, as Segal says, narrow content is “…a variety of ordinary representation” (Segal 2000, 19).

Segal notes that when offering Twin Earth thought experiments, externalists typically assume that the causal theory of reference is true, that we refer to hidden structural properties, that we can have contents while incompletely understanding them, and that tendentious norms for ascribing contents are correct. Moreover, Segal says, externalists seem unable to account for cases of referential failure (for example, cases of failed scientific posits), and so their account of content is implausible on independent grounds (Segal 2000, 32, 55). When all this is revealed, he says, our intuitions about content can easily change, and do. Interpreted properly, on this view, the Twin Earth thought experiments do not show that our contents are broad, but rather reveal that “psychology, as it is practiced by the folk and the scientist, is already, at root, internalist” (Segal 2000, 122).

Segal describes our narrow contents as “motleys,” or organic entities that can “persist through changes of extension” yet still evolve. Motleys “…leave open the possibility of referring to many kinds,” depending on the circumstances (Segal 2000, 77, 132). Although Segal disavows any general method for determining when contents change or remain the same, he insists that we can employ various charitable means to determine this. When there is any mystery, we can construct neologisms to be more accurate. Neologisms allow us to gauge just how much our fellows deviate from us, and even to factor these deviations out (Segal 2000, 141; Segal 2008, 16). Regardless of these details, motleys are shared by twins (and brains in vats). Segal suggests that we follow the lead of experimental psychologists who track the growth and changes in our concepts by using scientific methods and models, but who only recognize narrow contents (Segal 2000, 155).

Importantly, because Segal rejects Twin Earth inspired externalist intuitions, externalists cannot accuse his version of narrow content of not being content, or of being broad content in disguise, without begging the question against him. Still, as Jessica Brown has argued, his account is incomplete. Segal, she notes, assumes a neo-Fregean account of content which she paraphrases as

If S rationally assents to P(t1) and dissents from or abstains on the truth value of P(t2), where P is an extensional context, then for S, t1 and t2 have different contents, and S associates different concepts with them (Brown 2002, 659).

Brown insists that Segal assumes this neo-Fregean principle, but “…fails to provide any evidence” for it (Brown 2002, 660). Given this lacuna in his argument, she says, externalists might either try to accommodate this principle, or to reject it outright. Externalists have done both. Although Segal does not explicitly defend this neo-Fregean principle, he begins an implicit defense elsewhere (Segal 2003, 425). Externalists, by contrast, have struggled to accommodate this neo-Fregean principle and those externalists who have abandoned it have rarely offered replacements (Kimbrough 1989, 480).

Because Segal bases his radical internalism on rejecting externalist intuitions altogether, its very plausibility rests on this rejection. Externalists typically take their intuitions about content to be obvious and expect others to agree. Still, as Segal notes, although externalist intuitions are popular, it is reasonable to reject them. By doing this, he hopes to “reset the dialectical balance” between externalism and internalism (Segal 2000, 126). Radical internalism, he says, may not be so radical after all.

7. Conclusion

Externalism and internalism, as we have seen, address how we individuate the content of our attitudes, or what makes those contents what they are. Putnam, Burge, and other externalists insist that contents are individuated by our causal interactions with the natural and social world. Externalists thereby incur problems about privileged access to our contents, as well as problems about mental causation and psychological explanation. Externalist responses to these issues have been numerous, and have dominated the literature. By contrast, Chalmers, Segal, and other content internalists argue that contents can be individuated by our bodies alone. Internalists, though, have faced the charge that since narrow content does not have truth conditions, it is not content, and that if such content does have truth conditions, it is just broad content in disguise. Internalist responses to these charges have ranged from trying to show how narrow content is not vulnerable to these criticisms, to suggesting that the externalist intuitions upon which the charges rest should themselves be discarded.

Externalists and internalists have very different intuitions about the mind. Externalists seem unable to imagine minds entirely independent of the world (repeatedly claiming that this possibility would revise our common practices, or is incoherent in some way). By contrast, internalists see the split between mind and world as fundamental to understanding ourselves and our fellows, and so concentrate on questions of perspective, reasoning, and so forth. Given these contrasting intuitions, both sides often mischaracterize the other, and each sees the other as clinging to doubtful, biased, or even incoherent conventions. Unless some way to test such intuitions is developed, proponents of both sides will continue to cleave to their assumptions, each convinced that the burden of proof must be borne by the opposition.

8. References and Further Reading

  • Adams, F., Drebushenko, D., Fuller, G., and Stecker, R. 1990. “Narrow Content: Fodor's Folly.” Mind and Language 5, 213-229.
  • Adams, F. & Aizawa, K. 2010. “Defending the Bounds of Cognition.” In R. Menary (ed.) The Extended Mind, Cambridge, MA: MIT Press, 67-80.
  • Bach, K. 1987. Thought and Reference. Oxford: Oxford University Press.
  • Ball, D. 2007. “Twin Earth Externalism and Concept Possession.” Australasian Journal of Philosophy, 85.3, 457-472.
  • Bartlett G. 2008. “Whither Internalism. How Should Internalists Respond to the Extended Mind.” Metaphilosophy 39.2, 163-184.
  • Bernecker, S. 2010. Memory: a Philosophical Study. Oxford: Oxford University Press.
  • Bernier, P. 1993. “Narrow Content, Context of Thought, and Asymmetric Dependency.” Mind and Language 8, 327-342.
  • Besson, C. 2012. “Externalism and Empty Natural Kind Terms.” Erkenntnis 76, 403-425.
  • Biggs, S. and Wilson, J. Forthcoming. “Carnap, the Necessary A Posteriori, and Metaphysical Nihilism.” In Stephan Blatti and Sandra Lapointe (eds.), Ontology After Carnap. Oxford: Oxford University Press.
  • Block, N., 1991. “What Narrow Content is Not.” In Lepore and Rey eds., Meaning and Mind: Fodor and His Critics. Oxford: Blackwell Publishers, 33-64.
  • Boghossian, P. 1989. “Content and Self-Knowledge.” In Ludlow and Martin eds., Externalism and Self-Knowledge. Stanford: CSLI Publications, 149-175.
  • Boghossian, P. 1994. “The Transparency of Mental Content.” Philosophical Perspectives 8, 33-50.
  • Boghossian, P. 1998. “What Can the Externalist Know A Priori?” Philosophical Issues 9, 197-211.
  • Brewer, B. 2000. “Externalism and A Priori Knowledge of Empirical Facts.” In Peacocke and Boghossian eds., New Essays on the A Priori. Oxford: Oxford University Press.
  • Brogaard, B. 2007. “That May Be Jupiter: A Heuristic for Thinking Two-Dimensionally.” American Philosophical Quarterly 44.4, 315-328.
  • Brown, J. 1995. “The Incompatibility of Anti-Individualism and Privileged Access.” Analysis 55.3, 149-156.
  • Brown, J. 2002. “Review of A Slim Book about Narrow Content.” Philosophical Quarterly 52, 657-660.
  • Brueckner, A. 1992. “What the Anti-Individualist can Know A Priori.” Reprinted in Chalmers, ed. The Philosophy of Mind: Classic and Contemporary Readings. Oxford: Oxford University Press, 639-651.
  • Burge, T. 1979. “Individualism and the Mental.” In French, P., Uehling, T., and Wettstein, H., eds., Midwest Studies in Philosophy 4. Minneapolis: University of Minnesota Press, 73-121.
  • Burge, T. 1982a. “Two Thought Experiments Reviewed.” Notre Dame Journal of Formal Logic 23.3, 284-293.
  • Burge, T. 1982b. “Other Bodies.” In Woodfield, Andrew, ed., Thought and Object: Essays on Intentionality. Oxford: Oxford University Press, 97-121.
  • Burge, T. 1985. “Cartesian Error and the Objectivity of Perception.” In: Pettit, P. and McDowell, J., eds. Subject, Thought, and Context. Oxford: Clarendon Press, 117-135.
  • Burge, T. 1986 “Intellectual Norms and the Foundations of Mind.” Journal of Philosophy 83, 697-720.
  • Burge, T. 1988. “Individualism and Self-Knowledge.” Journal of Philosophy 85, 647-663.
  • Burge, T. 1989a. “Wherein is Language Social?” Reprinted in Owens J. ed., Propositional Attitudes: The Role of Content in Logic, Language and Mind Cambridge: Cambridge University Press, 113-131.
  • Burge, T. 1989b. “Individuation and Causation in Psychology.” Pacific Philosophical Quarterly 70, 303-322.
  • Burge, T. 1995. “Intentional Properties and Causation.” In C. Macdonald (ed.), Philosophy of Psychology: Debates on Psychological Explanation. Cambridge: Blackwell, 225-234.
  • Burge, T. 1998. “Memory and Self-Knowledge.” In Ludlow and Martin eds., Externalism and Self-Knowledge. Stanford: CLSI Publications, 351-370.
  • Burge, T. 2003a. “Replies from Tyler Burge.” In Frapolli, M. and Romero, E., eds. Meaning, Basic Self-Knowledge, and Mind. Stanford, Calif.: CSLI Publications, 250-282.
  • Burge, T. 2003b. “Social Anti-Individualism, Objective Reference.” Philosophy and Phenomenological Research 73, 682-692.
  • Burge, T. 2003c. “Some Reflections on Scepticism: Reply to Stroud.” In Martin Hahn & B. Ramberg (eds.), Reflections and Replies: Essays on the Philosophy of Tyler Burge. Mass: MIT Press, 335-346.
  • Burge, T., 2007. “Postscript to ‘Individualism and the Mental.’” Reprinted in Foundations of Mind, by Tyler Burge. Oxford: Oxford University Press, 151-182.
  • Byrne, A. and J. Pryor, 2006, “Bad Intensions.” In Two-Dimensional Semantics: Foundations and Applications, M. Garcia-Carprintero and J. Macia (eds.), Oxford: Oxford University Press, 38–54.
  • Chalmers, D. 2002a. “The Components of Content.” Reprinted in Chalmers, ed. The Philosophy of Mind: Classic and Contemporary Readings. Oxford: Oxford University Press, 607-633.
  • Chalmers, D. 2002b. “On Sense and Intension.” In Philosophical Perspectives; Language and Mind, 16, James Tomberlin, ed., 135-182.
  • Chalmers, D. 2005. “The Matrix as Metaphysics.” Reprinted in Philosophy and Science Fiction: From Time Travel to Super Intelligence, ed. Schneider, S. London: Wiley-Blackwell Press, 33-53.
  • Chalmers, D. 2006. “Two-Dimensional Semantics.” In Oxford Handbook of Philosophy of Language, E. Lepore and B. Smith eds., Oxford: Oxford University Press, 575–606.
  • Clark, A. and Chalmers, D. 1998. “The Extended Mind.” Analysis 58, 10-23.
  • Clark, A. 2010. Supersizing the Mind: Embodiment, Action, and Cognitive Extension. Oxford: Oxford University Press.
  • Crane, T. 1991. “All the Difference in the World.” Philosophical Quarterly 41, 3-25.
  • Davidson, D. 1987. “Knowing One’s Own Mind.” Reprinted in Davidson, Subjective, Intersubjective, Objective. Oxford: Oxford University Press, 15-39.
  • Davidson, D. 1989. “The Myth of the Subjective.” Reprinted in Davidson, Subjective, Intersubjective, Objective. Oxford: Oxford University Press, 39-52.
  • Davidson, D. 1991. “Epistemology Externalized.” Reprinted in Davidson, Subjective, Intersubjective, Objective. Oxford: Oxford University Press, 193-204.
  • De Gaynesford, M. 1996. “How Wrong can One Be?” Proceedings of the Aristotelian Society 96, 387-394.
  • Descartes, R. 1934-1955. The Philosophical Works, vols. 1 and 2. Haldane, E. S. and Ross, G. R. T., trs. Cambridge: Cambridge University Press.
  • Ellis, J. 2010. “Phenomenal Character, Phenomenal Consciousness, and Externalism.” Philosophical Studies, 147.2, 279-298.
  • Evans, G. 1981. “Understanding Demonstratives.” In Evans, Collected Papers. Oxford: Oxford University Press, 291-321.
  • Evans, G. 1982. The Varieties of Reference. Oxford: Oxford University Press.
  • Falvey, K. and Owens, J. 1994. “Externalism, Self-Knowledge, and Skepticism.” The Philosophical Review 103.1, 107-137.
  • Farkas, K. 2003. “What is Externalism?” Philosophical Studies 112.3, 187-201.
  • Farkas, K. 2008. “Phenomenal Intentionality without Compromise.” The Monist 91.2, 273-293.
  • Fodor, J. 1981. “Methodological Solipsism Considered as a Research Strategy in Cognitive Science.” In Fodor, J. RePresentations: Philosophical Essays on the Foundations of Cognitive Science. Cambridge MA: MIT Press, 225-256.
  • Fodor, J. 1982. “Cognitive Science and the Twin Earth Problem.” Notre Dame Journal of Formal Logic 23, 97-115.
  • Fodor, J. 1987. Psychosemantics: The Problem of Meaning in the Philosophy of Mind. Cambridge, Mass: MIT Press.
  • Fodor, J. 1990. A Theory of Content and Other Essays. Cambridge, Mass: MIT Press.
  • Fodor, J. 1991a. “Replies.” In Loewer, B. and Rey, G. eds. Meaning in Mind: Fodor and His Critics. Cambridge, Mass.: B. Blackwell, 255-319.
  • Fodor, J. 1991b. “A Modal Argument for Narrow Content.” Journal of Philosophy 88, 5-26.
  • Gallois, A. and O’Leary-Hawthorne, J. 1996. “Externalism and Skepticism.” Philosophical Studies 81.1, 1-26.
  • Hacker, P. 1998. “Davidson and Externalism.” Philosophy 73, 539-552.
  • Horgan, T. 2007. “Agentive Phenomenal Intentionality and the Limits of Introspection.” Psyche 13.1, 1-29.
  • Horgan, T. and Kriegel, U. 2008. “Phenomenal Intentionality Meets the Extended Mind.” The Monist 91, 353-380.
  • Horgan, T., Tienson, J., and Graham, G. 2005. “Phenomenal Intentionality and the Brain in a Vat.” In The Externalist Challenge, ed. Schantz. Berlin and New York: de Gruyter, 297-319.
  • Jacob, P. 1992. “Externalism and Mental Causation.” Proceedings of the Aristotelian Society 92, 203-219.
  • Jackson, F. 1998. “Reference and Description Revisited.” in Philosophical Perspectives, 12, Language, Mind and Ontology, 201-218.
  • Jackson, F. 2003. “Narrow Content and Representation – Or Twin Earth Revisited.” Proceedings and Addresses of the American Philosophical Association 77.2, 55-70.
  • Kimbrough, S. 1989. “Anti-Individualism and Fregeanism.” Philosophical Quarterly 193, 471-483.
  • Korman, D 2006. “What Should Externalists Say about Dry Earth?” Journal of Philosophy 103, 503-520.
  • Leclerc, A. 2005. “Davidson’s Externalism and Swampman’s Troublesome Biography.” Principia 9, 159-175.
  • Lepore E., and Loewer B. 1986. “Solipsistic Semantics.” Midwest Studies in Philosophy 10, 595-613.
  • Loar, B., 1988a. “Social Content and Psychological Content.” In Grim and Merrill, eds., Contents of Thought: Proceedings of the Oberlin Colloqium in Philosophy, Tucson, Arizona: University of Arizona Press, 99-115.
  • Loar, B. 1988b. “A New Kind of Content.” In Grim, R. H. and Merrill, D. D., eds. Contents of Thought: Proceedings of the Oberlin Colloquium in Philosophy. Tucson, Arizona: University of Arizona Press, 117-139.
  • Ludlow, P. 1995. “Externalism, Self-Knowledge, and the Relevance of Slow Switching.” Analysis 55.1, 45-49.
  • Ludlow, P. 1997. “On the Relevance of Slow Switching.” Analysis, 57.4, 285-286.
  • Lycan. W. 2001. “The Case for Phenomenal Externalism.” Philosophical Perspectives 15, 17-35.
  • Machery, E., Mallon, R., Nichols, S., and Stich, S. 2009. “Against Arguments from Reference.” Philosophy and Phenomenological Research 79.2, 332-356.
  • McCulloch, G., 1989 Game of the Name: Introducing Logic, Language, and Mind. Oxford: Clarendon Press.
  • McCulloch, G., 1995 Mind and its World. London: Routledge Press.
  • McDowell, J. 1986. “Singular Thought and the Extent of Inner Space.” In McDowell and Pettit eds., Subject, Thought, Context. Oxford: Clarendon Press, 137-169.
  • McGinn, C. 1982. “The Structure of Content,” In Woodfield, A., ed. Thought and Object: Essays on Intentionality. Oxford: Oxford University Press, 207-254.
  • McKinsey, M. 1991. “Anti-Individualism and Privileged Access.” Analysis 51, 9-15.
  • McLaughlin, B. and Tye, M. 1998. “Is Content Externalism Compatible with Privileged Access?” Philosophical Review 107.3, 349-380.
  • Mellor, D. H. 1977. “Natural Kinds.” British Journal for the Philosophy of Science 28, 299-312.
  • Millikan, R. G. 2000. On Clear and Confused Concepts. Cambridge: Cambridge University Press.
  • Millikan, R. G. 2004. “Existence Proof for a Viable Externalism.” In The Externalist Challenge, ed. Schantz. Berlin and New York: de Gruyter, 227-238.
  • Nimtz, C. 2004. “Two-Dimensionalism and Natural Kind Terms.” Synthese 138.1, 125-48.
  • Noë, A. 2006. “Experience without the Head.” In Gendler and Hawthorne, eds. Perceptual Experience. Oxford: Oxford University Press, 411-434.
  • Noonan, H. 1991. “Object-Dependent Thoughts and Psychological Redundancy,” Analysis 51, 1-9.
  • Norris, C. 2003. “Twin Earth Revisited: Modal Realism and Causal Explanation.” Reprinted in Norris, The Philosophy of Language and the Challenge to Scientific Realism. London: Routledge Press, 143-174.
  • Nuccetelli, S. 2003. “Knowing that One Knows What One is Talking About.” In New Essays on Semantic Externalism and Self-Knowledge, ed. Susana Nuccetelli. Cambridge: Cambridge University Press, 169-184.
  • Peacocke, C 1995. “Content.” In A Companion to the Philosophy of Mind, S. Guttenplan, ed. Cambridge: Blackwell Press, 219-225.
  • Putnam, H. 1975. “The Meaning of ‘Meaning.’” In Mind, Language and Reality: Philosophical Papers, Volume 2. Cambridge: Cambridge University Press, 215-271.
  • Putnam, H. 1981. Reason, Truth, and History. Cambridge: Cambridge University Press.
  • Putnam, H. 1986. “Meaning Holism.” Reprinted in Conant, ed., Realism with a Human Face, by Hilary Putnam. Cambridge, Mass: Harvard University Press, 278-303.
  • Putnam, 1988. Representation and Reality. Cambridge, Mass: A Bradford Book , MIT Press.
  • Putnam, H. 1996. “Introduction.” In: The Twin Earth Chronicles, eds. Pessin and Goldberg, i-xii.
  • Rowlands, M. 2003. Externalism: Putting Mind and World Back Together Again. New York: McGill-Queens Press.
  • Rudder-Baker, L. 1985. “A Farewell to Functionalism.” In Silvers, S. ed. Rerepresentations: Readings in the Philosophy of Mental Representation. Holland: Kluwer Academic Publishers, 137-149.
  • Sainsbury, M. 2007. Reality without Referents. Oxford. Oxford University Press.
  • Sawyer, S. 1998. “Privileged Access to the World.” Australasian Journal of Philosophy, 76.4, 523-533.
  • Sawyer, S. 1999. “An Externalist Account of Introspective Experience.” Pacific Philosophical Quarterly, 4.4, 358-374.
  • Sawyer, 2008. “There is no Viable Notion of Narrow Content.” In Contemporary Debates in the Philosophy of Mind, eds. McLaughlin and Cohen. London: Blackwell Press, 20-34.
  • Segal, 1989. “Return of the Individual.” Mind 98, 39-57.
  • Segal, G. 2000. A Slim Book about Narrow Content. Cambridge Mass: MIT Press.
  • Segal, G. 2003. “Ignorance of Meaning.” In The Epistemology of Language, Barber, A. ed. Oxford: Oxford University Press, 415-431.
  • Segal, 2008. “Cognitive Content and Propositional Attitude Attributions.” In Contemporary Debates in the Philosophy of Mind, eds. McLaughlin and Cohen. London: Blackwell Press, 4-19.
  • Stalnaker, R. 1989. “On What’s in the Head.” Reprinted in Rosenthal, ed., The Nature of Mind. Oxford: Oxford University Press, 576-590.
  • Stalnaker, R. 1990. “Narrow Content.” In Anderson, C.A. and Owens, J., eds. Propositional Attitudes: The Role of Content in Logic, Language and Mind. Stanford: Center for the Study of Language and Information, 131-148.
  • Tye, M. 2008. Consciousness Revisited; Materialism without Phenomenal Concepts. Cambridge, Mass: MIT Press.
  • Van Gulick, R. 1990. “Metaphysical Arguments for Internalism and Why They Do Not Work.” In Silvers, Stuart, ed. Rerepresentations: Readings in the Philosophy of Mental Representation. Holland: Kluwer Academic Publishers, 153-158.
  • Warfield, T. 1997. “Externalism, Self-Knowledge, and the Irrelevance of Slow Switching.” Analysis 52, 232-237.
  • Williamson, T. 2002. Knowledge and its Limits. Oxford: Oxford University Press.
  • Wilson, R. 1995. Cartesian Psychology and Physical Minds: Individualism and the Sciences of the Mind. Cambridge: Cambridge University Press.
  • Wilson, R. 2010. “Meaning, Making, and the Mind of the Externalist.” In: Menary, ed. The Extended Mind. Cambridge, Mass. MIT Press, 167-189.
  • Zemach, E. 1976. “Putnam on the Reference of Substance Kind Terms.” Journal of Philosophy 88, 116-127.

 

Author Information

Basil Smith
Email: Bsmith108@saddleback.edu
Saddleback College
U. S. A.

Consciousness

Explaining the nature of consciousness is one of the most important and perplexing areas of philosophy, but the concept is notoriously ambiguous. The abstract noun “consciousness” is not frequently used by itself in the contemporary literature; the word derives from the Latin con (with) and scire (to know). Perhaps the most commonly used contemporary notion of a conscious mental state is captured by Thomas Nagel’s famous “what it is like” sense (Nagel 1974). When I am in a conscious mental state, there is something it is like for me to be in that state from the subjective or first-person point of view. But how are we to understand this? For instance, how is the conscious mental state related to the body? Can consciousness be explained in terms of brain activity? What makes a mental state a conscious mental state? The problem of consciousness is arguably the most central issue in current philosophy of mind and is also importantly related to major traditional topics in metaphysics, such as the possibility of immortality and the belief in free will. This article focuses on Western theories and conceptions of consciousness, especially as found in contemporary analytic philosophy of mind.

The two broad, traditional and competing theories of mind are dualism and materialism (or physicalism). While there are many versions of each, the former generally holds that the conscious mind or a conscious mental state is non-physical in some sense, whereas the latter holds that, to put it crudely, the mind is the brain, or is caused by neural activity. It is against this general backdrop that many answers to the above questions are formulated and developed. There are also many familiar objections to both materialism and dualism. For example, it is often said that materialism cannot truly explain just how or why some brain states are conscious, and that there is an important “explanatory gap” between mind and matter. On the other hand, dualism faces the problem of explaining how a non-physical substance or mental state can causally interact with the physical body.

Some philosophers attempt to explain consciousness directly in neurophysiological or physical terms, while others offer cognitive theories of consciousness whereby conscious mental states are reduced to some kind of representational relation between mental states and the world. There are a number of such representational theories of consciousness currently on the market, including higher-order theories which hold that what makes a mental state conscious is that the subject is aware of it in some sense. The relationship between consciousness and science is also central in much current theorizing on this topic: How does the brain “bind together” various sensory inputs to produce a unified subjective experience? What are the neural correlates of consciousness? What can be learned from abnormal psychology which might help us to understand normal consciousness? To what extent are animal minds different from human minds? Could an appropriately programmed machine be conscious?

Table of Contents

  1. Terminological Matters: Various Concepts of Consciousness
  2. Some History on the Topic
  3. The Metaphysics of Consciousness: Materialism vs. Dualism
    1. Dualism: General Support and Related Issues
      1. Substance Dualism and Objections
      2. Other Forms of Dualism
    2. Materialism: General Support
      1. Objection 1: The Explanatory Gap and The Hard Problem
      2. Objection 2: The Knowledge Argument
      3. Objection 3: Mysterianism
      4. Objection 4: Zombies
      5. Varieties of Materialism
  4. Specific Theories of Consciousness
    1. Neural Theories
    2. Representational Theories of Consciousness
      1. First-Order Representationalism
      2. Higher-Order Representationalism
      3. Hybrid Representational Accounts
    3. Other Cognitive Theories
    4. Quantum Approaches
  5. Consciousness and Science: Key Issues
    1. The Unity of Consciousness/The Binding Problem
    2. The Neural Correlates of Consciousness (NCCs)
    3. Philosophical Psychopathology
  6. Animal and Machine Consciousness
  7. References and Further Reading

1. Terminological Matters: Various Concepts of Consciousness

The concept of consciousness is notoriously ambiguous. It is important first to make several distinctions and to define related terms. The abstract noun “consciousness” is not often used in the contemporary literature, though it should be noted that it is originally derived from the Latin con (with) and scire (to know). Thus, “consciousness” has etymological ties to one’s ability to know and perceive, and should not be confused with conscience, which has the much more specific moral connotation of knowing when one has done or is doing something wrong. Through consciousness, one can have knowledge of the external world or one’s own mental states. The primary contemporary interest lies more in the use of the expressions “x is conscious” or “x is conscious of y.” Under the former category, perhaps most important is the distinction between state and creature consciousness (Rosenthal 1993a). We sometimes speak of an individual mental state, such as a pain or perception, as conscious. On the other hand, we also often speak of organisms or creatures as conscious, such as when we say “human beings are conscious” or “dogs are conscious.” Creature consciousness is also simply meant to refer to the fact that an organism is awake, as opposed to sleeping or in a coma. However, some kind of state consciousness is often implied by creature consciousness, that is, the organism is having conscious mental states. Due to the lack of a direct object in the expression “x is conscious,” this is usually referred to as intransitive consciousness, in contrast to transitive consciousness where the locution “x is conscious of y” is used (Rosenthal 1993a, 1997). Most contemporary theories of consciousness are aimed at explaining state consciousness; that is, explaining what makes a mental state a conscious mental state.

It might seem that “conscious” is synonymous with, say, “awareness” or “experience” or “attention.” However, it is crucial to recognize that this is not generally accepted today. For example, though perhaps somewhat atypical, one might hold that there are even unconscious experiences, depending of course on how the term “experience” is defined (Carruthers 2000). More common is the belief that we can be aware of external objects in some unconscious sense, for example, during cases of subliminal perception. The expression “conscious awareness” does not therefore seem to be redundant. Finally, it is not clear that consciousness ought to be restricted to attention. It seems plausible to suppose that one is conscious (in some sense) of objects in one’s peripheral visual field even though one is only attending to some narrow (focal) set of objects within that visual field.

Perhaps the most fundamental and commonly used notion of “conscious” is captured by Thomas Nagel’s famous “what it is like” sense (Nagel 1974). When I am in a conscious mental state, there is “something it is like” for me to be in that state from the subjective or first-person point of view. When I am, for example, smelling a rose or having a conscious visual experience, there is something it “seems” or “feels” like from my perspective. An organism, such as a bat, is conscious if it is able to experience the outer world through its (echo-locatory) senses. There is also something it is like to be a conscious creature whereas there is nothing it is like to be, for example, a table or tree. This is primarily the sense of “conscious state” that will be used throughout this entry. There is still, though, a cluster of expressions and terms related to Nagel’s sense, and some authors simply stipulate the way that they use such terms. For example, philosophers sometimes refer to conscious states as phenomenal or qualitative states. More technically, philosophers often view such states as having qualitative properties called “qualia” (pronounced like “kwal’ ee uh”; the singular is quale). There is significant disagreement over the nature, and even the existence, of qualia, but they are perhaps most frequently understood as the felt properties or qualities of conscious states.

Ned Block (1995) makes an often-cited distinction between phenomenal consciousness (or “phenomenality”) and access consciousness. The former is very much in line with the Nagelian notion described above. However, Block also defines the quite different notion of access consciousness in terms of a mental state’s relationship with other mental states; for example, a mental state’s “availability for use in reasoning and rationally guiding speech and action” (Block 1995: 227). This would, for example, count a visual perception as (access) conscious not because it has the “what it’s likeness” of phenomenal states, but rather because it carries visual information which is generally available for use by the organism, regardless of whether or not it has any qualitative properties. Access consciousness is therefore more of a functional notion; that is, concerned with what such states do. Although this concept of consciousness is certainly very important in cognitive science and philosophy of mind generally, not everyone agrees that access consciousness deserves to be called “consciousness” in any important sense. Block himself argues that neither sense of consciousness implies the other, while others urge that there is a more intimate connection between the two.

Finally, it is helpful to distinguish between consciousness and self-consciousness, which plausibly involves some kind of awareness or consciousness of one’s own mental states (instead of something out in the world). Self-consciousness arguably comes in degrees of sophistication ranging from minimal bodily self-awareness to the ability to reason and reflect on one’s own mental states, such as one’s beliefs and desires. Some important historical figures have even held that consciousness entails some form of self-consciousness (Kant 1781/1965, Sartre 1956), a view shared by some contemporary philosophers (Gennaro 1996a, Kriegel 2004).

2. Some History on the Topic

Interest in the nature of conscious experience has no doubt been around for as long as there have been reflective humans. It would be impossible here to survey the entire history, but a few highlights are in order. In the history of Western philosophy, which is the focus of this entry, important writings on human nature and the soul and mind go back to ancient philosophers, such as Plato. More sophisticated work on the nature of consciousness and perception can be found in the writings of Plato’s most famous student, Aristotle (see Caston 2002), and then throughout the later Medieval period. It is, however, with the work of René Descartes (1596-1650) and his successors in the early modern period of philosophy that consciousness and the relationship between the mind and body took center stage. As we shall see, Descartes argued that the mind is a non-physical substance distinct from the body. He also did not believe in the existence of unconscious mental states, a view certainly not widely held today. Descartes defined “thinking” very broadly to include virtually every kind of mental state and urged that consciousness is essential to thought. Our mental states are, according to Descartes, infallibly transparent to introspection. John Locke (1689/1975) held a similar position regarding the connection between mentality and consciousness, but was far less committed on the exact metaphysical nature of the mind.

Perhaps the most important philosopher of the period explicitly to endorse the existence of unconscious mental states was G.W. Leibniz (1686/1991, 1720/1925). Although Leibniz also believed in the immaterial nature of mental substances (which he called “monads”), he recognized the existence of what he called “petites perceptions,” which are basically unconscious perceptions. He also importantly distinguished between perception and apperception, roughly the difference between outer-directed consciousness and self-consciousness (see Gennaro 1999 for some discussion). The most important detailed theory of mind in the early modern period was developed by Immanuel Kant. His main work Critique of Pure Reason (1781/1965) is as dense as it is important, and cannot easily be summarized in this context. Although he owes a great debt to his immediate predecessors, Kant is arguably the most important philosopher since Plato and Aristotle and is highly relevant today. Kant basically thought that an adequate account of phenomenal consciousness involved far more than any of his predecessors had considered. There are important mental structures which are “presupposed” in conscious experience, and Kant presented an elaborate theory as to what those structures are, which, in turn, had other important implications. He, like Leibniz, saw the need to postulate the existence of unconscious mental states and mechanisms in order to provide an adequate theory of mind (Kitcher 1990 and Brook 1994 are two excellent books on Kant’s theory of mind).

Over the past one hundred years or so, however, research on consciousness has taken off in many important directions. In psychology, despite the virtual banishment of consciousness by behaviorist psychologists (e.g., Skinner 1953), there were those deeply interested in consciousness and various introspective (or “first-person”) methods of investigating the mind. The writings of such figures as Wilhelm Wundt (1897), William James (1890) and Edward Titchener (1901) are good examples of this approach. Franz Brentano (1874/1973) also had a profound effect on some contemporary theories of consciousness. Similar introspectionist approaches were used by those in the so-called “phenomenological” tradition in philosophy, such as in the writings of Edmund Husserl (1913/1931, 1929/1960) and Martin Heidegger (1927/1962). The work of Sigmund Freud was very important, at minimum, in bringing about the near-universal acceptance of the existence of unconscious mental states and processes.

It must, however, be kept in mind that none of the above had very much scientific knowledge about the detailed workings of the brain. The relatively recent development of neurophysiology is, in part, also responsible for the unprecedented interdisciplinary research interest in consciousness, particularly since the 1980s. There are now several important journals devoted entirely to the study of consciousness: Consciousness and Cognition, Journal of Consciousness Studies, and Psyche. There are also major annual conferences sponsored by worldwide professional organizations, such as the Association for the Scientific Study of Consciousness, and an entire book series called “Advances in Consciousness Research” published by John Benjamins. (For a small sample of introductory texts and important anthologies, see Kim 1996, Gennaro 1996b, Block et al. 1997, Seager 1999, Chalmers 2002, Baars et al. 2003, Blackmore 2004, Campbell 2005, Velmans and Schneider 2007, Zelazo et al. 2007, Revonsuo 2010.)

3. The Metaphysics of Consciousness: Materialism vs. Dualism

Metaphysics is the branch of philosophy concerned with the ultimate nature of reality. There are two broad traditional and competing metaphysical views concerning the nature of the mind and conscious mental states: dualism and materialism. While there are many versions of each, the former generally holds that the conscious mind or a conscious mental state is non-physical in some sense. On the other hand, materialists hold that the mind is the brain, or, more accurately, that conscious mental activity is identical with neural activity. It is important to recognize that by non-physical, dualists do not merely mean “not visible to the naked eye.” Many physical things fit this description, such as the atoms which make up the air in a typical room. For something to be non-physical, it must literally be outside the realm of physics; that is, not in space at all and undetectable in principle by the instruments of physics. It is equally important to recognize that the category “physical” is broader than the category “material.” Materialists are called such because there is the tendency to view the brain, a material thing, as the most likely physical candidate to identify with the mind. However, something might be physical but not material in this sense, such as an electromagnetic or energy field. One might therefore instead be a “physicalist” in some broader sense and still not a dualist. Thus, to say that the mind is non-physical is to say something much stronger than that it is non-material. Dualists, then, tend to believe that conscious mental states or minds are radically different from anything in the physical world at all.

a. Dualism: General Support and Related Issues

There are a number of reasons why some version of dualism has been held throughout the centuries. For one thing, especially from the introspective or first-person perspective, our conscious mental states just do not seem like physical things or processes. That is, when we reflect on our conscious perceptions, pains, and desires, they do not seem to be physical in any sense. Consciousness seems to be a unique aspect of the world not to be understood in any physical way. Although materialists will urge that this completely ignores the more scientific third-person perspective on the nature of consciousness and mind, this idea continues to have force for many today. Indeed, it is arguably the crucial underlying intuition behind historically significant “conceivability arguments” against materialism and for dualism. Such arguments typically reason from the premise that one can conceive of one’s conscious states existing without one’s body or, conversely, that one can imagine one’s own physical duplicate without consciousness at all (see section 3b.iv). The metaphysical conclusion ultimately drawn is that consciousness cannot be identical with anything physical, partly because there is no essential conceptual connection between the mental and the physical. Arguments such as these go back to Descartes and continue to be used today in various ways (Kripke 1972, Chalmers 1996), but it is highly controversial as to whether they succeed in showing that materialism is false. Materialists have replied in various ways to such arguments and the relevant literature has grown dramatically in recent years.

Historically, there is also the clear link between dualism and a belief in immortality, and hence a more theistic perspective than one tends to find among materialists. Indeed, belief in dualism is often explicitly theologically motivated. If the conscious mind is not physical, it seems more plausible to believe in the possibility of life after bodily death. On the other hand, if conscious mental activity is identical with brain activity, then it would seem that when all brain activity ceases, so do all conscious experiences, and thus there is no immortality. After all, what do many people believe continues after bodily death? Presumably, one’s own conscious thoughts, memories, experiences, beliefs, and so on. There is perhaps a similar historical connection to a belief in free will, which is of course a major topic in its own right. For our purposes, it suffices to say that, on some definitions of what it is to act freely, such ability seems almost “supernatural” in the sense that one’s conscious decisions can alter the otherwise deterministic sequence of events in nature. To put it another way: If we are entirely physical beings, as the materialist holds, then mustn’t all of the brain activity and behavior in question be determined by the laws of nature? Although materialism may not logically rule out immortality or free will, materialists will likely reply that such traditional, perhaps even outdated or pre-scientific beliefs simply ought to be rejected to the extent that they conflict with materialism. After all, if the weight of the evidence points toward materialism and away from dualism, then so much the worse for those related views.

One might wonder: “Even if the mind is physical, what about the soul?” Maybe it’s the soul, not the mind, which is non-physical, as one might be told in many religious traditions. While it is true that the term “soul” (or “spirit”) is often used instead of “mind” in such religious contexts, the problem is that it is unclear just how the soul is supposed to differ from the mind. The terms are often even used interchangeably in many historical texts and by many philosophers because it is unclear what else the soul could be other than “the mental substance.” It is difficult to describe the soul in any way that doesn’t make it sound like what we mean by the mind. After all, that’s what many believe goes on after bodily death; namely, conscious mental activity. Granted, the term “soul” carries a more theological connotation, but it doesn’t follow that the words “soul” and “mind” refer to entirely different things. Somewhat related to the issue of immortality, the existence of near-death experiences is also used as some evidence for dualism and immortality. Those who report such experiences describe a peaceful moving toward a light through a tunnel-like structure, or seeing doctors working on their bodies while hovering over them in an emergency room (sometimes akin to what is called an “out-of-body experience”). In response, materialists will point out that such experiences can be artificially induced in various experimental situations, and that starving the brain of oxygen is known to cause hallucinations.

Various paranormal and psychic phenomena, such as clairvoyance, faith healing, and mind-reading, are sometimes also cited as evidence for dualism. However, materialists (and even many dualists) will likely first wish to be skeptical of the alleged phenomena themselves, for numerous reasons. There are many modern-day charlatans who should make us seriously question whether there really are such phenomena or mental abilities in the first place. Second, it is not quite clear just how dualism follows from such phenomena even if they are genuine. A materialist, or physicalist at least, might insist that though such phenomena are puzzling and perhaps currently difficult to explain in physical terms, they are nonetheless ultimately physical in nature; for example, having to do with very unusual transfers of energy in the physical world. The dualist advantage is perhaps not as obvious as one might think, and we need not jump to supernatural conclusions so quickly.

i. Substance Dualism and Objections

Interactionist Dualism or simply “interactionism” is the most common form of “substance dualism” and its name derives from the widely accepted fact that mental states and bodily states causally interact with each other. For example, my desire to drink something cold causes my body to move to the refrigerator and get something to drink and, conversely, kicking me in the shin will cause me to feel a pain and get angry. Due to Descartes’ influence, it is also sometimes referred to as “Cartesian dualism.” Knowing nothing about just where such causal interaction could take place, Descartes speculated that it was through the pineal gland, a now almost humorous conjecture. But a modern-day interactionist would certainly wish to treat various areas of the brain as the location of such interactions.

Three serious objections are briefly worth noting here. The first is simply the issue of just how such radically different substances do or could causally interact. How could something non-physical causally interact with something physical, such as the brain? No such explanation is forthcoming, or is perhaps even possible, according to materialists. Moreover, if causation involves a transfer of energy from cause to effect, then how is that possible if the mind is really non-physical? Gilbert Ryle (1949) mockingly calls the Cartesian view about the nature of mind a belief in the “ghost in the machine.” Second, assuming that some such energy transfer makes any sense at all, it is also then often alleged that interactionism is inconsistent with the scientifically well-established Conservation of Energy principle, which says that the total amount of energy in the universe, or any controlled part of it, remains constant. So any loss of energy in the cause must be passed along as a corresponding gain of energy in the effect, as in standard billiard ball examples. But if interactionism is true, then when mental events cause physical events, energy would literally come into the physical world. On the other hand, when bodily events cause mental events, energy would literally go out of the physical world. At the least, there is a very peculiar and unique notion of energy involved, unless one wished, even more radically, to deny the conservation principle itself. Third, some materialists might also use the well-known fact that brain damage (even to very specific areas of the brain) causes mental defects as a serious objection to interactionism (and thus as support for materialism). This has of course been known for many centuries, but the level of detailed knowledge has increased dramatically in recent years. Now a dualist might reply that such phenomena do not absolutely refute her metaphysical position since it could be replied that damage to the brain simply causes corresponding damage to the mind. However, this raises a host of other questions: Why not opt for the simpler explanation, i.e., that brain damage causes mental damage because mental processes simply are brain processes? If the non-physical mind is damaged when brain damage occurs, how does that leave one’s mind according to the dualist’s conception of an afterlife? Will the severe amnesic at the end of life on Earth retain such a deficit in the afterlife? If proper mental functioning still depends on proper brain functioning, then is dualism really in any better position to offer hope for immortality?
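
The second objection can be put schematically. What follows is only an illustrative gloss on the reasoning just given, with invented notation rather than a derivation from any particular physical theory:

  Conservation: E_physical(t1) = E_physical(t2) for all times t1 and t2 in a causally closed system.
  Mind-to-body causation: the brain would gain energy with no physical source, so E_physical would increase.
  Body-to-mind causation: energy would leave the physical world, so E_physical would decrease.

Either direction of interaction breaks the equality, and the dilemma just noted (a peculiar new kind of energy, or abandoning the principle itself) falls directly out of this schema.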

It should be noted that there is also another less popular form of substance dualism called parallelism, which denies the causal interaction between the non-physical mental and physical bodily realms. It seems fair to say that it encounters even more serious objections than interactionism.

ii. Other Forms of Dualism

While a detailed survey of all varieties of dualism is beyond the scope of this entry, it is at least important to note here that the main and most popular form of dualism today is called property dualism. Substance dualism has largely fallen out of favor, at least in most philosophical circles, though there are important exceptions (e.g., Swinburne 1986, Foster 1996) and it often continues to be tied to various theological positions. Property dualism, on the other hand, is a more modest version of dualism and it holds that there are mental properties (that is, characteristics or aspects of things) that are neither identical with nor reducible to physical properties. There are actually several different kinds of property dualism, but what they have in common is the idea that conscious properties, such as the color qualia involved in a conscious visual experience, cannot be explained in purely physical terms and, thus, are not themselves to be identified with any brain state or process.

Two other views worth mentioning are epiphenomenalism and panpsychism. The latter is the somewhat eccentric view that all things in physical reality, even down to micro-particles, have some mental properties. All substances have a mental aspect, though it is not always clear exactly how to characterize or test such a claim. Epiphenomenalism holds that mental events are caused by brain events but those mental events are mere “epiphenomena” which do not, in turn, cause anything physical at all, despite appearances to the contrary (for a recent defense, see Robinson 2004).

Finally, although not a form of dualism, idealism holds that there are only immaterial mental substances, a view more common in the Eastern tradition. The most prominent Western proponent of idealism was the 18th-century empiricist George Berkeley. The idealist agrees with the substance dualist, however, that minds are non-physical, but then denies the existence of mind-independent physical substances altogether. Such a view faces a number of serious objections, and it also requires a belief in the existence of God.

b. Materialism: General Support

Some form of materialism is probably much more widely held today than in centuries past. No doubt part of the reason for this has to do with the explosion in scientific knowledge about the workings of the brain and its intimate connection with consciousness, including the close connection between brain damage and various states of consciousness. Brain death is now the main criterion for when someone dies. Stimulation to specific areas of the brain results in modality specific conscious experiences. Indeed, materialism often seems to be a working assumption in neurophysiology. Imagine saying to a neuroscientist “you are not really studying the conscious mind itself” when she is examining the workings of the brain during an fMRI. The idea is that science is showing us that conscious mental states, such as visual perceptions, are simply identical with certain neuro-chemical brain processes; much like the science of chemistry taught us that water just is H2O.

There are also theoretical factors on the side of materialism, such as adherence to the so-called “principle of simplicity” which says that if two theories can equally explain a given phenomenon, then we should accept the one which posits fewer objects or forces. In this case, even if dualism could equally explain consciousness (which would of course be disputed by materialists), materialism is clearly the simpler theory in so far as it does not posit any objects or processes over and above physical ones. Materialists will wonder why there is a need to believe in the existence of such mysterious non-physical entities. Moreover, in the aftermath of the Darwinian revolution, it would seem that materialism is on even stronger ground provided that one accepts basic evolutionary theory and the notion that most animals are conscious. Given the similarities between the more primitive parts of the human brain and the brains of other animals, it seems most natural to conclude that, through evolution, increasing layers of brain areas correspond to increased mental abilities. For example, having a well developed prefrontal cortex allows humans to reason and plan in ways not available to dogs and cats. It also seems fairly uncontroversial to hold that we should be materialists about the minds of animals. If so, then it would be odd indeed to hold that non-physical conscious states suddenly appear on the scene with humans.

There are still, however, a number of much discussed and important objections to materialism, most of which question the notion that materialism can adequately explain conscious experience.

i. Objection 1: The Explanatory Gap and The Hard Problem

Joseph Levine (1983) coined the expression “the explanatory gap” to express a difficulty for any materialistic attempt to explain consciousness. Although not concerned to reject the metaphysics of materialism, Levine gives eloquent expression to the idea that there is a key gap in our ability to explain the connection between phenomenal properties and brain properties (see also Levine 1993, 2001). The basic problem is that it is, at least at present, very difficult for us to understand the relationship between brain properties and phenomenal properties in any explanatorily satisfying way, especially given the fact that it seems possible for one to be present without the other. There is an odd kind of arbitrariness involved: Why or how does some particular brain process produce that particular taste or visual sensation? It is difficult to see any real explanatory connection between specific conscious states and brain states in a way that explains just how or why the former are identical with the latter. There is therefore an explanatory gap between the physical and mental. Levine argues that this difficulty in explaining consciousness is unique; that is, we do not have similar worries about other scientific identities, such as that “water is H2O” or that “heat is mean molecular kinetic energy.” There is “an important sense in which we can’t really understand how [materialism] could be true.” (2001: 68)

David Chalmers (1995) has articulated a similar worry by using the catchy phrase “the hard problem of consciousness,” which basically refers to the difficulty of explaining just how physical processes in the brain give rise to subjective conscious experiences. The “really hard problem is the problem of experience…How can we explain why there is something it is like to entertain a mental image, or to experience an emotion?” (1995: 201) Others have made similar points, as Chalmers acknowledges, but reference to the phrase “the hard problem” has now become commonplace in the literature. Unlike Levine, however, Chalmers is much more inclined to draw anti-materialist metaphysical conclusions from these and other considerations. Chalmers usefully distinguishes the hard problem of consciousness from what he calls the (relatively) “easy problems” of consciousness, such as the ability to discriminate and categorize stimuli, the ability of a cognitive system to access its own internal states, and the difference between wakefulness and sleep. The easy problems generally have more to do with the functions of consciousness, but Chalmers urges that solving them does not touch the hard problem of phenomenal consciousness. Most philosophers, according to Chalmers, are really only addressing the easy problems, perhaps merely with something like Block’s “access consciousness” in mind. Their theories ignore phenomenal consciousness.

There are many responses by materialists to the above charges, but it is worth emphasizing that Levine, at least, does not reject the metaphysics of materialism. Instead, he sees the “explanatory gap [as] primarily an epistemological problem” (2001: 10). That is, it is primarily a problem having to do with knowledge or understanding. This concession is still important at least to the extent that one is concerned with the larger related metaphysical issues discussed in section 3a, such as the possibility of immortality.

Perhaps most important for the materialist, however, is recognition of the fact that different concepts can pick out the same property or object in the world (Loar 1990, 1997). Out in the world there is only the one “stuff,” which we can conceptualize either as “water” or as “H2O.” The traditional distinction, made most notably by Gottlob Frege in the late 19th century, between “meaning” (or “sense”) and “reference” is also relevant here. Two or more concepts, which can have different meanings, can refer to the same property or object, much like “Venus” and “The Morning Star.” Materialists, then, explain that it is essential to distinguish between mental properties and our concepts of those properties. By analogy, there are so-called “phenomenal concepts” which use a phenomenal or “first-person” property to refer to some conscious mental state, such as a sensation of red (Alter and Walter 2007). In contrast, we can also use various concepts couched in physical or neurophysiological terms to refer to that same mental state from the third-person point of view. There is thus but one conscious mental state which can be conceptualized in two different ways: either by employing first-person experiential phenomenal concepts or by employing third-person neurophysiological concepts. It may then just be a “brute fact” about the world that there are such identities, and the appearance of arbitrariness between brain properties and mental properties is just that – an apparent problem leading many to wonder about the alleged explanatory gap. Qualia would then still be identical to physical properties. Moreover, this response provides a diagnosis for why there even seems to be such a gap; namely, that we use very different concepts to pick out the same property. Science will be able, in principle, to close the gap and solve the hard problem of consciousness in much the way that we now have a very good understanding of why “water is H2O” or “heat is mean molecular kinetic energy,” an understanding that was lacking centuries ago. Maybe the hard problem isn’t so hard after all – it will just take some more time. After all, the science of chemistry didn’t develop overnight and we are relatively early in the history of neurophysiology and our understanding of phenomenal consciousness. (See Shear 1997 for many more specific responses to the hard problem, but also for Chalmers’ counter-replies.)

ii. Objection 2: The Knowledge Argument

There is a pair of very widely discussed, and arguably related, objections to materialism which come from the seminal writings of Thomas Nagel (1974) and Frank Jackson (1982, 1986). These arguments, especially Jackson’s, have come to be known as examples of the “knowledge argument” against materialism, due to their clear emphasis on the epistemological (that is, knowledge-related) limitations of materialism. Like Levine, Nagel does not reject the metaphysics of materialism. Jackson had originally intended for his argument to yield a dualistic conclusion, but he no longer holds that view. The general pattern of each argument is to assume that all the physical facts are known about some conscious mind or conscious experience. Yet, the argument goes, not all is known about the mind or experience. It is then inferred that the missing knowledge is non-physical in some sense, which is surely an anti-materialist conclusion.

Nagel imagines a future where we know everything physical there is to know about some other conscious creature’s mind, such as a bat. However, it seems clear that we would still not know something crucial; namely, “what it is like to be a bat.” It will not do to imagine what it is like for us to be a bat. We would still not know what it is like to be a bat from the bat’s subjective or first-person point of view. The idea, then, is that if we accept the hypothesis that we know all of the physical facts about bat minds, and yet some knowledge about bat minds is left out, then materialism is inherently flawed when it comes to explaining consciousness. Even in an ideal future in which everything physical is known by us, something would still be left out. Jackson’s somewhat similar, but no less influential, argument begins by asking us to imagine a future where a person, Mary, is kept in a black and white room from birth, during which time she becomes a brilliant neuroscientist and an expert on color perception. Mary never sees red, for example, but she learns all of the physical facts, including everything neurophysiological, about human color vision. Eventually she is released from the room and sees red for the first time. Jackson argues that it is clear that Mary comes to learn something new; namely, to use Nagel’s famous phrase, what it is like to experience red. This is a new piece of knowledge and hence she must have come to know some non-physical fact (since, by hypothesis, she already knew all of the physical facts). Thus, not all knowledge about the conscious mind is physical knowledge.
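
The argument is often laid out as a simple premise-conclusion schema. The following reconstruction is a common textbook rendering, not Jackson’s own wording:

  1. Before her release, Mary knows all of the physical facts about human color vision.
  2. Upon seeing red for the first time, Mary learns something new about human color vision.
  3. Therefore, not everything there is to know about human color vision is a physical fact.
  4. Therefore, materialism, understood as the thesis that all facts are physical facts, is false.

The materialist responses surveyed below typically deny that what Mary gains in premise 2 is knowledge of a new, non-physical fact.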

The influence and the quantity of work that these ideas have generated cannot be exaggerated. Numerous materialist responses to Nagel’s argument have been presented (such as Van Gulick 1985), and there is now a very useful anthology devoted entirely to Jackson’s knowledge argument (Ludlow et al. 2004). Some materialists have wondered whether we should concede up front that Mary wouldn’t be able to imagine the color red even before leaving the room, so that maybe she wouldn’t even be surprised upon seeing red for the first time. Various suspicions about the nature and effectiveness of such thought experiments also usually accompany this response. More commonly, however, materialists reply by arguing that Mary does not learn a new fact when seeing red for the first time, but rather learns the same fact in a different way. Recalling the distinction made in section 3b.i between concepts and objects or properties, the materialist will urge that there is only the one physical fact about color vision, but there are two ways to come to know it: either by employing neurophysiological concepts or by actually undergoing the relevant experience and so by employing phenomenal concepts. We might say that Mary, upon leaving the black and white room, becomes acquainted with the same neural property as before, but only now from the first-person point of view. The property itself isn’t new; only the perspective, or what philosophers sometimes call the “mode of presentation,” is different. In short, coming to learn or know something new does not entail learning some new fact about the world. Analogies are again given in other, less controversial areas; for example, one can come to know about some historical fact or event by reading a (reliable) third-person historical account or by having observed that event oneself. But there is still only the one objective fact under two different descriptions. Finally, it is crucial to remember that, according to most, the metaphysics of materialism remains unaffected. Drawing a metaphysical conclusion from such purely epistemological premises is always a questionable practice. Nagel’s argument doesn’t show that bat mental states are not identical with bat brain states. Indeed, a materialist might even expect the conclusion that Nagel draws; after all, given that our brains are so different from bat brains, it almost seems natural for there to be certain aspects of bat experience that we could never fully comprehend. Only the bat actually undergoes the relevant brain processes. Similarly, Jackson’s argument doesn’t show that Mary’s color experience is distinct from her brain processes.

Despite the plethora of materialist responses, vigorous debate continues as there are those who still think that something profound must always be missing from any materialist attempt to explain consciousness; namely, that understanding subjective phenomenal consciousness is an inherently first-person activity which cannot be captured by any objective third-person scientific means, no matter how much scientific knowledge is accumulated. Some knowledge about consciousness is essentially limited to first-person knowledge. Such a sense, no doubt, continues to fuel the related anti-materialist intuitions raised in the previous section. Perhaps consciousness is simply a fundamental or irreducible part of nature in some sense (Chalmers 1996). (For more see Van Gulick 1993.)

iii. Objection 3: Mysterianism

Finally, some go so far as to argue that we are simply not capable of solving the problem of consciousness (McGinn 1989, 1991, 1995). In short, “mysterians” believe that the hard problem can never be solved because of human cognitive limitations; the explanatory gap can never be filled. Once again, however, McGinn does not reject the metaphysics of materialism, but rather argues that we are “cognitively closed” with respect to this problem, much as a rat or dog is cognitively incapable of solving, or even understanding, calculus problems. More specifically, McGinn claims that we are cognitively closed as to how the brain produces conscious awareness. McGinn concedes that some brain property produces conscious experience, but we cannot understand how this is so or even know what that brain property is. Our concept-forming mechanisms simply will not allow us to grasp the physical and causal basis of consciousness. We are not conceptually suited to be able to do so.

McGinn does not entirely rest his argument on past failed attempts at explaining consciousness in materialist terms; instead, he presents another argument for his admittedly pessimistic conclusion. McGinn observes that we do not have a mental faculty that can access both consciousness and the brain. We access consciousness through introspection or the first-person perspective, but our access to the brain is through the use of outer spatial senses (e.g., vision) or a more third-person perspective. Thus we have no way to access both the brain and consciousness together, and therefore any explanatory link between them is forever beyond our reach.

Materialist responses are numerous. First, one might wonder why we can’t combine the two perspectives within certain experimental contexts. Both first-person and third-person scientific data about the brain and consciousness can be acquired and used to solve the hard problem. Even if a single person cannot grasp consciousness from both perspectives at the same time, why can’t a plausible physicalist theory emerge from such a combined approach? Presumably, McGinn would say that we are not capable of putting such a theory together in any appropriate way. Second, despite McGinn’s protests to the contrary, many will view the problem of explaining consciousness as a merely temporary limit of our theorizing, and not something which is unsolvable in principle (Dennett 1991). Third, it may be that McGinn expects too much; namely, grasping some causal link between the brain and consciousness. After all, if conscious mental states are simply identical to brain states, then there may simply be a “brute fact” that really does not need any further explaining. Indeed, this is sometimes also said in response to the explanatory gap and the hard problem, as we saw earlier. It may even be that some form of dualism is presupposed in McGinn’s argument, to the extent that brain states are said to “cause” or “give rise to” consciousness, instead of using the language of identity. Fourth, McGinn’s analogy to lower animals and mathematics is not quite accurate. Rats, for example, have no concept whatsoever of calculus. It is not as if they can grasp it to some extent but just haven’t figured out the answer to some particular problem within mathematics. Rats are just completely oblivious to calculus problems. On the other hand, we humans obviously do have some grasp on consciousness and on the workings of the brain – just see the references at the end of this entry! It is not clear, then, why we should accept the extremely pessimistic and universally negative conclusion that we can never discover the answer to the problem of consciousness, or, more specifically, why we could never understand the link between consciousness and the brain.

iv. Objection 4: Zombies

Unlike many of the above objections to materialism, the appeal to the possibility of zombies is often taken as both a problem for materialism and as a more positive argument for some form of dualism, such as property dualism. The philosophical notion of a “zombie” basically refers to conceivable creatures which are physically indistinguishable from us but lack consciousness entirely (Chalmers 1996). It certainly seems logically possible for there to be such creatures: “the conceivability of zombies seems…obvious to me…While this possibility is probably empirically impossible, it certainly seems that a coherent situation is described; I can discern no contradiction in the description” (Chalmers 1996: 96). Philosophers often contrast what is logically possible (in the sense of “that which is not self-contradictory”) with what is empirically possible given the actual laws of nature. Thus, it is logically possible for me to jump fifty feet in the air, but not empirically possible. Philosophers often use the notion of “possible worlds,” i.e., different ways that the world might have been, in describing such non-actual situations or possibilities. The objection, then, typically proceeds from such a possibility to the conclusion that materialism is false because materialism would seem to rule out that possibility. It has been fairly widely accepted (since Kripke 1972) that all identity statements are necessarily true (that is, true in all possible worlds), and the same should therefore go for mind-brain identity claims. Since the possibility of zombies shows that mind-brain identity claims are not necessarily true, we should conclude that materialism is false. (See Identity Theory.)
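
The structure of the argument can be made explicit. The following schematization is a standard reconstruction, with P standing for the complete physical truth about the world and Q for some truth about phenomenal consciousness; it is not a quotation from Chalmers:

  1. P and not-Q is conceivable (a zombie world can be coherently described).
  2. If P and not-Q is conceivable, then P and not-Q is metaphysically possible.
  3. Therefore, a zombie world is possible: ◇(P ∧ ¬Q).
  4. If materialism is true, then the physical facts necessitate the phenomenal facts: □(P → Q).
  5. Since 3 contradicts 4, materialism is false.

The materialist replies below target, in turn, the step from conceivability to possibility (premise 2), the evidential value of such thought experiments generally, and the initial appearance of conceivability (premise 1).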

It is impossible to do justice to all of the subtleties here. The literature in response to zombie, and related “conceivability,” arguments is enormous (see, for example, Hill 1997, Hill and McLaughlin 1999, Papineau 1998, 2002, Balog 1999, Block and Stalnaker 1999, Loar 1999, Yablo 1999, Perry 2001, Botterell 2001, Kirk 2005). A few lines of reply are as follows: First, it is sometimes objected that the conceivability of something does not really entail its possibility. Perhaps we can also conceive of water not being H2O, since there seems to be no logical contradiction in doing so, but, according to received wisdom from Kripke, that is really impossible. Perhaps, then, some things just seem possible but really aren’t. Much of the debate centers on various alleged similarities or dissimilarities between the mind-brain and water-H2O cases (or other such scientific identities). Indeed, the entire issue of the exact relationship between “conceivability” and “possibility” is the subject of an important recently published anthology (Gendler and Hawthorne 2002). Second, even if zombies are conceivable in the sense of logically possible, how can we draw a substantial metaphysical conclusion about the actual world? There is often suspicion on the part of materialists about what, if anything, such philosophers’ “thought experiments” can teach us about the nature of our minds. It seems that one could take virtually any philosophical or scientific theory about almost anything, conceive that it is possibly false, and then conclude that it is actually false. Something, perhaps, is generally wrong with this way of reasoning. Third, as we saw earlier (3b.i), there may be a very good reason why such zombie scenarios seem possible; namely, that we do not (at least, not yet) see what the necessary connection is between neural events and conscious mental events. On the one side, we are dealing with scientific third-person concepts and, on the other, we are employing phenomenal concepts. We are, perhaps, simply currently not in a position to understand completely such a necessary connection.

Debate and discussion on all four objections remains very active.

v. Varieties of Materialism

Despite the apparent simplicity of materialism, say, in terms of the identity between mental states and neural states, the fact is that there are many different forms of materialism. While a detailed survey of all varieties is beyond the scope of this entry, it is at least important to acknowledge the commonly drawn distinction between two kinds of “identity theory”: token-token and type-type materialism. Type-type identity theory is the stronger thesis and says that mental properties, such as “having a desire to drink some water” or “being in pain,” are literally identical with a brain property of some kind. Such identities were originally meant to be understood as on a par with, for example, the scientific identity between “being water” and “being composed of H2O” (Place 1956, Smart 1959). However, this view historically came under serious assault due to the fact that it seems to rule out the so-called “multiple realizability” of conscious mental states. The idea is simply that it seems perfectly possible for there to be other conscious beings (e.g., aliens, radically different animals) who can have those same mental states but who also are radically different from us physiologically (Fodor 1974). It seems that commitment to type-type identity theory led to the undesirable result that only organisms with brains like ours can have conscious states. Somewhat more technically, most materialists wish to leave room for the possibility that mental properties can be “instantiated” in different kinds of organisms. (But for more recent defenses of type-type identity theory see Hill and McLaughlin 1999, Papineau 1994, 1995, 1998, Polger 2004.) As a consequence, a more modest “token-token” identity theory has become preferable to many materialists. This view simply holds that each particular conscious mental event in some organism is identical with some particular brain process or event in that organism. This seems to preserve much of what the materialist wants yet allows for the multiple realizability of conscious states, because both the human and the alien can still have a conscious desire for something to drink while each mental event is identical with a (different) physical state in each organism.

Taking the notion of multiple realizability very seriously has also led many to embrace functionalism, which is the view that conscious mental states should really only be identified with the functional role they play within an organism. For example, conscious pains are defined in terms of their typical inputs and outputs, such as being caused by bodily damage and causing avoidance behavior, as well as in terms of their relationship to other mental states. It is normally viewed as a form of materialism since virtually all functionalists also believe, like the token-token theorist, that something physical ultimately realizes that functional state in the organism, but functionalism does not, by itself, entail that materialism is true. Critics of functionalism, however, have long argued that such purely functional accounts cannot adequately explain the essential “feel” of conscious states, or that it seems possible to have two functionally equivalent creatures, one of whom lacks qualia entirely (Block 1980a, 1980b, Chalmers 1996; see also Shoemaker 1975, 1981).
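
Because functionalism characterizes a mental state entirely by its causal role, the view (and the multiple realizability that motivates it) can be illustrated with a short sketch. The following Python toy model is only an illustration of the idea; the class names and the contents of the role are invented for this purpose and are drawn from no particular source:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class FunctionalRole:
        # A state individuated only by what typically causes it
        # and what it typically causes.
        typical_causes: tuple
        typical_effects: tuple

    # The functionalist's "pain": note that nothing here mentions
    # neurons or any other physical realizer.
    PAIN = FunctionalRole(
        typical_causes=("bodily damage",),
        typical_effects=("avoidance behavior", "wincing"),
    )

    class HumanBrain:
        # Realized by neural activity in humans...
        def state_for(self, stimulus):
            return PAIN if stimulus == "bodily damage" else None

    class SiliconAlien:
        # ...and by entirely different hardware here, yet the
        # functional state occupied is the very same one.
        def state_for(self, stimulus):
            return PAIN if stimulus == "bodily damage" else None

    # Two physically different systems occupy the same functional state.
    assert HumanBrain().state_for("bodily damage") == SiliconAlien().state_for("bodily damage") == PAIN

The critics’ point in the preceding paragraph can be read directly off the sketch: nothing in the definition of the role guarantees that occupying it feels like anything.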

Some materialists even deny the very existence of mind and mental states altogether, at least in the sense that the very concept of consciousness is muddled (Wilkes 1984, 1988) or that the mentalistic notions found in folk psychology, such as desires and beliefs, will eventually be eliminated and replaced by physicalistic terms as neurophysiology matures into the future (Churchland 1983). This is meant as analogous to past similar eliminations based on deeper scientific understanding; we no longer need to speak of “ether” or “phlogiston,” for example. Other eliminativists, more modestly, argue that there is no such thing as qualia when they are defined in certain problematic ways (Dennett 1988).

Finally, it should also be noted that not all materialists believe that conscious mentality can be explained in terms of the physical, at least in the sense that the former can be “reduced” to the latter. On this view, materialism is true as an ontological or metaphysical doctrine, but facts about the mind cannot be deduced from facts about the physical world (Boyd 1980, Van Gulick 1992). In some ways, this might be viewed as a relatively harmless variation on materialist themes, but others object to the very coherence of this form of materialism (Kim 1987, 1998). Indeed, the line between such “non-reductive materialism” and property dualism is not always easy to draw, partly because the entire notion of “reduction” is ambiguous and a very complex topic in its own right. On a related front, some materialists are happy enough to talk about a somewhat weaker “supervenience” relation between mind and matter. Although “supervenience” is a highly technical notion with many variations, the idea is basically one of dependence (instead of identity); for example, the mental depends on the physical in the sense that any mental change must be accompanied by some physical change (see Kim 1993).

4. Specific Theories of Consciousness

Most specific theories of consciousness tend to be reductionist in some sense. The classic notion at work is that consciousness or individual conscious mental states can be explained in terms of something else or in some other terms. This section will focus on several prominent contemporary reductionist theories. We should, however, distinguish between those who attempt such a reduction directly in physicalistic, such as neurophysiological, terms and those who do so in mentalistic terms, such as by using unconscious mental states or other cognitive notions.

a. Neural Theories

The more direct reductionist approach can be seen in various, more specific, neural theories of consciousness. Perhaps best known is the theory offered by Francis Crick and Christof Koch (1990; see also Crick 1994, Koch 2004). The basic idea is that mental states become conscious when large numbers of neurons fire in synchrony and all have oscillations within the 35-75 hertz range (that is, 35-75 cycles per second). However, many philosophers and scientists have put forth other candidates for what, specifically, to identify in the brain with consciousness. This vast enterprise has come to be known as the search for the “neural correlates of consciousness” or NCCs (see section 5b below for more). The overall idea is to show how one or more specific kinds of neuro-chemical activity can underlie and explain conscious mental activity (Metzinger 2000). Of course, mere “correlation” is not enough for a fully adequate neural theory and explaining just what counts as an NCC turns out to be more difficult than one might think (Chalmers 2000). Even Crick and Koch have acknowledged that they, at best, provide a necessary condition for consciousness, and that such firing patterns are not automatically sufficient for having conscious experience.
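To give a concrete sense of what the 35-75 hertz claim involves, the following is a minimal sketch, offered purely for illustration and in no way Crick and Koch’s own procedure: it estimates, for a simulated signal, what fraction of spectral power falls within that frequency band. All names and numbers here are invented for the example.

```python
import numpy as np

def band_power_fraction(signal, sample_rate_hz, low=35.0, high=75.0):
    """Fraction of the signal's spectral power lying between low and high Hz.

    A toy FFT estimate; real NCC studies use far more careful
    spectral and statistical methods than this.
    """
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate_hz)
    band = (freqs >= low) & (freqs <= high)
    return spectrum[band].sum() / spectrum.sum()

# Simulated one-second recording at 1 kHz: a 40 Hz oscillation plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
sim = np.sin(2 * np.pi * 40.0 * t) + 0.1 * rng.standard_normal(t.size)
print(band_power_fraction(sim, sample_rate_hz=1000))  # typically near 1.0
```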

b. Representational Theories of Consciousness

Many current theories attempt to reduce consciousness in mentalistic terms. One broadly popular approach along these lines is to reduce consciousness to “mental representations” of some kind. The notion of a “representation” is of course very general and can be applied to photographs, signs, and various natural objects, such as the rings inside a tree. Much of what goes on in the brain, however, might also be understood in a representational way; for example, as mental events representing outer objects partly because they are caused by such objects in, say, cases of veridical visual perception. More specifically, philosophers will often call such representational mental states “intentional states” which have representational content; that is, mental states which are “about something” or “directed at something” as when one has a thought about the house or a perception of the tree. Although intentional states are sometimes contrasted with phenomenal states, such as pains and color experiences, it is clear that many conscious states, such as visual perceptions, have both phenomenal and intentional properties. It should be noted that the relation between intentionality and consciousness is itself a major ongoing area of dispute with some arguing that genuine intentionality actually presupposes consciousness in some way (Searle 1992, Siewart 1998, Horgan and Tienson 2002) while most representationalists insist that intentionality is prior to consciousness (Gennaro 2012, chapter two).

The general view that we can explain conscious mental states in terms of representational or intentional states is called “representationalism.” Although not automatically reductionist in spirit, most versions of representationalism do indeed attempt such a reduction. Most representationalists, then, believe that there is room for a kind of “second-step” reduction to be filled in later by neuroscience. The other related motivation for representational theories of consciousness is that many believe that an account of representation or intentionality can more easily be given in naturalistic terms, such as causal theories whereby mental states are understood as representing outer objects in virtue of some reliable causal connection. The idea, then, is that if consciousness can be explained in representational terms and representation can be understood in purely physical terms, then there is the promise of a reductionist and naturalistic theory of consciousness. Most generally, however, we can say that a representationalist will typically hold that the phenomenal properties of experience (that is, the “qualia,” the “what it is like” of experience, or “phenomenal character”) can be explained in terms of the experiences’ representational properties. Put another way, conscious mental states have no mental properties other than their representational properties, so that two conscious states with all the same representational properties will not differ phenomenally. For example, when I look at the blue sky, what it is like for me to have a conscious experience of the sky is simply identical with my experience’s representation of the blue sky.

i. First-Order Representationalism

A first-order representational (FOR) theory of consciousness is a theory that attempts to explain conscious experience primarily in terms of world-directed (or first-order) intentional states. Probably the two most cited FOR theories of consciousness are those of Fred Dretske (1995) and Michael Tye (1995, 2000), though there are many others as well (e.g., Harman 1990, Kirk 1994, Byrne 2001, Thau 2002, Droege 2003). Tye’s theory is more fully worked out and so will be the focus of this section. Like other FOR theorists, Tye holds that the representational content of my conscious experience (that is, what my experience is about or directed at) is identical with the phenomenal properties of experience. Aside from reductionistic motivations, Tye and other FOR representationalists often use the somewhat technical notion of the “transparency of experience” as support for their view (Harman 1990). This is an argument based on the phenomenological first-person observation, which goes back to Moore (1903), that when one turns one’s attention away from, say, the blue sky and onto one’s experience itself, one is still only aware of the blueness of the sky. The experience itself is not blue; rather, one “sees right through” one’s experience to its representational properties, and there is nothing else to one’s experience over and above such properties.

Whatever the merits and exact nature of the argument from transparency (see Kind 2003), it is clear, of course, that not all mental representations are conscious, so the key question eventually becomes: What exactly distinguishes conscious from unconscious mental states (or representations)? What makes a mental state a conscious mental state? Here Tye defends what he calls “PANIC theory.” The acronym “PANIC” stands for poised, abstract, non-conceptual, intentional content. Without probing into every aspect of PANIC theory, Tye holds that at least some of the representational content in question is non-conceptual (N), which is to say that the subject can lack the concept for the properties represented by the experience in question, such as an experience of a certain shade of red that one has never seen before. Actually, the exact nature or even existence of non-conceptual content of experience is itself a highly debated and difficult issue in philosophy of mind (Gunther 2003).  Gennaro (2012), for example, defends conceptualism and connects it in various ways to the higher-order thought theory of consciousness (see section 4b.ii). Conscious states clearly must also have “intentional content” (IC) for any representationalist. Tye also asserts that such content is “abstract” (A) and not necessarily about particular concrete objects. This condition is needed to handle cases of hallucinations, where there are no concrete objects at all or cases where different objects look phenomenally alike. Perhaps most important for mental states to be conscious, however, is that such content must be “poised” (P), which is an importantly functional notion. The “key idea is that experiences and feelings...stand ready and available to make a direct impact on beliefs and/or desires. For example…feeling hungry… has an immediate cognitive effect, namely, the desire to eat….States with nonconceptual content that are not so poised lack phenomenal character [because]…they arise too early, as it were, in the information processing” (Tye 2000: 62).

One objection to Tye’s theory is that it does not really address the hard problem of phenomenal consciousness (see section 3b.i). This is partly because what really seems to be doing most of the work on Tye’s PANIC account is the very functional sounding “poised” notion, which is perhaps closer to Block’s access consciousness (see section 1) and is therefore not necessarily able to explain phenomenal consciousness (see Kriegel 2002). In short, it is difficult to see just how Tye’s PANIC account might not equally apply to unconscious representations and thus how it really explains phenomenal consciousness.

Other standard objections to Tye’s theory as well as to other FOR accounts include the concern that it does not cover all kinds of conscious states. Some conscious states seem not to be “about” anything, such as pains, anxiety, or after-images, and so would be non-representational conscious states. If so, then conscious experience cannot generally be explained in terms of representational properties (Block 1996). Tye responds that pains, itches, and the like do represent, in the sense that they represent parts of the body. And after-images, hallucinations, and the like either misrepresent (which is still a kind of representation) or the conscious subject still takes them to have representational properties from the first-person point of view. Indeed, Tye (2000) admirably goes to great lengths and argues convincingly in response to a whole host of alleged counter-examples to representationalism. Historically among them are various hypothetical cases of inverted qualia (see Shoemaker 1982), the mere possibility of which is sometimes taken as devastating to representationalism. These are cases where behaviorally indistinguishable individuals have inverted color perceptions of objects, such that person A visually experiences a lemon the way that person B experiences a ripe tomato with respect to their color, and so on for all yellow and red objects. Isn’t it possible that there are two individuals whose color experiences are inverted with respect to the objects of perception? (For more on the importance of color in philosophy, see Hardin 1986.)

A somewhat different twist on the inverted spectrum is famously put forth in Block’s (1990) Inverted Earth case. On Inverted Earth every object has the complementary color to the one it has here, but we are asked to imagine that a person is equipped with color-inverting lenses and then sent to Inverted Earth completely ignorant of those facts. Since the color inversions cancel out, the phenomenal experiences remain the same, yet there certainly seem to be different representational properties of objects involved. The strategy on the part of critics, in short, is to think of counter-examples (either actual or hypothetical) whereby there is a difference between the phenomenal properties in experience and the relevant representational properties in the world. Such objections can, perhaps, be answered by Tye and others in various ways, but significant debate continues (Macpherson 2005). Intuitions also dramatically differ as to the very plausibility and value of such thought experiments. (For more, see Seager 1999, chapters 6 and 7. See also Chalmers 2004 for an excellent discussion of the dizzying array of possible representationalist positions.)

ii. Higher-Order Representationalism

As we have seen, one question that should be answered by any theory of consciousness is: What makes a mental state a conscious mental state? There is a long tradition that has attempted to understand consciousness in terms of some kind of higher-order awareness. For example, John Locke (1689/1975) once said that “consciousness is the perception of what passes in a man’s own mind.” This intuition has been revived by a number of philosophers (Rosenthal, 1986, 1993b, 1997, 2000, 2004, 2005; Gennaro 1996a, 2012; Armstrong, 1968, 1981; Lycan, 1996, 2001). In general, the idea is that what makes a mental state conscious is that it is the object of some kind of higher-order representation (HOR). A mental state M becomes conscious when there is a HOR of M. A HOR is a “meta-psychological” state, i.e., a mental state directed at another mental state. So, for example, my desire to write a good encyclopedia entry becomes conscious when I am (non-inferentially) “aware” of the desire. Intuitively, it seems that conscious states, as opposed to unconscious ones, are mental states that I am “aware of” in some sense. This is sometimes referred to as the Transitivity Principle. Any theory which attempts to explain consciousness in terms of higher-order states is known as a higher-order (HO) theory of consciousness. It is best initially to use the more neutral term “representation” because there are a number of different kinds of higher-order theory, depending upon how one characterizes the HOR in question. HO theories, thus, attempt to explain consciousness in mentalistic terms, that is, by reference to such notions as “thoughts” and “awareness.” Conscious mental states arise when two unconscious mental states are related in a certain specific way; namely, that one of them (the HOR) is directed at the other (M). HO theorists are united in the belief that their approach can better explain consciousness than any purely FOR theory, which has significant difficulty in explaining the difference between unconscious and conscious mental states.

There are various kinds of HO theory with the most common division between higher-order thought (HOT) theories and higher-order perception (HOP) theories. HOT theorists, such as David M. Rosenthal, think it is better to understand the HOR as a thought of some kind. HOTs are treated as cognitive states involving some kind of conceptual component. HOP theorists urge that the HOR is a perceptual or experiential state of some kind (Lycan 1996) which does not require the kind of conceptual content invoked by HOT theorists. Partly due to Kant (1781/1965), HOP theory is sometimes referred to as “inner sense theory” as a way of emphasizing its sensory or perceptual aspect. Although HOT and HOP theorists agree on the need for a HOR theory of consciousness, they do sometimes argue for the superiority of their respective positions (such as in Rosenthal 2004, Lycan 2004, and Gennaro 2012). Some philosophers, however, have argued that the difference between these theories is perhaps not as important or as clear as some think it is (Güzeldere 1995, Gennaro 1996a, Van Gulick 2000).

A common initial objection to HOR theories is that they are circular and lead to an infinite regress. It might seem that the HOT theory results in circularity by defining consciousness in terms of HOTs. It also might seem that an infinite regress results because a conscious mental state must be accompanied by a HOT, which, in turn, must be accompanied by another HOT ad infinitum. However, the standard reply is that when a conscious mental state is a first-order world-directed state the higher-order thought (HOT) is not itself conscious; otherwise, circularity and an infinite regress would follow. When the HOT is itself conscious, there is a yet higher-order (or third-order) thought directed at the second-order state. In this case, we have introspection which involves a conscious HOT directed at an inner mental state. When one introspects, one's attention is directed back into one's mind. For example, what makes my desire to write a good entry a conscious first-order desire is that there is a (non-conscious) HOT directed at the desire. In this case, my conscious focus is directed at the entry and my computer screen, so I am not consciously aware of having the HOT from the first-person point of view. When I introspect that desire, however, I then have a conscious HOT (accompanied by a yet higher, third-order, HOT) directed at the desire itself (see Rosenthal 1986).
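The structure of this reply can be made vivid with a toy formalization. The sketch below is purely illustrative (the class, function, and variable names are invented here, and nothing in it is Rosenthal’s own apparatus): a state counts as conscious just in case some state is directed at it, and the targeting HOT need not itself be targeted, so no regress arises.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MentalState:
    content: str
    target: Optional["MentalState"] = None  # None for world-directed states

def is_conscious(state, all_states):
    """Toy Transitivity Principle: a state is conscious just in case some
    state in the system -- conscious or not -- is directed at it."""
    return any(other.target is state for other in all_states)

# A first-order desire plus an unconscious HOT directed at it.
desire = MentalState("desire to write a good entry")
hot = MentalState("awareness that I have that desire", target=desire)
states = [desire, hot]

print(is_conscious(desire, states))  # True: the HOT targets the desire
print(is_conscious(hot, states))     # False: absent a third-order thought,
                                     # the HOT itself remains unconscious,
                                     # so no infinite regress is required
```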

Peter Carruthers (2000) has proposed another possibility within HO theory; namely, that it is better for various reasons to think of the HOTs as dispositional states instead of the standard view that the HOTs are actual, though he also understands his “dispositional HOT theory” to be a form of HOP theory (Carruthers 2004). The basic idea is that the conscious status of an experience is due to its availability to higher-order thought. So “conscious experience occurs when perceptual contents are fed into a special short-term buffer memory store, whose function is to make those contents available to cause HOTs about themselves.” (Carruthers 2000: 228). Some first-order perceptual contents are available to a higher-order “theory of mind mechanism,” which transforms those representational contents into conscious contents. Thus, no actual HOT occurs. Instead, according to Carruthers, some perceptual states acquire a dual intentional content; for example, a conscious experience of red not only has a first-order content of “red,” but also has the higher-order content “seems red” or “experience of red.” Carruthers also makes interesting use of so-called “consumer semantics” in order to fill out his theory of phenomenal consciousness. The content of a mental state depends, in part, on the powers of the organisms which “consume” that state, e.g., the kinds of inferences which the organism can make when it is in that state. Daniel Dennett (1991) is sometimes credited with an earlier version of a dispositional account (see Carruthers 2000, chapter ten). Carruthers’ dispositional theory is often criticized by those who, among other things, do not see how the mere disposition toward a mental state can render it conscious (Rosenthal 2004; see also Gennaro 2004, 2012; for more, see Consciousness, Higher Order Theories of.)

It is worth briefly noting a few typical objections to HO theories (many of which can be found in Byrne 1997): First, and perhaps most common, is that various animals (and even infants) are not likely to have the conceptual sophistication required for HOTs, which would render animal (and infant) consciousness very unlikely (Dretske 1995, Seager 2004). Are cats and dogs capable of having complex higher-order thoughts such as “I am in mental state M”? Although most who bring forth this objection are not HO theorists, Peter Carruthers (1989) is one HO theorist who actually embraces the conclusion that (most) animals do not have phenomenal consciousness. Gennaro (1993, 1996) has replied to Carruthers on this point; for example, it is argued that the HOTs need not be as sophisticated as it might initially appear and there is ample comparative neurophysiological evidence supporting the conclusion that animals have conscious mental states. Most HO theorists do not wish to accept the absence of animal or infant consciousness as a consequence of holding the theory. The debate continues, however, in Carruthers (2000, 2005, 2008) and Gennaro (2004, 2009, 2012, chapters seven and eight).

A second objection has been referred to as the “problem of the rock” (Stubenberg 1998) and the “generality problem” (Van Gulick 2000, 2004), but it is originally due to Alvin Goldman (Goldman 1993). When I have a thought about a rock, it is certainly not true that the rock becomes conscious. So why should I suppose that a mental state becomes conscious when I think about it? This is puzzling to many and the objection forces HO theorists to explain just how adding the HO state changes an unconscious state into a conscious one. There have been, however, a number of responses to this kind of objection (Rosenthal 1997, Lycan 1996, Van Gulick 2000, 2004, Gennaro 2005, 2012, chapter four). A common theme is that there is a principled difference in the objects of the HO states in question. Rocks and the like are not mental states in the first place, and so HO theorists are first and foremost trying to explain how a mental state becomes conscious. The objects of the HO states must be “in the head.”

Third, the above leads somewhat naturally to an objection related to Chalmers’ hard problem (section 3b.i). It might be asked just how exactly any HO theory really explains the subjective or phenomenal aspect of conscious experience. How or why does a mental state come to have a first-person qualitative “what it is like” aspect by virtue of the presence of a HOR directed at it? It is probably fair to say that HO theorists have been slow to address this problem, though a number of overlapping responses have emerged (see also Gennaro 2005, 2012, chapter four, for more extensive treatment). Some argue that this objection misconstrues the main and more modest purpose of (at least, their) HO theories. The claim is that HO theories are theories of consciousness only in the sense that they are attempting to explain what differentiates conscious from unconscious states, i.e., in terms of a higher-order awareness of some kind. A full account of “qualitative properties” or “sensory qualities” (which can themselves be non-conscious) can be found elsewhere in their work, but is independent of their theory of consciousness (Rosenthal 1991, Lycan 1996, 2001). Thus, a full explanation of phenomenal consciousness does require more than a HO theory, but that is no objection to HO theories as such. Another response is that proponents of the hard problem unjustly raise the bar as to what would count as a viable explanation of consciousness so that any such reductivist attempt would inevitably fall short (Carruthers 2000, Gennaro 2012). Part of the problem, then, is a lack of clarity about what would even count as an explanation of consciousness (Van Gulick 1995; see also section 3b). Once this is clarified, the reply goes, the hard problem can indeed be solved. Moreover, anyone familiar with the literature knows that there are significant terminological difficulties in the use of various crucial terms which sometimes inhibit genuine progress (but see Byrne 2004 for some helpful clarification).

A fourth important objection to HO approaches is the question of how such theories can explain cases where the HO state might misrepresent the lower-order (LO) mental state (Byrne 1997, Neander 1998, Levine 2001, Block 2011). After all, if we have a representational relation between two states, it seems possible for misrepresentation or malfunction to occur. If it does, then what explanation can be offered by the HO theorist? If my LO state registers a red percept and my HO state registers a thought about something green due, say, to some neural misfiring, then what happens? Problems seem to loom for any answer a HO theorist might give, and the source of the difficulty is the HO theorist’s core claim that there is a representational relation between the LO and HO states. For example, if the HO theorist takes the option that the resulting conscious experience is reddish, then it seems that the HO state plays no role in determining the qualitative character of the experience. On the other hand, if the resulting experience is greenish, then the LO state seems irrelevant. Rosenthal and Weisberg hold that the HO state determines the qualitative properties even in cases when there is no LO state at all (Rosenthal 2005, 2011, Weisberg 2008, 2011a, 2011b). Gennaro (2012) argues that no conscious experience results in such cases and wonders, for example, how a sole (unconscious) HOT can result in a conscious state at all. He argues that there must be a match, complete or partial, between the LO and HO state in order for a conscious state to exist in the first place. This important objection forces HO theorists to be clearer about just how to view the relationship between the LO and HO states. Debate remains significant and ongoing, both over varieties of HO theory and over the above objections (see Gennaro 2004a). There is also interdisciplinary interest in how various HO theories might be realized in the brain (Gennaro 2012, chapter nine).

iii. Hybrid Representational Accounts

A related and increasingly popular version of representational theory holds that the meta-psychological state in question should be understood as intrinsic to (or part of) an overall complex conscious state. This stands in contrast to the standard view that the HO state is extrinsic to (that is, entirely distinct from) its target mental state. The assumption, made by Rosenthal for example, about the extrinsic nature of the meta-thought has increasingly come under attack, and thus various hybrid representational theories can be found in the literature. One motivation for this movement is growing dissatisfaction with standard HO theory’s ability to handle some of the objections addressed in the previous section. Another reason is renewed interest in a view somewhat closer to the one held by Franz Brentano (1874/1973) and various other followers, normally associated with the phenomenological tradition (Husserl 1913/1931, 1929/1960; Sartre 1956; see also Smith 1986, 2004). To varying degrees, these views have in common the idea that conscious mental states, in some sense, represent themselves, which then still involves having a thought about a mental state, just not a distinct or separate state. Thus, when one has a conscious desire for a cold glass of water, one is also aware that one is in that very state. The conscious desire both represents the glass of water and itself. It is this “self-representing” which makes the state conscious.

These theories can go by various names, which sometimes seem in conflict, and have added significantly in recent years to the acronyms which abound in the literature. For example, Gennaro (1996a, 2002, 2004, 2006, 2012) has argued that, when one has a first-order conscious state, the HOT is better viewed as intrinsic to the target state, so that we have a complex conscious state with parts. Gennaro calls this the “wide intrinsicality view” (WIV) and he also argues that Jean-Paul Sartre’s theory of consciousness can be understood in this way (Gennaro 2002). Gennaro holds that conscious mental states should be understood (as Kant might have today) as global brain states which are combinations of passively received perceptual input and presupposed higher-order conceptual activity directed at that input. Higher-order concepts in the meta-psychological thoughts are presupposed in having first-order conscious states. Robert Van Gulick (2000, 2004, 2006) has also explored the alternative that the HO state is part of an overall global conscious state. He calls such states “HOGS” (Higher-Order Global States) whereby a lower-order unconscious state is “recruited” into a larger state, which becomes conscious partly due to the implicit self-awareness that one is in the lower-order state. Both Gennaro and Van Gulick have suggested that conscious states can be understood materialistically as global states of the brain, and it would be better to treat the first-order state as part of the larger complex brain state. This general approach is also forcefully advocated by Uriah Kriegel (Kriegel 2003a, 2003b, 2005, 2006, 2009) and is even the subject of an entire anthology debating its merits (Kriegel and Williford 2006). Kriegel has used several different names for his “neo-Brentanian theory,” such as the SOMT (Same-Order Monitoring Theory) and, more recently, the “self-representational theory of consciousness.” To be sure, the notion of a mental state representing itself or a mental state with one part representing another part is in need of further development and is perhaps somewhat mysterious. Nonetheless, there is agreement among these authors that conscious mental states are, in some important sense, reflexive or self-directed. And, once again, there is keen interest in developing this model in a way that coheres with the latest neurophysiological research on consciousness. A point of emphasis is on the concept of global meta-representation within a complex brain state, and attempts are underway to identify just how such an account can be realized in the brain.

It is worth mentioning that this idea was also briefly explored by Thomas Metzinger who focused on the fact that consciousness “is something that unifies or synthesizes experience” (Metzinger 1995: 454). Metzinger calls this the process of “higher-order binding” and thus uses the acronym HOB. Others who hold some form of the self-representational view include Kobes (1995), Caston (2002), Williford (2006), Brook and Raymont (2006), and even Carruthers’ (2000) theory can be viewed in this light since he contends that conscious states have two representational contents. Thomas Natsoulas also has a series of papers defending a similar view, beginning with Natsoulas 1996. Some authors (such as Gennaro 2012) view this hybrid position as a modified version of HOT theory; indeed, Rosenthal (2004) has called it “intrinsic higher-order theory.” Van Gulick also clearly wishes to preserve the HO element in his HOGS. Others, such as Kriegel, are not inclined to call their views “higher-order” at all, preferring instead names such as the “same-order monitoring” or “self-representational” theory of consciousness. To some extent, this is a terminological dispute, but, despite important similarities, there are also key subtle differences between these hybrid alternatives. Like HO theorists, however, those who advocate this general approach all take very seriously the notion that a conscious mental state M is a state that subject S is (non-inferentially) aware that S is in. By contrast, one is obviously not aware of one’s unconscious mental states. Thus, there are various attempts to make sense of and elaborate upon this key intuition in a way that is, as it were, “in-between” standard FO and HO theory. (See also Lurz 2003 and 2004 for yet another interesting hybrid account.)

c. Other Cognitive Theories

Aside from the explicitly representational approaches discussed above, there are also related attempts to explain consciousness in other cognitive terms. The two most prominent such theories are worth describing here:

Daniel Dennett (1991, 2005) has put forth what he calls the Multiple Drafts Model (MDM) of consciousness. Although similar in some ways to representationalism, Dennett is most concerned that materialists avoid falling prey to what he calls the “myth of the Cartesian theater,” the notion that there is some privileged place in the brain where everything comes together to produce conscious experience. Instead, the MDM holds that all kinds of mental activity occur in the brain by parallel processes of interpretation, all of which are under frequent revision. The MDM rejects the idea of some “self” as an inner observer; rather, the self is the product or construction of a narrative which emerges over time. Dennett is also well known for rejecting the very assumption that there is a clear line to be drawn between conscious and unconscious mental states in terms of the problematic notion of “qualia.” He influentially rejects strong emphasis on any phenomenological or first-person approach to investigating consciousness, advocating instead what he calls “heterophenomenology” according to which we should follow a more neutral path “leading from objective physical science and its insistence on the third person point of view, to a method of phenomenological description that can (in principle) do justice to the most private and ineffable subjective experiences.” (1991: 72)

Bernard Baars’ Global Workspace Theory (GWT) model of consciousness is probably the most influential theory proposed among psychologists (Baars 1988, 1997). The basic idea and metaphor is that we should think of the entire cognitive system as built on a “blackboard architecture” which is a kind of global workspace. According to GWT, unconscious processes and mental states compete for the spotlight of attention, from which information is “broadcast globally” throughout the system. Consciousness consists in such global broadcasting and is therefore also, according to Baars, an important functional and biological adaptation. We might say that consciousness is thus created by a kind of global access to select bits of information in the brain and nervous system. Despite Baars’ frequent use of “theater” and “spotlight” metaphors, he argues that his view does not entail the presence of the material Cartesian theater that Dennett is so concerned to avoid. It is, in any case, an empirical matter just how the brain performs the functions he describes, such as detecting mechanisms of attention.
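Baars’ blackboard metaphor can also be sketched schematically. The following toy Python sketch is offered purely for illustration, and is not Baars’ own model (all names in it are invented): specialist processes post contents with activation levels, the most active content wins the “spotlight,” and the winner is broadcast to every subscribing process.

```python
class GlobalWorkspace:
    """Toy blackboard architecture: candidate contents compete and the
    winner is broadcast to every subscriber. Illustrative only."""

    def __init__(self):
        self.subscribers = []

    def subscribe(self, process):
        self.subscribers.append(process)

    def cycle(self, bids):
        # bids maps candidate contents to activation levels; the
        # highest activation wins the spotlight of attention.
        winner = max(bids, key=bids.get)
        for process in self.subscribers:
            process(winner)  # the global broadcast
        return winner

workspace = GlobalWorkspace()
workspace.subscribe(lambda content: print("memory module received:", content))
workspace.subscribe(lambda content: print("planning module received:", content))

# Unconscious contents compete; only the winner is broadcast system-wide.
workspace.cycle({"faint background hum": 0.2, "sudden flash": 0.9})
```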

Objections to these cognitive theories include the charge that they do not really address the hard problem of consciousness (as described in section 3b.i), but only the “easy” problems. Dennett is also often accused of explaining away consciousness rather than really explaining it. It is also interesting to think about Baars’ GWT in light of Block’s distinction between access and phenomenal consciousness (see section 1). Does Baars’ theory only address access consciousness instead of the more difficult to explain phenomenal consciousness? (Two other psychological cognitive theories worth noting are the ones proposed by George Mandler 1975 and Tim Shallice 1988.)

d. Quantum Approaches

Finally, there are those who look deep beneath the neural level to the field of quantum mechanics, basically the study of sub-atomic particles, to find the key to unlocking the mysteries of consciousness. The bizarre world of quantum physics is quite different from the deterministic world of classical physics, and is a major area of research in its own right. Such authors place the locus of consciousness at a very fundamental physical level. This somewhat radical, though exciting, option is explored most notably by physicist Roger Penrose (1989, 1994) and anesthesiologist Stuart Hameroff (1998). The basic idea is that consciousness arises through quantum effects which occur in subcellular neural structures known as microtubules, protein structures that form part of the cell’s internal skeleton. There are also other quantum approaches which aim to explain the coherence of consciousness (Marshall and Zohar 1990) or use the “holistic” nature of quantum mechanics to explain consciousness (Silberstein 1998, 2001). It is difficult to assess these somewhat exotic approaches at present. Given the puzzling and often very counterintuitive nature of quantum physics, it is unclear whether such approaches will prove genuinely valuable in explaining consciousness. One concern is simply that these authors are trying to explain one puzzling phenomenon (consciousness) in terms of another mysterious natural phenomenon (quantum effects). On the other side, the thinking seems to be that perhaps the two are essentially related somehow and that other physicalistic accounts are looking in the wrong place, such as at the neuro-chemical level. Although many attempts to explain consciousness rely on conjecture or speculation, quantum approaches may indeed lead the field along these lines. Of course, this doesn’t mean that some such theory isn’t correct. One exciting aspect of this approach is the resulting interdisciplinary interest it has generated among physicists and other scientists in the problem of consciousness.

5. Consciousness and Science: Key Issues

Over the past two decades there has been an explosion of interdisciplinary work in the science of consciousness. Some of the credit must go to the groundbreaking 1986 book by Patricia Churchland entitled Neurophilosophy. In this section, three of the most important such areas are addressed.

a. The Unity of Consciousness/The Binding Problem

Conscious experience seems to be “unified” in an important sense; this crucial feature of consciousness played an important role in the philosophy of Kant who argued that unified conscious experience must be the product of the (presupposed) synthesizing work of the mind. Getting clear about exactly what is meant by the “unity of consciousness” and explaining how the brain achieves such unity has become a central topic in the study of consciousness. There are many different senses of “unity” (see Tye 2003; Bayne and Chalmers 2003, Dainton 2000, 2008, Bayne 2010), but perhaps most common is the notion that, from the first-person point of view, we experience the world in an integrated way and as a single phenomenal field of experience. (For an important anthology on the subject, see Cleeremans 2003.) However, when one looks at how the brain processes information, one only sees discrete regions of the cortex processing separate aspects of perceptual objects. Even different aspects of the same object, such as its color and shape, are processed in different parts of the brain. Given that there is no “Cartesian theater” in the brain where all this information comes together, the problem arises as to just how the resulting conscious experience is unified. What mechanisms allow us to experience the world in such a unified way? What happens when this unity breaks down, as in various pathological cases? The “problem of integrating the information processed by different regions of the brain is known as the binding problem” (Cleeremans 2003: 1). Thus, the so-called “binding problem” is inextricably linked to explaining the unity of consciousness. As was seen earlier with neural theories (section 4a) and as will be seen below on the neural correlates of consciousness (5b), some attempts to solve the binding problem have to do with trying to isolate the precise brain mechanisms responsible for consciousness. For example, Crick and Koch’s (1990) idea that synchronous neural firings are (at least) necessary for consciousness can also be viewed as an attempt to explain how disparate neural networks bind together separate pieces of information to produce unified subjective conscious experience. Perhaps the binding problem and the hard problem of consciousness (section 3b.i) are very closely connected. If the binding problem can be solved, then we arguably have identified the elusive neural correlate of consciousness and have, therefore, perhaps even solved the hard problem. In addition, perhaps the explanatory gap between third-person scientific knowledge and first-person unified conscious experience can also be bridged. Thus, this exciting area of inquiry is central to some of the deepest questions in the philosophical and scientific exploration of consciousness.

b. The Neural Correlates of Consciousness (NCCs)

As was seen earlier in discussing neural theories of consciousness (section 4a), the search for the so-called “neural correlates of consciousness” (NCCs) is a major preoccupation of philosophers and scientists alike (Metzinger 2000). Narrowing down the precise brain property responsible for consciousness is a different and far more difficult enterprise than merely holding a generic belief in some form of materialism. One leading candidate is offered by Francis Crick and Christof Koch (1990; see also Crick 1994, Koch 2004). The basic idea is that mental states become conscious when large numbers of neurons all fire in synchrony with one another (oscillations within the 35-75 hertz range or 35-75 cycles per second). Currently, one method used is simply to study some aspect of neural functioning with sophisticated detection equipment (such as MRIs and PET scans) and then correlate it with first-person reports of conscious experience. Another method is to study the difference in brain activity between those under anesthesia and those not under any such influence. A detailed survey would be impossible to give here, but a number of other candidates for the NCC have emerged over the past two decades, including reentrant cortical feedback loops in the neural circuitry throughout the brain (Edelman 1989, Edelman and Tononi 2000), NMDA-mediated transient neural assemblies (Flohr 1995), and emotive somatosensory homeostatic processes in the frontal lobe (Damasio 1999). To elaborate briefly on Flohr’s theory, the idea is that anesthetics destroy conscious mental activity because they interfere with the functioning of NMDA synapses between neurons, which are those that are dependent on N-methyl-D-aspartate receptors. These and other NCCs are explored at length in Metzinger (2000). Such ongoing investigation remains a significant and active part of current research in the field.
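The correlational method just described can be illustrated with a toy computation. The numbers below are invented for the example: a per-trial neural measure (say, gamma-band power) is paired with binary first-person reports and their correlation is computed.

```python
import numpy as np

# Hypothetical per-trial data, invented for the example: a neural
# measure and a binary first-person report (1 = "seen", 0 = "not seen").
neural_measure = np.array([0.82, 0.31, 0.77, 0.25, 0.90, 0.40])
reports = np.array([1.0, 0.0, 1.0, 0.0, 1.0, 0.0])

# Pearson correlation between measure and report across trials.
r = np.corrcoef(neural_measure, reports)[0, 1]
print(f"correlation r = {r:.2f}")

# Even an r near 1 establishes only correlation -- not causation,
# and certainly not identity, as the next paragraph explains.
```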

One problem with some of the above candidates is determining exactly how they are related to consciousness. For example, although a case can be made that some of them are necessary for conscious mentality, it is unclear that they are sufficient. That is, some of the above seem to occur unconsciously as well. And pinning down a narrow enough necessary condition is not as easy as it might seem. Another general worry is with the very use of the term “correlate.” As any philosopher, scientist, and even undergraduate student should know, saying that “A is correlated with B” is rather weak (though it is an important first step), especially if one wishes to establish the stronger identity claim between consciousness and neural activity. Even if such a correlation can be established, we cannot automatically conclude that there is an identity relation. Perhaps A causes B or B causes A, and that’s why we find the correlation. Even most dualists can accept such interpretations. Maybe there is some other neural process C which causes both A and B. “Correlation” is not even the same as “cause,” let alone enough to establish “identity.” Finally, some NCCs are not even necessarily put forth as candidates for all conscious states, but rather for certain specific kinds of consciousness (e.g., visual).

c. Philosophical Psychopathology

Philosophers have long been intrigued by disorders of the mind and consciousness. Part of the interest is presumably that if we can understand how consciousness goes wrong, then that can help us to theorize about the normally functioning mind. Going back at least as far as John Locke (1689/1975), there has been some discussion about the philosophical implications of multiple personality disorder (MPD) which is now called “dissociative identity disorder” (DID). Questions abound: Could there be two centers of consciousness in one body? What makes a person the same person over time? What makes a person a person at any given time? These questions are closely linked to the traditional philosophical problem of personal identity, which is also importantly related to some aspects of consciousness research. Much the same can be said for memory disorders, such as various forms of amnesia (see Gennaro 1996a, chapter 9). Does consciousness require some kind of autobiographical memory or psychological continuity? On a related front, there is significant interest in experimental results from patients who have undergone a commissurotomy, which is usually performed to relieve symptoms of severe epilepsy when all else fails. During this procedure, the nerve fibers connecting the two brain hemispheres are cut, resulting in so-called “split-brain” patients (Bayne 2010).

Philosophical interest is so high that there is now a book series called Philosophical Psychopathology published by MIT Press. Another rich source of information comes from the provocative and accessible writings of neurologists on a whole host of psychopathologies, most notably Oliver Sacks (starting with his 1987 book) and, more recently, V. S. Ramachandran (2004; see also Ramachandran and Blakeslee 1998). Another launching point came from the discovery of the phenomenon known as “blindsight” (Weiskrantz 1986), which is very frequently discussed in the philosophical literature regarding its implications for consciousness. Blindsight patients are blind in a well-defined part of the visual field (due to cortical damage), yet, when forced, can guess, with a higher than expected degree of accuracy, the location or orientation of an object in the blind field.

There is also philosophical interest in many other disorders, such as phantom limb pain (where one feels pain in a missing or amputated limb), various agnosias (such as visual agnosia where one is not capable of visually recognizing everyday objects), and anosognosia (which is denial of illness, such as when one claims that a paralyzed limb is still functioning, or when one denies that one is blind). These phenomena raise a number of important philosophical questions and have forced philosophers to rethink some very basic assumptions about the nature of mind and consciousness. Much has also recently been learned about autism and various forms of schizophrenia. A common view is that these disorders involve some kind of deficit in self-consciousness or in one’s ability to use certain self-concepts. (For a nice review article, see Graham 2002.) Synesthesia is also a fascinating abnormal phenomenon, although not really a “pathological” condition as such (Cytowic 2003). Those with synesthesia literally have taste sensations when seeing certain shapes or have color sensations when hearing certain sounds. It is thus an often bizarre mixing of incoming sensory input via different modalities.

One of the exciting results of this relatively new sub-field is the important interdisciplinary interest that it has generated among philosophers, psychologists, and scientists (such as in Graham 2010, Hirstein 2005, and Radden 2004).

6. Animal and Machine Consciousness

Two final areas of interest involve animal and machine consciousness. In the former case it is clear that we have come a long way from the Cartesian view that animals are mere “automata” and that they do not even have conscious experience (perhaps partly because they do not have immortal souls). In addition to the obviously significant behavioral similarities between humans and many animals, much more is known today about other physiological similarities, such as brain and DNA structures. To be sure, there are important differences as well and there are, no doubt, some genuinely difficult “grey areas” where one might have legitimate doubts about the consciousness of some animals or organisms, such as small rodents, some birds and fish, and especially various insects. Nonetheless, it seems fair to say that most philosophers today readily accept the fact that a significant portion of the animal kingdom is capable of having conscious mental states, though there are still notable exceptions to that rule (Carruthers 2000, 2005). Of course, this is not to say that various animals can have all of the same kinds of sophisticated conscious states enjoyed by human beings, such as reflecting on philosophical and mathematical problems, enjoying artworks, thinking about the vast universe or the distant past, and so on. However, it still seems reasonable to believe that animals can have at least some conscious states from rudimentary pains to various perceptual states and perhaps even to some level of self-consciousness. A number of key areas are under continuing investigation. For example, to what extent can animals recognize themselves, such as in a mirror, in order to demonstrate some level of self-awareness? To what extent can animals deceive or empathize with other animals, either of which would indicate awareness of the minds of others? These and other important questions are at the center of much current theorizing about animal cognition. (See Keenan et al. 2003 and Bekoff et al. 2002.) In some ways, the problem of knowing about animal minds is an interesting sub-area of the traditional epistemological “problem of other minds”: How do we even know that other humans have conscious minds? What justifies such a belief?

The possibility of machine (or robot) consciousness has intrigued philosophers and non-philosophers alike for decades. Could a machine really think or be conscious? Could a robot really subjectively experience the smelling of a rose or the feeling of pain? One important early launching point was a well-known paper by the mathematician Alan Turing (1950) which proposed what has come to be known as the “Turing test” for machine intelligence and thought (and perhaps consciousness as well). The basic idea is that if a machine could fool an interrogator (who could not see the machine) into thinking that it was human, then we should say it thinks or, at least, has intelligence. However, Turing was probably overly optimistic; it is doubtful that anything even today can pass the Turing Test, as most programs are specialized and have very narrow uses. One cannot ask the machine about virtually anything, as Turing had envisioned. Moreover, even if a machine or robot could pass the Turing Test, many remain very skeptical as to whether or not this demonstrates genuine machine thinking, let alone consciousness. For one thing, many philosophers would not take such purely behavioral (e.g., linguistic) evidence to support the conclusion that machines are capable of having phenomenal first-person experiences. Merely using words like “red” doesn’t ensure that there is the corresponding sensation of red or real grasp of the meaning of “red.” Turing himself considered numerous objections and offered his own replies, many of which are still debated today.

Another much discussed argument is John Searle’s (1980) famous Chinese Room Argument, which has spawned an enormous amount of literature since its original publication (see also Searle 1984; Preston and Bishop 2002). Searle is concerned to reject what he calls “strong AI” which is the view that suitably programmed computers literally have a mind, that is, they really understand language and actually have other mental capacities similar to humans. This is contrasted with “weak AI” which is the view that computers are merely useful tools for studying the mind. The gist of Searle’s argument is that he imagines himself running a program for using Chinese and then shows that he does not understand Chinese; therefore, strong AI is false; that is, running the program does not result in any real understanding (or thought or consciousness, by implication). Searle supports his argument against strong AI by utilizing a thought experiment whereby he is in a room and follows English instructions for manipulating Chinese symbols in order to produce appropriate answers to questions in Chinese. Searle argues that, despite the appearance of understanding Chinese (say, from outside the room), he does not understand Chinese at all. He does not thereby know Chinese, but is merely manipulating symbols on the basis of syntax alone. Since this is what computers do, no computer, merely by following a program, genuinely understands anything. Searle replies to numerous possible criticisms in his original paper (which also comes with extensive peer commentary), but suffice it to say that not everyone is satisfied with his responses. For example, it might be argued that the entire room or “system” understands Chinese if we are forced to use Searle’s analogy and thought experiment. Each part of the room doesn’t understand Chinese (including Searle himself) but the entire system does, which includes the instructions and so on. Searle’s larger argument, however, is that one cannot get semantics (meaning) from syntax (formal symbol manipulation).
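The “syntax without semantics” point can be captured in a few lines of toy code. The rule-book entries below are invented purely for illustration: the program pairs symbol strings with symbol strings and encodes nothing at all about what any symbol means, which is just Searle’s point about formal symbol manipulation.

```python
# A toy "rulebook" pairing Chinese questions with canned Chinese answers.
# The entries are invented for illustration; the point is only that the
# program relates symbol strings to symbol strings.
RULEBOOK = {
    "你好吗？": "我很好。",
    "天空是什么颜色？": "天空是蓝色的。",
}

def room(question: str) -> str:
    """Return whatever string the rulebook pairs with the input.
    Nothing here encodes what any symbol means: pure syntax."""
    return RULEBOOK.get(question, "请再说一遍。")

print(room("你好吗？"))  # Looks like understanding from outside the room.
```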

Despite heavy criticism of the argument, two central issues are raised by Searle which continue to be of deep interest. First, how and when does one distinguish mere “simulation” of some mental activity from genuine “duplication”? Searle’s view is that computers are, at best, merely simulating understanding and thought, not really duplicating it. Much like we might say that a computerized hurricane simulation does not duplicate a real hurricane, Searle insists the same goes for any alleged computer “mental” activity. We do after all distinguish between real diamonds or leather and mere simulations which are just not the real thing. Second, and perhaps even more important, when considering just why computers really can’t think or be conscious, Searle interestingly reverts back to a biologically based argument. In essence, he says that computers or robots are just not made of the right stuff with the right kind of “causal powers” to produce genuine thought or consciousness. After all, even a materialist does not have to allow that any kind of physical stuff can produce consciousness any more than any type of physical substance can, say, conduct electricity. Of course, this raises a whole host of other questions which go to the heart of the metaphysics of consciousness. To what extent must an organism or system be physiologically like us in order to be conscious? Why is having a certain biological or chemical make up necessary for consciousness? Why exactly couldn’t an appropriately built robot be capable of having conscious mental states? How could we even know either way? However one answers these questions, it seems that building a truly conscious Commander Data is, at best, still just science fiction.

In any case, the growing areas of cognitive science and artificial intelligence are major fields within philosophy of mind and can importantly bear on philosophical questions of consciousness. Much current research focuses on how to program a computer to model the workings of the human brain, such as with so-called “neural (or connectionist) networks.”
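As a flavor of what such connectionist modeling looks like, here is a minimal two-layer network trained by backpropagation on the classic XOR problem. It is a standard textbook exercise, included only as an illustration and not tied to any particular theory discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# XOR: the classic toy problem for a small connectionist network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output weights

for _ in range(20000):                   # plain gradient descent
    h = sigmoid(X @ W1 + b1)             # hidden-unit activations
    out = sigmoid(h @ W2 + b2)           # network output
    d_out = (out - y) * out * (1 - out)  # error signal at the output
    d_h = (d_out @ W2.T) * h * (1 - h)   # error backpropagated to hidden layer
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # typically converges toward [[0], [1], [1], [0]]
```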

7. References and Further Reading

  • Alter, T. and S. Walter, eds. Phenomenal Concepts and Phenomenal Knowledge: New Essays on Consciousness and Physicalism. New York: Oxford University Press, 2007.
  • Armstrong, D. A Materialist Theory of Mind. London: Routledge and Kegan Paul, 1968.
  • Armstrong, D. "What is Consciousness?" In The Nature of Mind. Ithaca, NY: Cornell University Press, 1981.
  • Baars, B. A Cognitive Theory of Consciousness. Cambridge: Cambridge University Press, 1988.
  • Baars, B. In The Theater of Consciousness. New York: Oxford University Press, 1997.
  • Baars, B., Banks, W., and Newman, J. eds. Essential Sources in the Scientific Study of Consciousness. Cambridge, MA: MIT Press, 2003.
  • Balog, K. "Conceivability, Possibility, and the Mind-Body Problem." Philosophical Review 108: 497-528, 1999.
  • Bayne, T. & Chalmers, D. “What is the Unity of Consciousness?” In Cleeremans 2003.
  • Bayne, T. The Unity of Consciousness. New York: Oxford University Press, 2010.
  • Bekoff, M., Allen, C., and Burghardt, G. The Cognitive Animal: Empirical and Theoretical Perspectives on Animal Cognition. Cambridge, MA: MIT Press, 2002.
  • Blackmore, S. Consciousness: An Introduction. Oxford: Oxford University Press, 2004.
  • Block, N. "Troubles with Functionalism.” In Readings in the Philosophy of Psychology, Volume 1, Ned Block, ed., Cambridge, MA: Harvard University Press, 1980a.
  • Block, N. "Are Absent Qualia Impossible?" Philosophical Review 89: 257-74, 1980b.
  • Block, N. "Inverted Earth." In Philosophical Perspectives, 4, J. Tomberlin, ed., Atascadero, CA: Ridgeview Publishing Company, 1990.
  • Block, N. "On a Confusion about the Function of Consciousness." In Behavioral and Brain Sciences 18: 227-47, 1995.
  • Block, N. "Mental Paint and Mental Latex." In E. Villanueva, ed. Perception. Atascadero, CA: Ridgeview, 1996.
  • Block, N. "The higher order approach to consciousness is defunct.” Analysis 71: 419-431, 2011.
  • Block, N., Flanagan, O. & Guzeldere, G. eds. The Nature of Consciousness. Cambridge, MA: MIT Press, 1997.
  • Block, N. & Stalnaker, R. "Conceptual Analysis, Dualism, and the Explanatory Gap." Philosophical Review 108: 1-46, 1999.
  • Botterell, A. “Conceiving what is not there.” In Journal of Consciousness Studies 8 (8): 21-42, 2001.
  • Boyd, R. "Materialism without Reductionism: What Physicalism does not entail." In N. Block, ed. Readings in the Philosophy of Psychology, Vol.1. Cambridge, MA: Harvard University Press, 1980.
  • Brentano, F. Psychology from an Empirical Standpoint. New York: Humanities, 1874/1973.
  • Brook, A. Kant and the Mind. New York: Cambridge University Press, 1994.
  • Brook, A. & Raymont, P. A Unified Theory of Consciousness. Forthcoming.
  • Byrne, A. "Some like it HOT: Consciousness and Higher-Order Thoughts." In Philosophical Studies 86: 103-29, 1997.
  • Byrne, A. "Intentionalism Defended." In Philosophical Review 110: 199-240, 2001.
  • Byrne, A. “What Phenomenal Consciousness is like.” In Gennaro 2004a.
  • Campbell, N. A Brief Introduction to the Philosophy of Mind. Ontario: Broadview, 2004.
  • Carruthers, P. “Brute Experience.” In Journal of Philosophy 86: 258-269, 1989.
  • Carruthers, P. Phenomenal Consciousness. Cambridge, MA: Cambridge University Press, 2000.
  • Carruthers, P. “HOP over FOR, HOT Theory.” In Gennaro 2004a.
  • Carruthers, P. Consciousness: Essays from a Higher-Order Perspective. New York: Oxford University Press, 2005.
  • Carruthers, P. “Meta-cognition in animals: A skeptical look.”  Mind and Language 23: 58-89, 2008.
  • Caston, V. “Aristotle on Consciousness.” Mind 111: 751-815, 2002.
  • Chalmers, D.J. "Facing up to the Problem of Consciousness." In Journal of Consciousness Studies 2:200-19, 1995.
  • Chalmers, D.J. The Conscious Mind. Oxford: Oxford University Press, 1996.
  • Chalmers, D.J. “What is a Neural Correlate of Consciousness?” In Metzinger 2000.
  • Chalmers, D.J. Philosophy of Mind: Classical and Contemporary Readings. New York: Oxford University Press, 2002.
  • Chalmers, D.J. “The Representational Character of Experience.” In B. Leiter ed. The Future for Philosophy. Oxford: Oxford University Press, 2004.
  • Churchland, P. S. "Consciousness: the Transmutation of a Concept." In Pacific Philosophical Quarterly 64: 80-95, 1983.
  • Churchland, P. S. Neurophilosophy. Cambridge, MA: MIT Press, 1986.
  • Cleeremans, A. The Unity of Consciousness: Binding, Integration and Dissociation. Oxford: Oxford University Press, 2003.
  • Crick, F. and Koch, C. "Toward a Neurobiological Theory of Consciousness." In Seminars in Neuroscience 2: 263-75, 1990.
  • Crick, F. H. The Astonishing Hypothesis: The Scientific Search for the Soul. New York: Scribners, 1994.
  • Cytowic, R. The Man Who Tasted Shapes. Cambridge, MA: MIT Press, 2003.
  • Dainton, B. Stream of Consciousness. New York: Routledge, 2000.
  • Dainton, B. The Phenomenal Self. Oxford: Oxford University Press, 2008.
  • Damasio, A. The Feeling of What Happens: Body and Emotion in the Making of Consciousness. New York: Harcourt, 1999.
  • Dennett, D. C. "Quining Qualia." In A. Marcel & E. Bisiach eds. Consciousness and Contemporary Science. New York: Oxford University Press, 1988.
  • Dennett, D.C. Consciousness Explained. Boston: Little, Brown, and Co, 1991.
  • Dennett, D. C. Sweet Dreams. Cambridge, MA: MIT Press, 2005.
  • Dretske, F. Naturalizing the Mind. Cambridge, MA: MIT Press, 1995.
  • Droege, P. Caging the Beast. Philadelphia & Amsterdam: John Benjamins Publishers, 2003.
  • Edelman, G. The Remembered Present: A Biological Theory of Consciousness. New York: Basic Books, 1989.
  • Edelman, G. & Tononi, G. “Reentry and the Dynamic Core: Neural Correlates of Conscious Experience.” In Metzinger 2000.
  • Flohr, H. "An Information Processing Theory of Anesthesia." In Neuropsychologia 33: 9, 1169-80, 1995.
  • Fodor, J. "Special Sciences.” In Synthese 28, 77-115, 1974.
  • Foster, J. The Immaterial Self: A Defence of the Cartesian Dualist Conception of Mind. London: Routledge, 1996.
  • Gendler, T. & Hawthorne, J. eds. Conceivability and Possibility. Oxford: Oxford University Press, 2002.
  • Gennaro, R.J. “Brute Experience and the Higher-Order Thought Theory of Consciousness.” In Philosophical Papers 22: 51-69, 1993.
  • Gennaro, R.J. Consciousness and Self-consciousness: A Defense of the Higher-Order Thought Theory of Consciousness. Amsterdam & Philadelphia: John Benjamins, 1996a.
  • Gennaro, R.J. Mind and Brain: A Dialogue on the Mind-Body Problem. Indianapolis: Hackett Publishing Company, 1996b.
  • Gennaro, R.J. “Leibniz on Consciousness and Self Consciousness.” In R. Gennaro & C. Huenemann, eds. New Essays on the Rationalists. New York: Oxford University Press, 1999.
  • Gennaro, R.J. “Jean-Paul Sartre and the HOT Theory of Consciousness.” In Canadian Journal of Philosophy 32: 293-330, 2002.
  • Gennaro, R.J. “Higher-Order Thoughts, Animal Consciousness, and Misrepresentation: A Reply to Carruthers and Levine,” 2004.  In Gennaro 2004a.
  • Gennaro, R.J., ed. Higher-Order Theories of Consciousness: An Anthology. Amsterdam and Philadelphia: John Benjamins, 2004a.
  • Gennaro, R.J. “The HOT Theory of Consciousness: Between a Rock and a Hard Place?” In Journal of Consciousness Studies 12 (2): 3-21, 2005.
  • Gennaro, R.J. “Between Pure Self-referentialism and the (extrinsic) HOT Theory of Consciousness.” In Kriegel and Williford 2006.
  • Gennaro, R.J. “Animals, consciousness, and I-thoughts.” In R. Lurz ed. Philosophy of Animal Minds. New York: Cambridge University Press, 2009.
  • Gennaro, R.J. The Consciousness Paradox: Consciousness, Concepts, and Higher-Order Thoughts.  Cambridge, MA: MIT Press, 2012.
  • Goldman, A. “Consciousness, Folk Psychology and Cognitive Science.” In Consciousness and Cognition 2: 264-82, 1993.
  • Graham, G. “Recent Work in Philosophical Psychopathology.” In American Philosophical Quarterly 39: 109-134, 2002.
  • Graham, G. The Disordered Mind. New York: Routledge, 2010.
  • Gunther, Y. ed. Essays on Nonconceptual Content. Cambridge, MA: MIT Press, 2003.
  • Guzeldere, G. “Is Consciousness the Perception of what passes in one’s own Mind?” In Metzinger 1995.
  • Hameroff, S. "Quantum Computation in Brain Microtubules? The Pemose-Hameroff "Orch OR" Model of Consciousness." In Philosophical Transactions Royal Society London A 356:1869-96, 1998.
  • Hardin, C. Color for Philosophers. Indianapolis: Hackett, 1986.
  • Harman, G. "The Intrinsic Quality of Experience." In J. Tomberlin, ed. Philosophical Perspectives, 4. Atascadero, CA: Ridgeview Publishing, 1990.
  • Heidegger, M. Being and Time (Sein und Zeit). Translated by J. Macquarrie and E. Robinson. New York: Harper and Row, 1927/1962.
  • Hill, C. S. "Imaginability, Conceivability, Possibility, and the Mind-Body Problem." In Philosophical Studies 87: 61-85, 1997.
  • Hill, C. and McLaughlin, B. "There are fewer things in Reality than are dreamt of in Chalmers' Philosophy." In Philosophy and Phenomenological Research 59: 445-54, 1998.
  • Hirstein, W. Brain Fiction. Cambridge, MA: MIT Press, 2005.
  • Horgan, T. and Tienson, J. "The Intentionality of Phenomenology and the Phenomenology of Intentionality." In Chalmers 2002.
  • Husserl, E. Ideas: General Introduction to Pure Phenomenology (Ideen zu einer reinen Phänomenologie und phänomenologischen Philosophie). Translated by W. Boyce Gibson. New York: MacMillan, 1913/1931.
  • Husserl, E. Cartesian Meditations: An Introduction to Phenomenology. Translated by Dorian Cairns. The Hague: M. Nijhoff, 1929/1960.
  • Jackson, F. "Epiphenomenal Qualia." In Philosophical Quarterly 32: 127-136, 1982.
  • Jackson, F. "What Mary didn't Know." In Journal of Philosophy 83: 291-5, 1986.
  • James, W. The Principles of Psychology. New York: Henry Holt & Company, 1890.
  • Kant, I. Critique of Pure Reason. Translated by N. Kemp Smith. New York: MacMillan, 1965.
  • Keenan, J., Gallup, G., and Falk, D. The Face in the Mirror. New York: HarperCollins, 2003.
  • Kim, J. "The Myth of Non-Reductive Physicalism." In Proceedings and Addresses of the American Philosophical Association, 1987.
  • Kim, J. Supervenience and Mind. Cambridge, MA: Cambridge University Press, 1993.
  • Kim, J. Mind in a Physical World. Cambridge, MA: MIT Press, 1998.
  • Kind, A. “What’s so Transparent about Transparency?” In Philosophical Studies 115: 225-244, 2003.
  • Kirk, R. Raw Feeling. New York: Oxford University Press, 1994.
  • Kirk, R. Zombies and Consciousness. New York: Oxford University Press, 2005.
  • Kitcher, P. Kant’s Transcendental Psychology. New York: Oxford University Press, 1990.
  • Kobes, B. “Telic Higher-Order Thoughts and Moore’s Paradox.” In Philosophical Perspectives 9: 291-312, 1995.
  • Koch, C. The Quest for Consciousness: A Neurobiological Approach. Englewood, CO: Roberts and Company, 2004.
  • Kriegel, U. “PANIC Theory and the Prospects for a Representational Theory of Phenomenal Consciousness.” In Philosophical Psychology 15: 55-64, 2002.
  • Kriegel, U. “Consciousness, Higher-Order Content, and the Individuation of Vehicles.” In Synthese 134: 477-504, 2003a.
  • Kriegel, U. “Consciousness as Intransitive Self-Consciousness: Two Views and an Argument.” In Canadian Journal of Philosophy 33: 103-132, 2003b.
  • Kriegel, U. “Consciousness and Self-Consciousness.” In The Monist 87: 182-205, 2004.
  • Kriegel, U. “Naturalizing Subjective Character.” In Philosophy and Phenomenological Research, forthcoming.
  • Kriegel, U. “The Same Order Monitoring Theory of Consciousness.” In Kriegel and Williford 2006.
  • Kriegel, U. Subjective Consciousness. New York: Oxford University Press, 2009.
  • Kriegel, U. & Williford, K. Self-Representational Approaches to Consciousness. Cambridge, MA: MIT Press, 2006.
  • Kripke, S. Naming and Necessity. Cambridge, MA: Harvard University Press, 1972.
  • Leibniz, G. W. Discourse on Metaphysics. Translated by D. Garber and R. Ariew. Indianapolis: Hackett, 1686/1991.
  • Leibniz, G. W. The Monadology. Translated by R. Latta. London: Oxford University Press, 1720/1925.
  • Levine, J. "Materialism and Qualia: the Explanatory Gap." In Pacific Philosophical Quarterly 64,354-361, 1983.
  • Levine, J. "On Leaving out what it's like." In M. Davies and G. Humphreys, eds. Consciousness: Psychological and Philosophical Essays. Oxford: Blackwell, 1993.
  • Levine, J. Purple Haze: The Puzzle of Conscious Experience. Oxford: Oxford University Press, 2001.
  • Loar, B. "Phenomenal States." In Philosophical Perspectives 4, 81-108, 1990.
  • Loar, B. "Phenomenal States". In N. Block, O. Flanagan, and G. Guzeldere eds. The Nature of Consciousness. Cambridge, MA: MIT Press, 1997.
  • Loar, B. “David Chalmers’s The Conscious Mind.” Philosophy and Phenomenological Research 59: 465-72, 1999.
  • Locke, J. An Essay Concerning Human Understanding. Ed. P. Nidditch. Oxford: Clarendon, 1689/1975.
  • Ludlow, P., Nagasawa, Y, & Stoljar, D. eds. There’s Something about Mary. Cambridge, MA: MIT Press, 2004.
  • Lurz, R. “Neither HOT nor COLD: An Alternative Account of Consciousness.” In Psyche 9, 2003.
  • Lurz, R. “Either FOR or HOR: A False Dichotomy.” In Gennaro 2004a.
  • Lycan, W.G. Consciousness and Experience. Cambridge, MA: MIT Press, 1996.
  • Lycan, W.G. “A Simple Argument for a Higher-Order Representation Theory of Consciousness.” Analysis 61: 3-4, 2001.
  • Lycan, W.G. "The Superiority of HOP to HOT." In Gennaro 2004a.
  • Macpherson, F. “Colour Inversion Problems for Representationalism.” In Philosophy and Phenomenological Research 70: 127-52, 2005.
  • Mandler, G. Mind and Emotion. New York: Wiley, 1975.
  • Marshall, J. and Zohar, D. The Quantum Self: Human Nature and Consciousness Defined by the New Physics. New York: Morrow, 1990.
  • McGinn, C. "Can we solve the Mind-Body Problem?" In Mind 98:349-66, 1989.
  • McGinn, C. The Problem of Consciousness. Oxford: Blackwell, 1991.
  • McGinn, C. "Consciousness and Space.” In Metzinger 1995.
  • Metzinger, T. ed. Conscious Experience. Paderborn: Ferdinand Schöningh, 1995.
  • Metzinger, T. ed. Neural Correlates of Consciousness: Empirical and Conceptual Questions. Cambridge, MA: MIT Press, 2000.
  • Moore, G. E. "The Refutation of Idealism." In G. E. Moore Philosophical Studies. Totowa, NJ: Littlefield, Adams, and Company, 1903.
  • Nagel, T. "What is it like to be a Bat?" In Philosophical Review 83: 435-456, 1974.
  • Natsoulas, T. “The Case for Intrinsic Theory I. An Introduction.” In The Journal of Mind and Behavior 17: 267-286, 1996.
  • Neander, K. “The Division of Phenomenal Labor: A Problem for Representational Theories of Consciousness.” In Philosophical Perspectives 12: 411-434, 1998.
  • Papineau, D. Philosophical Naturalism. Oxford: Blackwell, 1994.
  • Papineau, D. "The Antipathetic Fallacy and the Boundaries of Consciousness." In Metzinger 1995.
  • Papineau, D. “Mind the Gap.” In J. Tomberlin, ed. Philosophical Perspectives 12. Atascadero, CA: Ridgeview Publishing Company, 1998.
  • Papineau, D. Thinking about Consciousness. Oxford: Oxford University Press, 2002.
  • Penrose, R. The Emperor's New Mind: Computers, Minds and the Laws of Physics. Oxford: Oxford University Press, 1989.
  • Penrose, R. Shadows of the Mind. Oxford: Oxford University Press, 1994.
  • Perry, J. Knowledge, Possibility, and Consciousness. Cambridge, MA: MIT Press, 2001.
  • Place, U. T. "Is Consciousness a Brain Process?" In British Journal of Psychology 47: 44-50, 1956.
  • Polger, T. Natural Minds. Cambridge, MA: MIT Press, 2004.
  • Preston, J. and Bishop, M. eds. Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. New York: Oxford University Press, 2002.
  • Radden, J. Ed. The Philosophy of Psychiatry. New York: Oxford University Press, 2004.
  • Ramachandran, V.S. A Brief Tour of Human Consciousness. New York: PI Press, 2004.
  • Ramachandran, V.S. and Blakeslee, S. Phantoms in the Brain. New York: Harper Collins, 1998.
  • Revonsuo, A. Consciousness: The Science of Subjectivity.  New York: Psychology Press, 2010.
  • Robinson, W.S. Understanding Phenomenal Consciousness. New York: Cambridge University Press, 2004.
  • Rosenthal, D. M. "Two Concepts of Consciousness." In Philosophical Studies 49: 329-59, 1986.
  • Rosenthal, D. M. "The Independence of Consciousness and Sensory Quality." In E. Villanueva, ed. Consciousness. Atascadero, CA: Ridgeview Publishing, 1991.
  • Rosenthal, D.M. “State Consciousness and Transitive Consciousness.” In Consciousness and Cognition 2: 355-63, 1993a.
  • Rosenthal, D. M. "Thinking that one thinks." In M. Davies and G. Humphreys, eds. Consciousness: Psychological and Philosophical Essays. Oxford: Blackwell, 1993b.
  • Rosenthal, D. M. "A Theory of Consciousness." In N. Block, O. Flanagan, and G. Guzeldere, eds. The Nature of Consciousness. Cambridge, MA: MIT Press, 1997.
  • Rosenthal, D. M. “Introspection and Self-Interpretation.” In Philosophical Topics 28: 201-33, 2000.
  • Rosenthal, D. M. “Varieties of Higher-Order Theory.” In Gennaro 2004a.
  • Rosenthal, D.M.  Consciousness and Mind.  New York: Oxford University Press, 2005.
  • Rosenthal, D.M.  “Exaggerated reports: reply to Block.” Analysis 71: 431-437, 2011.
  • Ryle, G. The Concept of Mind. London: Hutchinson and Company, 1949.
  • Sacks, O. The Man Who Mistook His Wife for a Hat and Other Clinical Tales. New York: Harper and Row, 1987.
  • Sartre, J.P. Being and Nothingness. Trans. Hazel Barnes. New York: Philosophical Library, 1956.
  • Seager, W. Theories of Consciousness. London: Routledge, 1999.
  • Seager, W. “A Cold Look at HOT Theory.” In Gennaro 2004a.
  • Searle, J. “Minds, Brains, and Programs.” In Behavioral and Brain Sciences 3: 417-57, 1980.
  • Searle, J. Minds, Brains and Science. Cambridge, MA: Harvard University Press, 1984.
  • Searle, J. The Rediscovery of the Mind. Cambridge, MA: MIT Press, 1992.
  • Shallice, T. From Neuropsychology to Mental Structure. Cambridge: Cambridge University Press, 1988.
  • Shear, J. Explaining Consciousness: The Hard Problem. Cambridge, MA: MIT Press, 1997.
  • Shoemaker, S. "Functionalism and Qualia." In Philosophical Studies 27: 291-315, 1975.
  • Shoemaker, S. "Absent Qualia are Impossible." In Philosophical Review 90: 581-99, 1981.
  • Shoemaker, S. "The Inverted Spectrum." In Journal of Philosophy 79: 357-381, 1982.
  • Siewert, C. The Significance of Consciousness. Princeton, NJ: Princeton University Press, 1998.
  • Silberstein, M. "Emergence and the Mind-Body Problem." In Journal of Consciousness Studies 5: 464-82, 1998.
  • Silberstein, M. "Converging on Emergence: Consciousness, Causation and Explanation." In Journal of Consciousness Studies 8: 61-98, 2001.
  • Skinner, B. F. Science and Human Behavior. New York: MacMillan, 1953.
  • Smart, J.J.C. "Sensations and Brain Processes." In Philosophical Review 68: 141-56, 1959.
  • Smith, D.W. “The Structure of (self-)consciousness.” In Topoi 5: 149-56, 1986.
  • Smith, D.W. Mind World: Essays in Phenomenology and Ontology. Cambridge, MA: Cambridge University Press, 2004.
  • Stubenberg, L. Consciousness and Qualia. Philadelphia & Amsterdam: John Benjamins Publishers, 1998.
  • Swinburne, R. The Evolution of the Soul. Oxford: Oxford University Press, 1986.
  • Thau, M. Consciousness and Cognition. Oxford: Oxford University Press, 2002.
  • Titchener, E. An Outline of Psychology. New York: Macmillan, 1901.
  • Turing, A. “Computing Machinery and Intelligence.” In Mind 59: 433-60, 1950.
  • Tye, M. Ten Problems of Consciousness. Cambridge, MA: MIT Press, 1995.
  • Tye, M. Consciousness, Color, and Content. Cambridge, MA: MIT Press, 2000.
  • Tye, M. Consciousness and Persons. Cambridge, MA: MIT Press, 2003.
  • Van Gulick, R. "Physicalism and the Subjectivity of the Mental." In Philosophical Topics 13, 51-70, 1985.
  • Van Gulick, R. "Nonreductive Materialism and Intertheoretical Constraint." In A. Beckermann, H. Flohr, J. Kim, eds. Emergence and Reduction. Berlin and New York: De Gruyter, 1992.
  • Van Gulick, R. "Understanding the Phenomenal Mind: Are we all just armadillos?" In M. Davies and G. Humphreys, eds., Consciousness: Psychological and Philosophical Essays. Oxford: Blackwell, 1993.
  • Van Gulick, R. "What would count as Explaining Consciousness?" In Metzinger 1995.
  • Van Gulick, R. "Inward and Upward: Reflection, Introspection and Self-Awareness." In Philosophical Topics 28: 275-305, 2000.
  • Van Gulick, R. "Higher-Order Global States HOGS: An Alternative Higher-Order Model of Consciousness." In Gennaro 2004a.
  • Van Gulick, R. “Mirror Mirror – is that all?” In Kriegel and Williford 2006.
  • Velmans, M. and S. Schneider eds. The Blackwell Companion to Consciousness.  Malden, MA: Blackwell, 2007.
  • Weisberg, J. “Same Old, Same Old: The Same-Order Representation Theory of Consciousness and the Division of Phenomenal Labor.” Synthese 160: 161-181, 2008.
  • Weisberg, J. “Misrepresenting consciousness.” Philosophical Studies 154: 409-433, 2011a.
  • Weisberg, J. “Abusing the Notion of What-it’s-like-ness: A Response to Block.” Analysis 71: 438-443, 2011b.
  • Weiskrantz, L. Blindsight. Oxford: Clarendon, 1986.
  • Wilkes, K. V. "Is Consciousness Important?" In British Journal for the Philosophy of Science 35: 223-43, 1984.
  • Wilkes, K. V. "Yishi, Duo, Us and Consciousness." In A. Marcel & E. Bisiach, eds., Consciousness in Contemporary Science. Oxford: Oxford University Press, 1988.
  • Williford, K. “The Self-Representational Structure of Consciousness.” In Kriegel and Williford 2006.
  • Wundt, W. Outlines of Psychology. Leipzig: W. Engelmann, 1897.
  • Yablo, S. "Concepts and Consciousness." In Philosophy and Phenomenological Research 59: 455-63, 1999.
  • Zelazo, P., M. Moscovitch, and E. Thompson, eds. The Cambridge Handbook of Consciousness. Cambridge: Cambridge University Press, 2007.

Author Information

Rocco J. Gennaro
Email: rjgennaro@usi.edu
University of Southern Indiana
U. S. A.

The Hard Problem of Consciousness

The hard problem of consciousness is the problem of explaining why any physical state is conscious rather than nonconscious.  It is the problem of explaining why there is “something it is like” for a subject in conscious experience, why conscious mental states “light up” and directly appear to the subject.  The usual methods of science involve explanation of functional, dynamical, and structural properties—explanation of what a thing does, how it changes over time, and how it is put together.  But even after we have explained the functional, dynamical, and structural properties of the conscious mind, we can still meaningfully ask the question, Why is it conscious? This suggests that an explanation of consciousness will have to go beyond the usual methods of science.  Consciousness therefore presents a hard problem for science, or perhaps it marks the limits of what science can explain.  Explaining why consciousness occurs at all can be contrasted with so-called “easy problems” of consciousness:  the problems of explaining the function, dynamics, and structure of consciousness.  These features can be explained using the usual methods of science.  But that leaves the question of why there is something it is like for the subject when these functions, dynamics, and structures are present.  This is the hard problem.

In more detail, the challenge arises because it does not seem that the qualitative and subjective aspects of conscious experience—how consciousness “feels” and the fact that it is directly “for me”—fit into a physicalist ontology, one consisting of just the basic elements of physics plus structural, dynamical, and functional combinations of those basic elements.  It appears that even a complete specification of a creature in physical terms leaves unanswered the question of whether or not the creature is conscious.  And it seems that we can easily conceive of creatures just like us physically and functionally that nonetheless lack consciousness.  This indicates that a physical explanation of consciousness is fundamentally incomplete:  it leaves out what it is like to be the subject, for the subject.  There seems to be an unbridgeable explanatory gap between the physical world and consciousness.  All these factors make the hard problem hard.

The hard problem was so-named by David Chalmers in 1995.  The problem is a major focus of research in contemporary philosophy of mind, and there is a considerable body of empirical research in psychology, neuroscience, and even quantum physics.  The problem touches on issues in ontology, on the nature and limits of scientific explanation, and on the accuracy and scope of introspection and first-person knowledge, to name but a few.  Reactions to the hard problem range from an outright denial of the issue  to naturalistic reduction to panpsychism (the claim that everything is conscious to some degree) to full-blown mind-body dualism.

Table of Contents

  1. Stating the Problem
    1. Chalmers
    2. Nagel
    3. Levine
  2. Underlying Reasons for the Problem
  3. Responses to the Problem
    1. Eliminativism
    2. Strong Reductionism
    3. Weak Reductionism
    4. Mysterianism
    5. Interactionist Dualism
    6. Epiphenomenalism
    7. Dual Aspect Theory/Neutral Monism/Panpsychism
  4. References and Further Reading

1. Stating the Problem

a. Chalmers

David Chalmers coined the name “hard problem” (1995, 1996), but the problem is not wholly new, being a key element of the venerable mind-body problem.  Still, Chalmers is among those most responsible for the outpouring of work on this issue.  The problem arises because “phenomenal consciousness,” consciousness characterized in terms of “what it’s like for the subject,” fails to succumb to the standard sort of functional explanation successful elsewhere in psychology (compare Block 1995).   Psychological phenomena like learning, reasoning, and remembering can all be explained in terms of playing the right “functional role.”  If a system does the right thing, if it alters behavior appropriately in response to environmental stimulation, it counts as learning.  Specifying these functions tells us what learning is and allows us to see how brain processes could play this role.  But according to Chalmers,

What makes the hard problem hard and almost unique is that it goes beyond problems about the performance of functions. To see this, note that even when we have explained the performance of all the cognitive and behavioral functions in the vicinity of experience—perceptual discrimination, categorization, internal access, verbal report—there may still remain a further unanswered question:  Why is the performance of these functions accompanied by experience? (1995, 202, emphasis in original).

Chalmers explains the persistence of this question by arguing against the possibility of a “reductive explanation” for phenomenal consciousness (hereafter, I will generally just use the term ‘consciousness’ for the phenomenon causing the problem).  A reductive explanation in Chalmers’s sense (following David Lewis (1972)) provides a form of deductive argument concluding with an identity statement between the target explanandum (the thing we are trying to explain) and a lower-level phenomenon that is physical in nature or more obviously reducible to the physical.  Reductive explanations of this type have two premises.  The first presents a functional analysis of the target phenomenon, which fully characterizes the target in terms of its functional role.  The second presents an empirically discovered realizer of the functionally characterized target, one playing that very functional role.  Then, by transitivity of identity, the target and realizer are deduced to be identical.  For example, the gene may be reductively explained in terms of DNA as follows:

  1. The gene = the unit of hereditary transmission.  (By analysis.)
  2. Regions of DNA = the unit of hereditary transmission.  (By empirical investigation.)
  3. Therefore, the gene = regions of DNA. (By transitivity of identity, 1, 2.)

Chalmers contends that such reductive explanations are available in principle for all other natural phenomena, but not for consciousness.  This is the hard problem.
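
Schematically, and in our own notation (not Chalmers’s), the template can be put as follows, where $T$ is the target phenomenon, $F$ the functional role revealed by analysis, and $R$ the empirically discovered realizer:

    $T = \text{the occupant of role } F$   (by analysis)
    $R = \text{the occupant of role } F$   (by empirical investigation)
    $\therefore\ T = R$   (by transitivity of identity)

On this reading, the hard problem is that no true premise of the first sort is available when $T$ is consciousness.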

The reason that reductive explanation fails for consciousness, according to Chalmers, is that it cannot be functionally analyzed.  This is demonstrated by the continued conceivability of what Chalmers terms “zombies”—creatures physically (and so functionally) identical to us, but lacking consciousness—even in the face of a range of proffered functional analyses.  If we had a satisfying functional analysis of consciousness, zombies should not be conceivable.  The lack of a functional analysis is also shown by the continued conceivability of spectrum inversion (perhaps what it looks like for me to see green is what it looks like when you see red), the persistence of the “other minds” problem, the plausibility of the “knowledge argument” (Jackson 1982) and the manifest implausibility of offered functional characterizations.  If consciousness really could be functionally characterized, these problems would disappear.  Since they retain their grip on philosophers, scientists, and lay-people alike, we can conclude that no functional characterization is available.  But then the first premise of a reductive explanation cannot be properly formulated, and reductive explanation fails.  We are left, Chalmers claims, with the following stark choice:  either eliminate consciousness (deny that it exists at all) or add consciousness to our ontology as an unreduced feature of reality, on par with gravity and electromagnetism.  Either way, we are faced with a special ontological problem, one that resists solution by the usual reductive methods.

b. Nagel

Thomas Nagel sees the problem as turning on the “subjectivity” of conscious mental states (1974, 1986).  He argues that the facts about conscious states are inherently subjective—they can only be fully grasped from limited types of viewpoints.  However, scientific explanation demands an objective characterization of the facts, one that moves away from any particular point of view.  Thus, the facts about consciousness elude science and so make “the mind-body problem really intractable” (Nagel 1974, 435).

Nagel argues for the inherent subjectivity of the facts about consciousness by reflecting on the question of what it is like to be a bat—for the bat.  It seems that no amount of objective data will provide us with this knowledge, given that we do not share its type of point of view (the point of view of a creature able to fly and echolocate).  Learning all we can about the brain mechanisms, biochemistry, evolutionary history, psychophysics, and so forth, of a bat still leaves us unable to discover (or even imagine) what it’s like for the bat to hunt by echolocation on a dark night.  But it is still plausible that there are facts about what it’s like to be a bat, facts about how things seem from the bat’s perspective.  And even though we may have good reason to believe that consciousness is a physical phenomenon (due to considerations of mental causation, the success of materialist science, and so on), we are left in the dark about the bat’s conscious experience.  This is the hard problem of consciousness.

c. Levine

Joseph Levine argues that there is a special “explanatory gap” between consciousness and the physical (1983, 1993, 2001).  The challenge of closing this explanatory gap is the hard problem.  Levine argues that a good scientific explanation ought to deductively entail what it explains, allowing us to infer the presence of the target phenomenon from a statement of laws or mechanisms and initial conditions (Levine 2001, 74-76).  Deductive entailment is a logical relation where if the premises of an argument are true, the conclusion must be true as well.  For example, once we discover that lightning is nothing more than an electrical discharge, knowing that the proper conditions for a relevantly large electrical discharge existed in the atmosphere at time t allows us to deduce that lightning must have occurred at time t.  If such a deduction is not possible, there are three possible reasons, according to Levine.  The first is that we have not fully specified the laws or mechanisms cited in our explanation.  The second is that the target phenomenon is stochastic in nature, and the best that can be inferred is a conclusion about the probability of the occurrence of the explanatory target.  The third is that there are as yet unknown factors at least partially involved in determining the phenomenon in question.  If we have adequately specified the laws and mechanisms in question, and if we have adjusted for stochastic phenomena, then either we possess a deductive conclusion about our explanatory target or the third possibility is in effect.  But the third possibility is “precisely an admission that we don't have an adequate explanation” (2001, 76).
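
In schematic form (our notation, not Levine’s): let $L$ state the relevant laws or mechanisms, $C$ the initial conditions, and $E$ the explanatory target.  An adequate explanation should license a deduction of the form

    $(L \wedge C) \vdash E$

or, for stochastic phenomena, $(L \wedge C) \vdash \Pr(E) = p$.  In the lightning example, $L$ links lightning to electrical discharge, $C$ states that a relevantly large discharge occurred in the atmosphere at time t, and $E$ states that lightning occurred at t.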

And this is the case with consciousness, according to Levine.  No matter how detailed our specification of brain mechanisms or physical laws, it seems that there is an open question about whether consciousness is present.  We can still meaningfully ask if consciousness occurred, even if we accept that the laws, mechanisms, and proper conditions are in place.  And it seems that any further information of this type that we add to our explanation will still suffer from the same problem.  Thus, there is an explanatory gap between the physical and consciousness, leaving us with the hard problem.

2. Underlying Reasons for the Problem

But what is it about consciousness that generates the hard problem?  It may just seem obvious that consciousness could not be physical or functional.  But it is worthwhile to try to draw a rough circle around the problematic features of conscious experience, if we can.  This both clarifies what we are talking about when we talk about consciousness and helps isolate the data a successful theory must explain.

Uriah Kriegel (2009; see also Levine 2001) offers a helpful conceptual division of consciousness into two components.  Starting with the standard understanding of conscious states as states there is something it’s like for the organism to be in, Kriegel notes that we can either focus on the fact that something appears for the organism or we can focus on what it is that appears, the something it’s like.  Focusing on the former, we find that subjects are aware of their conscious states in a distinctive way.  Kriegel labels this feature the subjective component of consciousness.  Focusing on the latter, we find the experienced character of consciousness—the “redness of red” or the painfulness of pain—often termed “qualia” or “phenomenal character” in the literature (compare Crane 2000).  Kriegel labels this the qualitative component of consciousness.

Subdividing consciousness in this way allows us to concentrate on how we are conscious and what we are conscious of.  When we focus on the subjective “how” component, we find that conscious states are presented to the subject in a seemingly immediate way.  And when we focus on the qualitative “what” component, we find that consciousness presents us with seemingly indescribable qualities which in principle can vary independently of mental functioning.  These features help explain why consciousness generates the hard problem.

The first feature, which we can call immediacy, concerns the way we access consciousness from the first-person perspective.  Conscious states are accessed in a seemingly unmediated way.  It appears that nothing comes between us and our conscious states.  We seem to access them simply by having them—we do not infer their presence by way of any evidence or argument.  This immediacy creates the impression that there is no way we could be wrong about the content of our conscious states.  Error in perception or error in reasoning can be traced back to poor perceptual conditions or to a failure of rational inference.  But in the absence of such accessible sources of error, it seems that there is no room for inaccuracy in the introspective case.  And even if we come to believe we are in error in introspection, the evidence for this will be indirect and third-personal—it will lack the subjective force of immediacy.  Thus, there is an intuition of special accuracy or even infallibility when it comes to knowing our own conscious states.  We might be wrong that an object in the world is really red, but can we be wrong that it seems red to us?  But if we cannot be wrong about how things seem to us and conscious states seem inexplicable, then they really are inexplicable.  In this way, the immediacy of the subjective component of consciousness underwrites the hard problem.

But what we access may be even more problematic than how we access it:  we might, after all, have had immediate access to the physical nature of our conscious states (see P.M. Churchland 1985).  Conscious experience instead reveals various sensory qualities—the redness of the visual experience of an apple or the painfulness of a stubbed toe, for example.  These qualities, however, seem to defy informative description.  If one has not experienced them, then no amount of description will adequately convey what it’s like to have an experience with these qualities.  We can call this feature of the qualitative component of consciousness indescribability.  If someone has never seen red (a congenitally blind person, for example), it seems there is nothing informative we could say to convey to them the true nature of this quality.  We might mention prototypical red objects or explain that red is more similar to purple than it is to green, but such descriptions seem to leave the quality itself untouched.  And if experienced qualities cannot be informatively described, how could they be adequately captured in an explanatory theory?  It seems that by their very nature, conscious qualities defy explanation.  This difficulty lies at the heart of the hard problem.

A further problematic feature of what we access is that we can easily imagine our conscious mental processes occurring in conjunction with different conscious qualities or in the absence of consciousness altogether.  The particular qualities that accompany specific mental operations—like the reddish quality accompanying our detection and categorization of an apple, say—seem only contingently connected to the functional processes involved in detection and categorization.  We can call this feature of what is accessed independence.  Independence is the apparent lack of connection between conscious qualities and anything else, and it underwrites the inverted and absent qualia thought experiments used by Chalmers to establish the hard problem (compare Block 1980).  If conscious qualities really are independent in this way, then there seems to be no way to effectively tie them to the rest of reality.

The challenge of the hard problem, then, is to explain consciousness given that it seems to give us immediate access to indescribable and independent qualities.  If we can explain these underlying features, then we may see how to fit consciousness into a physicalist ontology.  Or perhaps taking these features seriously motivates a rejection of physicalism and the acceptance of conscious qualities as fundamental features of our ontology.  The following section briefly surveys the range of responses to the hard problem, from eliminativism and reductionism to panpsychism and full-blown dualism.

3. Responses to the Problem

a. Eliminativism

Eliminativism holds that there is no hard problem of consciousness because there is no consciousness to worry about in the first place.  Eliminativism is most clearly defended by Rey 1997, but see also Dennett 1978, 1988, Wilkes 1984, and Ryle 1949.  On the face of it, this response sounds absurd:  how can one deny that conscious experience exists?  Consciousness might be the one thing that is certain in our epistemology.  But eliminativist views resist the idea that what we call experience is equivalent to consciousness, at least in the phenomenal, “what it’s like” sense.  They hold that consciousness so-conceived is a philosopher’s construction, one that can be rejected without absurdity.  If it is definitional of consciousness that it is nonfunctional, then holding that the mind is fully functional amounts to a denial of consciousness.  Alternately, if qualia are construed as nonrelational, intrinsic qualities of experience, then one might deny that qualia exist (Dennett 1988).  And if qualia are essential to consciousness, this, too, amounts to an eliminativism about consciousness.

What might justify consciousness eliminativism?  First, the very notion of consciousness, upon close examination, may not have well-defined conditions of application—there may be no single phenomenon that the term picks out (Wilkes 1984).  Or the term may serve no use at all in any scientific theory, and so may drop out of a scientifically-fixed ontology (Rey 1997).  If science tells us what there is (as some naturalists hold), and science has no place for nonfunctional intrinsic qualities, then there is no consciousness, so defined.  Finally, it might be that the term ‘consciousness’ gets its meaning as part of a falsifiable theory, our folk psychology. The entities posited by a theory stand or fall with the success of the theory.  If the theory is falsified, then the entities it posits do not exist (compare P.M. Churchland 1981).  And there is no guarantee that folk psychology will not be supplanted by a better theory of the mind, perhaps a neuroscientific or even quantum mechanical theory, at some point.  Thus, consciousness might be eliminated from our ontology.  If that occurs, obviously there is no hard problem to worry about.  No consciousness, no problem!

But eliminativism seems much too strong a reaction to the hard problem, one that throws the baby out with the bathwater.  First, it is highly counterintuitive to deny that consciousness exists.  It seems extremely basic to our conception of minds and persons.  A more desirable view would avoid this move.  Second, it is not clear why we must accept that consciousness, by definition, is nonfunctional or intrinsic.  Definitional, “analytic” claims are highly controversial at best, particularly with difficult terms like ‘consciousness’ (compare Quine 1951, Wittgenstein 1953).  A better solution would hold that consciousness still exists, but it is functional and relational in nature.  This is the strong reductionist approach.

b. Strong Reductionism

Strong reductionism holds that consciousness exists, but contends that it is reducible to tractable functional, nonintrinsic properties.  Strong reductionism further claims that the reductive story we tell about consciousness fully explains, without remainder, all that needs to be explained about consciousness.  Reductionism, generally, is the idea that complex phenomena can be explained in terms of the arrangement and functioning of simpler, better understood parts.  Key to strong reductionism, then, is the idea that consciousness can be broken down and explained in terms of simpler things.  This amounts to a rejection of the idea that experience is simple and basic, that it stands as a kind of epistemic or metaphysical “ground floor.”  Strong reductionists must hold that consciousness is not as it prima facie appears, that it only seems to be marked by immediacy, indescribability, and independence and therefore that it only seems nonfunctional and intrinsic.  Consciousness, according to strong reductionism, can be fully analyzed and explained in functional terms, even if it does not seem that way.

A number of prominent strongly reductive theories exist in the literature.  Functionalist approaches hold that consciousness is nothing more than a functional process.  A popular version of this view is the “global workspace” hypothesis, which holds that conscious states are mental states available for processing by a wide range of cognitive systems (Baars 1988, 1997; Dehaene & Naccache 2001).  They are available in this way by being present in a special network—the “global workspace.”  This workspace can be functionally characterized and it also can be given a neurological interpretation.  In answer to the question “why are these states conscious?” it can be replied that this is what it means to be conscious.  If a state is available to the mind in this way, it is a conscious state (see also Dennett 1991).  (For more neuroscientifically-focused versions of the functionalist approach, see P.S. Churchland 1986; Crick 1994; and Koch 2004.)

Another set of views that can be broadly termed functionalist is “enactive” or “embodied” approaches (Hurley 1998, Noë 2005, 2009).  These views hold that mental processes should not be characterized in terms of strictly inner processes or representations.  Rather, they should be cashed out in terms of the dynamic processes connecting perception, bodily and environmental awareness, and behavior.  These processes, the views contend, do not strictly depend on processes inside the head; rather, they loop out into the body and the environment.  Further, the nature of consciousness is tied up with behavior and action—it cannot be isolated as a passive process of receiving and recording information.  These views are cataloged as functionalist because of the way they answer the hard problem:  these physical states (constituted in part by bodily and worldly things) are conscious because they play the right functional role, they do the right thing.

Another strongly reductive approach holds that conscious states are states representing the world in the appropriate way (Dretske 1995, Tye 1995, 2000).  This view, known as “first-order representationalism,” contends that conscious states make us aware of things in the world by representing them.  Further, these representations are “nonconceptual” in nature:  they represent features even if the subject in question lacks the concepts needed to cognitively categorize those features.  But these nonconceptual representations must play the right functional role in order to be conscious.  They must be poised to influence the higher-level cognitive systems of a subject.  The details of these representations differ from theorist to theorist, but a common answer to the hard problem emerges.  First-order representational states are conscious because they do the right thing:  they make us aware of just the sorts of features that make up conscious experience, features like the redness of an apple, the sweetness of honey, or the shrillness of a trumpet.  Further, such representations are conscious because they are poised to play the right role in our understanding of the world—they serve as the initial layer of our epistemic contact with reality, a layer we can then use as the basis of our more sophisticated beliefs and theories.

A further point serves to support the claims of first-order representationalism.  When we reflect on our experience in a focused way, we do not seem to find any distinctively mental properties.  Rather, we find the very things first-order representationalism claims we represent:  the basic sensory features of the world.  If I ask you to reflect closely on your experience of a tree, you do not find special mental qualities.  Rather, you find the tree, as it appears to you, as you represent it.  This consideration, known as “transparency,” seems to undermine the claim that we need to posit special intrinsic qualia, seemingly irreducible properties of our experiences (Harman 1990, though see Kind 2003).  Instead, we can explain all that we experience in terms of representation.  We have a red experience because we represent physical red in the right way.  It is then argued that representation can be given a reductive explanation.  Representation, even the sort of representation involved in experience, is no more than various functional/physical processes of our brains tracking the environment.  It follows that there is no further hard problem to deal with.

A third type of strongly reductive approach is higher-order representationalism (Armstrong 1968, 1981; Rosenthal 1986, 2005; Lycan 1987, 1996, 2001; Carruthers 2000, 2005).  This view starts with the question of what accounts for the difference between conscious and nonconscious mental states.  Higher-order theorists hold that an intuitive answer is that we are appropriately aware of our conscious states, while we are unaware of our nonconscious states.  The task of a theory of consciousness, then, is to explain the awareness accounting for this difference.  Higher-order representationalists contend that the awareness is a product of a specific sort of representation, a representation that picks out the subject’s own mental states.  These “higher-order” representations (representations of other representations) make the subject aware of her states, thus accounting for consciousness.  In answer to the hard problem, the higher-order theorist responds that these states are conscious because the subject is appropriately aware of them by way of higher-order representation.  The higher-order representations themselves are held to be nonconscious.  And since representation can plausibly be reduced to functional/physical processes, there is no lingering problem to explain (though see Gennaro 2005 for more on this strategy).

A final strongly reductive view, “self-representationalism,” holds that troubles with the higher-order view demand that we characterize the awareness subjects have of their conscious states as a kind of self-representation, where one complex representational state is about both the world and that very state itself (Gennaro 1996, Kriegel 2003, 2009, Van Gulick 2004, 2006, Williford 2006).  It may seem paradoxical to say that a state can represent itself, but this can be dealt with by holding that the state represents itself in virtue of one part of the state representing another, and thereby coming to indirectly represent the whole.  Further, self-representationalism may provide the best explanation of the seemingly ubiquitous presence of self-awareness in conscious experience.  And, again, in answer to the question of why such states are conscious, the self-representationalist can respond that conscious states are ones the subject is aware of, and self-representationalism explains this awareness.  And since self-representation, properly construed, is reducible to functional/physical processes, we are left with a complete explanation of consciousness.  (For more details on how higher-order/self-representational views deal with the hard problem, see Gennaro 2012, chapter 4.)

However, there remains considerable resistance to strongly reductive views.  The main stumbling block is that they seem to leave unaddressed the pressing intuition that one can easily conceive of a system satisfying all the requirements of the strongly reductive views but still lacking consciousness (Chalmers 1996, chapter 3).  It is argued that an effective theory ought to close off such easy conceptions.  Further, strong reductivists seem committed to the claim that there is no knowledge of consciousness that cannot be grasped theoretically.  If a strongly reductive view is true, it seems that a blind person can gain full knowledge of color experience from a textbook.  But surely she still lacks some knowledge of what it’s like to see red, for example?  Strongly reductive theorists can contend that these recalcitrant intuitions are merely a product of lingering confused or erroneous views of consciousness.  But in the face of such worries, many have felt it better to find a way to respect these intuitions while still denying the potentially unpleasant ontological implications of the hard problem.  Hence, weak reductionism.

c. Weak Reductionism

Weak reductionism, in contrast to the strong version, holds that consciousness is a simple or basic phenomenon, one that cannot be informatively broken down into simpler nonconscious elements.  But according to the view, we can still identify consciousness with physical properties if the most parsimonious and productive theory supports such an identity (Block 2002, Block & Stalnaker 1999, Hill 1997, Loar 1997, 1999, Papineau 1993, 2002, Perry 2001).  What’s more, once the identity has been established, there is no further burden of explanation.  Identities have no explanation:  a thing just is what it is.  To ask how it could be that Mark Twain is Sam Clemens, once we have the most parsimonious rendering of the facts, is to go beyond meaningful questioning.  And the same holds for the identity of conscious states with physical states.

But there remains the question of why the identity claim appears so counterintuitive, and here weak reductionists generally appeal to the “phenomenal concepts strategy” (PCS) to make their case (compare Stoljar 2005).  The PCS holds that the hard problem is not the result of a dualism of facts, phenomenal and physical, but rather a dualism of concepts picking out fully physical conscious states.  One concept is the third-personal physical concept of neuroscience.  The other concept is a distinctively first-personal “phenomenal concept”—one that picks out conscious states in a subjectively direct manner.  Because of the subjective differences in these modes of conceptual access, consciousness does not seem intuitively to be physical.  But once we understand the differences in the two concepts, there is no need to accept this intuition.

Here is a sketch of how a weakly reductive view of consciousness might proceed.  First, we find stimuli that reliably trigger reports of phenomenally conscious states from subjects.  Then we find what neural processes are reliably correlated with those reported experiences.  It can then be argued on the basis of parsimony that the reported conscious state just is the neural state—an ontology holding that two states are present is less simple than one identifying the two states.  Further, accepting the identity is explanatorily fruitful, particularly with respect to mental causation.  Finally, the PCS is appealed to in order to explain why the identity remains counterintuitive.  And as to the question of why this particular neural state should be identical to this particular phenomenal state, the answer is that this is just the way things are.  Explanation bottoms out at this point and requests for further explanation are unreasonable.

But there are pressing worries about weak reductionism.  There seems to be an undischarged phenomenal element within the weakly reductive view (Chalmers 2006).  When we focus on the PCS, it seems that we lack a plausible story about how it is that phenomenal concepts reveal what it’s like for us in experience.  The direct access of phenomenal concepts seems to require that phenomenal states themselves inform us of what they are like.  A common way to cash out the PCS is to say that the phenomenal properties themselves are embedded in the phenomenal concepts, and that alone makes them accessible in the seemingly rich manner of introspected experience.  When it is asked how phenomenal properties might underwrite this access, the answer given is that this is in the nature of phenomenal properties—that is just what they do.  Again, we are told that explanation must stop somewhere.  But at this point, it seems that there is little to distinguish the weak reductionist from the various forms of nonreductive and dualistic views cataloged below.  They, too, hold that it is in the nature of phenomenal properties to underwrite first-person access.  But they hold that there is no good reason to think that properties with this sort of nature are physical.  We know of no other physical property that possesses such a nature.  All that we are left with to recommend weak reductionism is a thin claim of parsimony and an overly strong fealty to physicalism.  We are asked to accept a brute identity here, one that seems unprecedented in our ontology given that consciousness is a macro-level phenomenon.  Other examples of such brute identity—the identification of electricity and magnetism as one force, say—occur at the foundational level of physics.  Neurological and phenomenal properties do not seem to be basic in this way.  We are left with phenomenal properties inexplicable in physical terms, “brutally” identified with neurological properties in a way that nothing else seems to be.  Why not take all this as an indication that phenomenal properties are not physical after all?

The weak reductionist can respond that the question of mental causation still provides a strong enough reason to hold onto physicalism.  A plausible scientific principle is that the physical world is causally closed:  all physical events have physical causes.  And since our bodies are physical, a nonphysical consciousness could then cause nothing bodily; denying that consciousness is physical thus seems to render it epiphenomenal.  The apparent implausibility of epiphenomenalism may be enough to motivate adherence to weak reductionism, even with its explanatory shortcomings.  Dualistic challenges to this claim will be discussed in later sections.

It is possible, however, to embrace weak reductionism and still acknowledge that some questions remain to be answered.  For example, it might be reasonable to demand some explanation of how particular neural states correlate with differences in conscious experience.  A weak reductionist might hold that this is a question we at present cannot answer.  It may be that one day we will be in a position to do so, due to a radical shift in our understanding of consciousness or physical reality.  Or perhaps this will remain an unsolvable mystery, one beyond our limited abilities to decipher.  Still, there may be good reasons to hold at present that the most parsimonious metaphysical picture is the physicalist picture.  The line between weak reductionism and the next set of views to be considered, mysterianism, may blur considerably here.

d. Mysterianism

The mysterian response to the hard problem does not offer a solution; rather, it holds that the hard problem cannot be solved by current scientific methods and perhaps cannot be solved by human beings at all.  There are two varieties of the view.  The more moderate version of the position, which can be termed “temporary mysterianism,” holds that given the current state of scientific knowledge, we have no explanation of why some physical states are conscious (Nagel 1974, Levine 2001).  The gap between experience and the sorts of things dealt with in modern physics—functional, structural, and dynamical properties of basic fields and particles—is simply too wide to be bridged at present.  Still, it may be that some future conceptual revolution in the sciences will show how to close the gap.  Such massive conceptual reordering is certainly possible, given the history of science.  And, indeed, if one accepts the Kuhnian idea of shifts between incommensurable paradigms, it might seem unsurprising that we, pre-paradigm-shift, cannot grasp what things will be like after the revolution.  But at present we have no idea how the hard problem might be solved.

Thomas Nagel, in sketching his version of this idea, calls for a future “objective phenomenology” which will “describe, at least in part, the subjective character of experiences in a form comprehensible to beings incapable of having those experiences” (1974, 449).  Without such a new conceptual system, Nagel holds, we are left unable to bridge the gap between consciousness and the physical.  Consciousness may indeed be physical, but we at present have no idea how this could be so.

It is of course open for both weak and strong reductionists to accept a version of temporary mysterianism.  They can agree that at present we do not know how consciousness fits into the physical world, but the possibility is open that future science will clear up the mystery.  The main difference between such claims by reductionists and by mysterians is that the mysterians reject the idea that current reductive proposals do anything at all to close the gap.  It is not possible to gauge with any precision how different an explanatory structure must be to count as truly new rather than a mere extension of the old.  So the difference between a very weak reductionist and a temporary, though optimistic, mysterian may not amount to much.

The stronger version of the position, “permanent mysterianism,” argues that our ignorance in the face of the hard problem is not merely transitory, but is permanent, given our limited cognitive capacities (McGinn 1989, 1991).  We are like squirrels trying to understand quantum mechanics:  it just is not going to happen.  The main exponent of this view is Colin McGinn, who argues that a solution to the hard problem is “cognitively closed” to us.  He supports his position by stressing consequences of a modular view of the mind, inspired in part by Chomsky’s work in linguistics.  Our mind just may not be built to solve this sort of problem.  Instead, it may be composed of dedicated, domain-specific “modules” devoted to solving local, specific problems for an organism.  An organism without a dedicated “language acquisition device” equipped with “universal grammar” cannot acquire language.  Perhaps the hard problem requires cognitive apparatus we just do not possess as a species.  If that is the case, no further scientific or philosophical breakthrough will make a difference.  We are not built to solve the problem:  it is cognitively closed to us.

A worry about such a claim is that it is hard to establish just what sorts of problems are permanently beyond our ken.  It seems possible that the temporary mysterian may be correct here, and what looks unbridgeable in principle is really just a temporary roadblock.  Both the temporary and permanent mysterian agree on the evidence.  They agree that there is a real gap at present between consciousness and the physical and they agree that nothing in current science seems up to the task of solving the problem.  The further claim that we are forever blocked from solving the problem turns on controversial claims about the nature of the problem and the nature of our cognitive capacities.  Perhaps those controversial claims will be made good, but at present, it is hard to see why we should give up all hope, given the history of surprising scientific breakthroughs.

e. Interactionist Dualism

Perhaps, though, we know enough already to establish that consciousness is not a physical phenomenon.   This brings us to what has been, historically speaking, the most important response to the hard problem and the more general mind-body problem: dualism, the claim that consciousness is ontologically distinct from anything physical.  Dualism, in its various forms, reasons from the explanatory, epistemological, or conceptual gaps between the phenomenal and the physical to the metaphysical conclusion that the physicalist worldview is incomplete and needs to be supplemented by the addition of irreducibly phenomenal substance or properties.

Dualism can be unpacked in a number of ways.  Substance dualism holds that consciousness makes up a distinct fundamental “stuff” which can exist independently of any physical substance.  Descartes’ famous dualism was of this kind (Descartes 1640/1984).  A more popular modern dualist option is property dualism, which holds that the conscious mind is not a separate substance from the physical brain, but that phenomenal properties are nonphysical properties of the brain.  On this view, it is metaphysically possible that the physical substrate occurs without the phenomenal properties, indicating their ontological independence, but phenomenal properties cannot exist on their own.  The properties might emerge from some combination of nonphenomenal properties (emergent dualism—compare Broad 1925) or they might be present as a fundamental feature of reality, one that necessarily correlates with physical matter in our world, but could in principle come apart from the physical in another possible world.

A key question for dualist views concerns the relationship between consciousness and the physical world, particularly our physical bodies.  Descartes held that conscious mental properties can have a causal impact upon physical matter—this is known as interactionist dualism.  Recent defenders of interactionist dualism include Foster 1991, Hodgson 1991, Lowe 1996, Popper and Eccles 1977, H. Robinson 1982, Stapp 1993, and Swinburne 1986.  However, interactionist dualism requires rejecting the “causal closure” of the physical domain, the claim that every physical event is fully determined by a physical cause.  Causal closure is a long-held principle in the sciences, so its rejection marks a strong break from current scientific orthodoxy (though see Collins 2011).  Another species of dualism accepts the causal closure of physics, but still holds that phenomenal properties are metaphysically distinct from physical properties.  This compatibility with closure is achieved at the price of epiphenomenalism about consciousness, the view that conscious properties can be caused by physical events but cannot in turn cause physical events.  I will discuss interactionist dualism in this section, including a consideration of how quantum mechanics might open up a workable space for an acceptable dualist interactionist view.  I will discuss epiphenomenalism in the following section.

Interactionist dualism, of both the substance and property type, holds that consciousness is causally efficacious in the production of bodily behavior.  This is certainly a strongly intuitive position to take with regard to mental causation, but it requires rejecting the causal closure of the physical.  It is widely thought that the principle of causal closure is central to modern science, on par with basic conservation principles like the conservation of energy or matter in a physical reaction (see, for example, Kim 1998).  And at macroscopic scales, the principle appears well-supported by empirical evidence.  However, at the quantum level it is more plausible to question causal closure.  On one reading of quantum mechanics, quantum-level events unfold deterministically until an observation occurs.  At that point, some views hold that the progression of events becomes indeterministic.  If so, there may be room for consciousness to influence how such “decoherence” occurs—that is, how the quantum “wave function” collapses into the classical, observable macroscopic world we experience.  How such a process occurs is the subject of speculative theorizing in quantum theories of consciousness.  It may be that such views are better cataloged as physicalist:  the properties involved might well be labeled as physical in a completed science (see, for example, Penrose 1989, 1994; Hameroff 1998).  If so, the quantum view is better seen as strongly or weakly reductive.
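For orientation, the textbook picture being appealed to can be stated in two lines (a standard schematic only; nothing here is specific to any particular consciousness-collapse proposal).  Between observations, the state vector evolves deterministically under the Schrödinger equation:

    i\hbar \frac{d}{dt} |\psi(t)\rangle = H |\psi(t)\rangle

On measurement, the state is said to “collapse” indeterministically, in accordance with the Born rule:

    |\psi\rangle = \sum_k c_k |k\rangle \;\longrightarrow\; |k\rangle \quad \text{with probability } |c_k|^2

The interactionist proposal under discussion locates a role for consciousness in the second, indeterministic step, not the first.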

Still, it might be that the proper cashing out of the idea of “observation” in quantum theory requires positing consciousness as an unreduced primitive.  Observation may require something intrinsically conscious, rather than something characterized in the relational terms of physical theory.   In that case, phenomenal properties would be metaphysically distinct from the physical, traditionally characterized, while playing a key role in physical theory—the role of collapsing the wave function by observation.  Thus, there seems to be theoretical space for a dualist view which rejects closure but maintains a concordance with basic physical theory.

Still, such views face considerable challenges.  They are beholden to particular interpretations of quantum mechanics and this is far from a settled field, to put it mildly.  It may well be that the best interpretation of quantum mechanics rejects the key assumption of indeterminacy here (see Albert 1993 for the details of this debate).  Further, the kinds of indeterminacies discoverable at the quantum level may not correspond in any useful way to our ordinary idea of mental causes.  The pattern of decoherence may have little to do with my conscious desire to grab a beer causing me to go to the fridge.  Finally, there is the question of how phenomenal properties at the quantum level come together to make up the conscious experience we have.  Our conscious mental lives are not themselves quantum phenomena—how, then, do micro-phenomenal quantum-level properties combine to constitute our experiences?  Still, this is an alluring area of investigation, bringing together the mysteries of consciousness and quantum mechanics.  But such a mix may only compound our explanatory troubles!

f. Epiphenomenalism

A different dualistic approach accepts the causal closure of physics by holding that phenomenal properties have no causal influence on the physical world (Campbell 1970, Huxley 1874, Jackson 1982, and W.S. Robinson 1988, 2004).  Thus, any physical effect, like a bodily behavior, will have a fully physical cause.  Phenomenal properties merely accompany causally efficacious physical properties, but they are not involved in making the behavior happen.  Phenomenal properties, on this view, may be lawfully correlated with physical properties, thus assuring that whenever a brain event of a particular type occurs, a phenomenal property of a particular type occurs.  For example, it may be that bodily damage causes activity in the amygdala, which in turn causes pain-appropriate behavior like screaming or cringing.   The activity in the amygdala will also cause the tokening of phenomenal pain properties.  But these properties are out of the causal chain leading to the behavior.  They are like the activity of a steam whistle relative to the causal power of the steam engine moving a train’s wheels.

Such a view has no obvious logical flaw, but it is in strong conflict with our ordinary notions of how conscious states are related to behavior.  It is extremely intuitive that our pains at times cause us to scream or cringe.  But on the epiphenomenalist view, that cannot be the case.  What’s more, our knowledge of our conscious states cannot be caused by the phenomenal qualities of our experiences.  On the epiphenomenalist view, my knowledge that I’m in pain is not caused by the pain itself.  This, too, seems absurd:  surely, the feeling of pain is causally implicated in my knowledge of that pain!  But the epiphenomenalist can simply bite the bullet here and reject the commonsense picture.  We often discover odd things when we engage in serious investigation, and this may be one of them.  Denying commonsense intuition is better than denying a basic scientific principle like causal closure, according to epiphenomenalists.  And it may be that experimental results in the sciences undermine the causal efficacy of consciousness as well, so this is not so outrageous a claim (see Libet 2004 and Wegner 2002, for example).  Further, the epiphenomenalist can deny that we need a causal theory of first-person knowledge.  It may be that our knowledge of our conscious states is achieved by a unique kind of noncausal acquaintance.  Or maybe merely having the phenomenal states is enough for us to know of them—our knowledge of consciousness may be constituted by phenomenal states, rather than caused by them.  Knowledge of causation is a difficult philosophical area in general, so it may be reasonable to offer alternatives to the causal theory in this context.  But despite these possibilities, epiphenomenalism remains a difficult view to embrace because of its strongly counterintuitive nature.

g. Dual Aspect Theory/Neutral Monism/Panpsychism

A final set of views, close in spirit to dualism, holds that phenomenal properties cannot be reduced to more basic physical properties, but might reduce to something more basic still, a substance that is both physical and phenomenal or that underwrites both.  Defenders of such views agree with dualists that the hard problem forces a rethinking of our basic ontology, but they disagree that this entails dualism.  There are several variations of the idea.  It may be that there is a more basic substance underlying all physical matter and this basic substance possesses phenomenal as well as physical properties (dual aspect theory:  Spinoza 1677/2005, P. Strawson 1959, Nagel 1986).  Or it may be that this more basic substance is “neutral”—neither phenomenal nor physical, yet somehow underlying both (neutral monism:  Russell 1927, Feigl 1958, Maxwell 1979, Lockwood 1989, Stubenberg 1998, Stoljar 2001, G. Strawson 2008).  Or it may be that phenomenal properties are the intrinsic categorical bases for the relational, dispositional properties described in physics and so everything physical has an underlying phenomenal nature (panpsychism:  Leibniz 1714/1989, Whitehead 1929, Griffin 1998, Rosenberg 2005, Skrbina 2007).  These views have all received detailed elaboration in past eras of philosophy, but they have seen a distinct revival as responses to the hard problem.

There is considerable variation in how theorists unpack these kinds of views, so it is only possible here to give generic versions of the ideas.  All three views make consciousness as basic as, or more basic than, physical properties; this is something they share with dualism.  But they disagree about the right way to spell out the metaphysical relations between the phenomenal, the physical, and any more basic substance there might be.  The true differences between the views are not always clear even to the views’ defenders, but we can try to tease them apart here.

A dual-aspect view holds that there is one basic underlying stuff that possesses both physical and phenomenal properties.  These properties may only be instantiated when the right combinations of the basic substance are present, so panpsychism is not a necessary entailment of the view.  For example, when the basic substance is configured in the form of a brain, it then realizes phenomenal as well as physical properties.  That need not be the case when the fundamental stuff makes up a table.  In any event, phenomenal properties are not themselves reducible to physical properties.  There is a fine line between such views and dualist views, mainly turning on the difference between constitution and lawful correlation.

Neutral monist views hold that there is a more basic neutral substance underlying both the phenomenal and the physical.  ‘Neutral’ here means that the underlying stuff really is neither phenomenal nor physical, so there is a good sense in which such a position is reductive:  it explains the presence of the phenomenal by reference to something else more basic.  This distinguishes it from the dual-aspect approach—on the dual-aspect view, the underlying stuff already possesses phenomenal (and physical) properties, while on neutral monism it does not.  That leaves neutral monism with the challenge of explaining this reductive relationship, as well as explaining how the neutral substance underlies physical reality without itself being physical.

Panpsychism holds that the phenomenal is basic to all matter.  Such views hold that the phenomenal somehow underwrites the physical or is potentially present at all times as a property of a more basic substance.  This view must explain what it means to say that everything is conscious in some sense.  Further, it must explain how it is that the basic phenomenal (or “protophenomenal”) elements combine to form the sorts of properties we are acquainted with in consciousness.  Why is it that some combinations form the experiences we enjoy and others (presumably) do not?

One line of support for these types of views comes from the way that physical theory defines its basic properties in terms of their dispositions to causally interact with each other.  For example, what it is to be a quark of a certain type is just to be disposed to behave in certain ways in the presence of other quarks.  Physical theory is silent about what stuff might underlie or constitute the entities with these dispositions—it deals only in extrinsic or relational properties, not in intrinsic properties.  At the same time, there is reason to hold that consciousness possesses nonrelational intrinsic qualities.  Indeed, this may explain why we cannot know what it’s like to be a bat—that requires knowledge of an intrinsic quality not conveyable by relational description.  Putting these two ideas together, we find a motivation for the sorts of views canvassed here.  Basic physics is silent about the intrinsic categorical bases underlying the dispositional properties described in physical theory.  But it seems plausible that there must be such bases—how could there be dispositions to behave thus-and-so without some categorical base to ground the disposition?  And since we already have reason to believe that conscious qualities are intrinsic, it makes sense to posit phenomenal properties as the categorical bases of basic physical matter. Or we can posit a neutral substance to fill this role, one also realizing phenomenal properties when in the right circumstances.

These views all seem to avoid epiphenomenalism.  Whenever there is a physical cause of behavior, the underlying phenomenal (or neutral) basis will be present to do the work.  But that cause might itself be constituted by the phenomenal, in the senses laid out here.  What’s more, there is nothing in conflict with physics—the properties posited appear at a level below the range of relational physical description.  And they do not conflict with or preempt anything present in physical theory.

But we are left with several worries.  First, it is again the case that phenomenal properties are posited at an extreme micro-level.  How it is that such micro-phenomenal properties cohere into the sorts of experiential properties present in consciousness is unexplained.  What’s more, if we take the panpsychic route, we are faced with the claim that every physical object has a phenomenal nature of some kind.  This may not be incoherent, but it is a counterintuitive result.  But if we do not accept panpsychism, we must explain how the more basic underlying substance differs from the phenomenal and yet instantiates it in the right circumstances.  Simply saying that this just is the nature of the neutral substance is not an informative answer.  Finally, it is unclear how these views really differ from a weakly reductionist account.  Both hold that there is a basic and brute connection between the physical brain and phenomenal consciousness.  On the weakly reductionist account, the connection is one of brute identity.  On the dual-aspect/neutral monist/panpsychic account, it is one of brute constitution, where two properties, the physical and the phenomenal, constantly co-occur (because the one constitutes the categorical base of the other, or they are aspects of a more basic stuff, etc.), though they are held to be metaphysically distinct.  Is there any evidence that could decide between the views?  The apparent differences here may be more a matter of style than of substance, despite the intricacies of these metaphysical debates.

4. References and Further Reading

  • Albert, D. Z. Quantum Mechanics and Experience. Cambridge, MA: Harvard University Press, 1993.
  • Armstrong, D. A Materialist Theory of Mind. London: Routledge and Kegan Paul, 1968.
  • Armstrong, D. “What is Consciousness?” In The Nature of Mind. Ithaca, NY: Cornell University Press, 1981.
  • Baars, B. A Cognitive Theory of Consciousness. Cambridge: Cambridge University Press, 1988.
  • Baars, B. In The Theater of Consciousness. New York: Oxford University Press, 1997.
  • Block, N. “Are Absent Qualia Impossible?” Philosophical Review 89: 257-74, 1980.
  • Block, N. “On a Confusion about the Function of Consciousness.” Behavioral and Brain Sciences 18: 227-47, 1995.
  • Block, N. “The Harder Problem of Consciousness.” The Journal of Philosophy, XCIX, 8, 391-425, 2002.
  • Block, N. & Stalnaker, R. “Conceptual Analysis, Dualism, and the Explanatory Gap.” Philosophical Review 108: 1-46, 1999.
  • Broad, C.D. The Mind and its Place in Nature. Routledge and Kegan Paul, London, 1925.
  • Campbell, K. K. Body and Mind. London: Doubleday, 1970.
  • Carruthers, P. Phenomenal Consciousness. Cambridge, MA: Cambridge University Press, 2000.
  • Carruthers, P. Consciousness: Essays from a Higher-Order Perspective. New York: Oxford University Press, 2005.
  • Chalmers, D.J. “Facing up to the Problem of Consciousness.” In Journal of Consciousness Studies 2: 200-19, 1995.
  • Chalmers, D.J. The Conscious Mind:  In Search of a Fundamental Theory. Oxford: Oxford University Press, 1996.
  • Chalmers, D.J. “Phenomenal Concepts and the Explanatory Gap.” In T. Alter & S. Walter, eds. Phenomenal Concepts and Phenomenal Knowledge: New Essays on Consciousness and Physicalism. Oxford: Oxford University Press, 2006.
  • Churchland, P.M. “Eliminative Materialism and the Propositional Attitudes.” Journal of Philosophy, 78, 2, 1981.
  • Churchland, P. M. “Reduction, qualia, and the direct introspection of brain states.” Journal of Philosophy, 82, 8–28, 1985.
  • Churchland, P. S. Neurophilosophy. Cambridge, MA: MIT Press, 1986.
  • Collins, R. “Energy of the soul.” In M.C. Baker & S. Goetz, eds. The Soul Hypothesis. London: Continuum, 2011.
  • Crane, T. “The origins of qualia.” In T. Crane & S. Patterson, eds. The History of the Mind-Body Problem. London: Routledge, 2000.
  • Crick, F. H. The Astonishing Hypothesis: The Scientific Search for the Soul. New York: Scribners, 1994.
  • Dehaene, S. & Naccache, L. “Towards a cognitive neuroscience of consciousness: basic evidence and a workspace framework.” Cognition 79, 1-37, 2001.
  • Dennett, D.C. “Why You Can’t Make a Computer that Feels Pain.” Synthese 38, 415-456, 1978.
  • Dennett, D.C. “Quining Qualia.” In A. Marcel & E. Bisiach eds. Consciousness and Contemporary Science. New York: Oxford University Press, 1988.
  • Dennett, D.C. Consciousness Explained. Boston: Little, Brown, and Co, 1991.
  • Descartes, R. Meditations on first philosophy. In J. Cottingham, R. Stoothoff, & D. Murdoch, Trans. The philosophical writings of Descartes: Vol. 2, Cambridge:  Cambridge University Press, 1-50, 1640/1984.
  • Dretske, F. Naturalizing the Mind. Cambridge, MA: MIT Press, 1995.
  • Farrell, B.A. “Experience.” Mind 59 (April):170-98, 1950.
  • Feigl, H. “The ‘Mental’ and the ‘Physical.’” In H. Feigl, M. Scriven & G. Maxwell, eds.  Minnesota Studies in the Philosophy of Science. Minneapolis: University of Minnesota Press, 1958.
  • Foster, J. The Immaterial Self: A Defence of the Cartesian Dualist Conception of Mind. London: Routledge, 1991.
  • Gennaro, R.J. Consciousness and Self-consciousness: A Defense of the Higher-Order Thought Theory of Consciousness. Amsterdam & Philadelphia: John Benjamins, 1996.
  • Gennaro, R.J. “The HOT theory of consciousness: Between a rock and a hard place?” Journal of Consciousness Studies 12 (2): 3-21, 2005.
  • Gennaro, R.J. The Consciousness Paradox. Cambridge, MA: MIT Press, 2012.
  • Griffin, D. R. Unsnarling the World-Knot: Consciousness, Freedom, and the Mind Body Problem. Berkeley:  University of California Press, 1998.
  • Hameroff, S. “Quantum Computation in Brain Microtubules? The Penrose-Hameroff ‘Orch OR’ Model of Consciousness.” In Philosophical Transactions Royal Society London A 356: 1869-96, 1998.
  • Harman, G. “The Intrinsic Quality of Experience.” In J. Tomberlin, ed. Philosophical Perspectives, 4. Atascadero, CA: Ridgeview Publishing, 1990.
  • Hill, C. S. “Imaginability, conceivability, possibility, and the mind-body problem.” Philosophical Studies 87: 61-85, 1997.
  • Hodgson, D. The Mind Matters: Consciousness and Choice in a Quantum World. Oxford: Oxford University Press, 1991.
  • Hurley, S. Consciousness in Action.  Cambridge, MA:  Harvard University Press, 1998.
  • Huxley, T. “On the hypothesis that animals are automata, and its history.” Fortnightly Review 95: 555-80, 1874.
  • Jackson, F. “Epiphenomenal Qualia.” In Philosophical Quarterly 32: 127-136, 1982.
  • Jackson, F. “What Mary didn’t Know.” In Journal of Philosophy 83: 291-5, 1986.
  • Kim, J. Mind in a Physical World. Cambridge, MA: MIT Press, 1998.
  • Kind, A. “What’s so Transparent about Transparency?” In Philosophical Studies 115: 225-244, 2003.
  • Koch, C. The Quest for Consciousness: A Neurobiological Approach. Englewood, CO: Roberts and Company, 2004.
  • Kriegel, U. “Consciousness as intransitive self-consciousness: Two views and an argument.” Canadian Journal of Philosophy, 33, 103–132, 2005.
  • Kriegel, U. Subjective Consciousness:  A Self-Representational Theory.  Oxford:  Oxford University Press, 2009.
  • Leibniz, G. Monadology. In G. W. Leibniz: Philosophical Essays, R. Ariew & D. Garber eds. and trans., Indianapolis:  Hackett Publishing Company, 1714/1989.
  • Levine, J. “Materialism and Qualia: the Explanatory Gap.” In Pacific Philosophical Quarterly 64: 354-361, 1983.
  • Levine, J. “On Leaving out what it’s like.” In M. Davies and G. Humphreys, eds. Consciousness: Psychological and Philosophical Essays. Oxford: Blackwell, 1993.
  • Levine, J. Purple Haze: The Puzzle of Conscious Experience. Cambridge, MA: MIT Press, 2001.
  • Lewis, C.I. Mind and the World Order.  London:  Constable, 1929.
  • Lewis, D.K. “Psychophysical and Theoretical Identifications.” Australasian Journal of Philosophy 50 (3): 249-258, 1972.
  • Libet, B. Mind Time: The Temporal Factor in Consciousness.  Cambridge, MA:  Harvard University Press, 2004.
  • Loar, B. “Phenomenal States”. In N. Block, O. Flanagan, and G. Güzeldere eds. The Nature of Consciousness. Cambridge, MA: MIT Press, 1997.
  • Loar, B. “David Chalmers’s The Conscious Mind.” Philosophy and Phenomenological Research 59: 465-72, 1999.
  • Lockwood, M. Mind, Brain and the Quantum. The Compound ‘I’. Oxford: Basil Blackwell, 1989.
  • Lowe, E.J. Subjects of Experience. Cambridge:  Cambridge University Press, 1996.
  • Lycan, W.G.  Consciousness. Cambridge, MA:  MIT Press, 1987.
  • Lycan, W.G. Consciousness and Experience. Cambridge, MA: MIT Press, 1996.
  • Lycan, W.G. “A Simple Argument for a Higher-Order Representation Theory of Consciousness.” Analysis 61: 3-4, 2001.
  • Maxwell, G. “Rigid designators and mind-brain identity.” Minnesota Studies in the Philosophy of Science 9: 365-403, 1979.
  • McGinn, C. “Can we solve the Mind-Body Problem?” In Mind 98:349-66, 1989.
  • McGinn, C. The Problem of Consciousness. Oxford: Blackwell, 1991.
  • Nagel, T. “What is it like to be a Bat?” In Philosophical Review 83: 435-456, 1974.
  • Nagel, T. The View from Nowhere. Oxford: Oxford University Press, 1986.
  • Noë, A. Action in Perception. Cambridge, MA: The MIT Press, 2005.
  • Noë, A. Out of Our Heads: Why You Are Not Your Brain, and Other Lessons from the Biology of Consciousness.  New York:  Hill & Wang, 2009.
  • Papineau, D. “Physicalism, consciousness, and the antipathetic fallacy.” Australasian Journal of Philosophy 71, 169-83, 1993.
  • Papineau, D. Thinking about Consciousness. Oxford: Oxford University Press, 2002.
  • Perry, J. Knowledge, Possibility, and Consciousness. Cambridge, MA: MIT Press, 2001.
  • Penrose, R. The Emperor’s New Mind: Computers, Minds and the Laws of Physics. Oxford: Oxford University Press, 1989.
  • Penrose, R. Shadows of the Mind. Oxford: Oxford University Press, 1994.
  • Popper, K. & Eccles, J. The Self and Its Brain: An Argument for Interactionism.  Berlin, Heidelberg: Springer, 1977.
  • Quine, W.V.O. “Two Dogmas of Empiricism.” Philosophical Review, 60: 20-43, 1951.
  • Rey, G. “A Question About Consciousness.” In N. Block, O. Flanagan, and G. Güzeldere eds. The Nature of Consciousness. Cambridge, MA: MIT Press, 461-482, 1997.
  • Robinson, H. Matter and Sense, Cambridge:  Cambridge University Press, 1982.
  • Robinson, W. S. Brains and People: An Essay on Mentality and its Causal Conditions.  Philadelphia: Temple University Press, 1988.
  • Robinson, W.S. Understanding Phenomenal Consciousness. New York: Cambridge University Press, 2004.
  • Rosenberg, G. A Place for Consciousness: Probing the Deep Structure of the Natural World. Oxford: Oxford University Press, 2005.
  • Rosenthal, D. M. “Two Concepts of Consciousness.” In Philosophical Studies 49:329-59, 1986.
  • Rosenthal, D.M. Consciousness and Mind.  Oxford:  Clarendon Press, 2005.
  • Russell, B. The Analysis of Matter. London: Kegan Paul, 1927.
  • Ryle, G. The Concept of Mind.   London:  Hutchinson, 1949.
  • Shear, J. ed. Explaining Consciousness: The Hard Problem. Cambridge, MA: MIT Press, 1997.
  • Skrbina, D. Panpsychism in the West. Cambridge MA:  MIT/Bradford Books, 2007.
  • Spinoza, B. Ethics. E. Curley, trans. New York:  Penguin, 1677/2005.
  • Stapp, H. Mind, Matter, and Quantum Mechanics. Berlin: Springer-Verlag, 1993.
  • Stoljar, D. “Two Conceptions of the Physical.” Philosophy and Phenomenological Research, 62: 253–281, 2001.
  • Stoljar, D. “Physicalism and phenomenal concepts.” Mind and Language 20, 5, 469–494, 2005.
  • Strawson, G. Real Materialism and Other Essays.  Oxford:  Oxford University Press, 2008.
  • Strawson, P. Individuals. An Essay in Descriptive Metaphysics. London: Methuen, 1959.
  • Stubenberg, L. Consciousness and Qualia. Philadelphia & Amsterdam: John Benjamins Publishers, 1998.
  • Swinburne, R. The Evolution of the Soul. Oxford: Oxford University Press, 1986.
  • Tye, M. Ten Problems of Consciousness. Cambridge, MA: MIT Press, 1995.
  • Tye, M. Consciousness, Color, and Content. Cambridge, MA: MIT Press, 2000.
  • Van Gulick, R. “Higher-Order Global States (HOGS): An Alternative Higher-Order Model of Consciousness.” In R. Gennaro, ed. Higher-Order Theories of Consciousness: An Anthology. Amsterdam and Philadelphia: John Benjamins, 2004.
  • Van Gulick, R. “Mirror Mirror – is that all?” In U. Kriegel & K. Williford, eds. Self-Representational Approaches to Consciousness. Cambridge, MA: MIT Press, 2006.
  • Wegner, D. The Illusion of Conscious Will.  Cambridge, MA:  MIT Press, 2002.
  • Whitehead, A.N. Process and Reality: an Essay in Cosmology, New York: Macmillan, 1929.
  • Wilkes, K. V. “Is Consciousness Important?” In British Journal for the Philosophy of Science 35: 223-43, 1984.
  • Williford, K. “The Self-Representational Structure of Consciousness.” In U. Kriegel & K. Williford, eds. Self-Representational Approaches to Consciousness. Cambridge, MA:  MIT Press, 2006.
  • Wittgenstein, L. Philosophical Investigations. Oxford: Blackwell, 1953.

 

Author Information

Josh Weisberg
Email: jweisberg@uh.edu
University of Houston
U. S. A.

The Lucas-Penrose Argument about Gödel’s Theorem

In 1961, J.R. Lucas published “Minds, Machines and Gödel,” in which he formulated a controversial anti-mechanism argument.  The argument claims that Gödel’s first incompleteness theorem shows that the human mind is not a Turing machine, that is, a computer.  The argument has generated a great deal of discussion since then.  The influential Computational Theory of Mind, which claims that the human mind is a computer, is false if Lucas’s argument succeeds.  Furthermore, if Lucas’s argument is correct, then “strong artificial intelligence,” the view that it is possible at least in principle to construct a machine that has the same cognitive abilities as humans, is false.  However, numerous objections to Lucas’s argument have been presented.  Some of these objections involve the consistency or inconsistency of the human mind; if we cannot establish that human minds are consistent, or if we can establish that they are in fact inconsistent, then Lucas’s argument fails (for reasons made clear below).  Others object to various idealizations that Lucas’s argument makes.  Still others find some other fault with the argument.  Lucas’s argument was rejuvenated when the physicist R. Penrose formulated and defended a version of it in two books, 1989’s The Emperor’s New Mind and 1994’s Shadows of the Mind.  Although there are similarities between Lucas’s and Penrose’s arguments, there are also some important differences.  Penrose argues that the Gödelian argument implies a number of claims concerning consciousness and quantum physics; for example, consciousness must arise from quantum processes and it might take a revolution in physics for us to obtain a scientific explanation of consciousness.  There have also been objections raised to Penrose’s argument and the various claims he infers from it: some question the anti-mechanism argument itself, some question whether it entails the claims about consciousness and physics that he thinks it does, while others question his claims about consciousness and physics, apart from his anti-mechanism argument.

Section one discusses Lucas’s version of the argument.  Numerous objections to the argument – along with Lucas’s responses to these objections – are discussed in section two. Penrose’s version of the argument, his claims about consciousness and quantum physics, and various objections that are specific to Penrose’s claims are discussed in section three. Section four briefly addresses the question, “What did Gödel himself think that his theorem implied about the human mind?”  Finally, section five mentions two other anti-mechanism arguments.

Table of Contents

  1. Lucas’s Original Version of the Argument
  2. Some Possible Objections to Lucas
    1. Consistency
    2. Benacerraf’s Criticism
    3. The Whiteley Sentence
    4. Issues Involving “Idealizations”
    5. Lewis’s Objection
  3. Penrose’s New Version of the Argument
    1. Penrose’s Gödelian Argument
    2. Consciousness and Physics
  4. Gödel’s Own View
  5. Other Anti-Mechanism Arguments
  6. References and Further Reading

1. Lucas’s Original Version of the Argument

Gödel’s (1931) first incompleteness theorem proves that any consistent formal system in which a “moderate amount of number theory” can be proven will be incomplete, that is, there will be at least one true mathematical claim that cannot be proven within the system (Wang 1981: 19).  The claim in question is often referred to as the “Gödel sentence.”  The Gödel sentence asserts of itself: “I am not provable in S,” where “S” is the relevant formal system.  Suppose that the Gödel sentence could be proven in S.  Then, since a sound system proves only truths, the sentence would be true; but the sentence says of itself that it is not provable in S, so it would also be false.  The assumption that the Gödel sentence is provable in S thus leads to contradiction, so if S is consistent (and sound), the Gödel sentence must be unprovable in S, and therefore true, because its own unprovability is exactly what it asserts.  In other words, if consistent, S is incomplete, as there is a true mathematical claim that cannot be proven in S.  For an introduction to Gödel’s theorem, see Nagel and Newman (1958).
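The structure of the argument can be displayed schematically (the notation here is a standard modern gloss, not Gödel’s own).  Writing Prov_S for the provability predicate of S, the diagonal construction yields a sentence G_S such that:

    S \vdash G_S \leftrightarrow \neg\mathrm{Prov}_S(\ulcorner G_S \urcorner)

If S proved G_S, then S would prove both that G_S is provable (since the proof itself can be checked within S) and, via the displayed equivalence, that G_S is not provable—so S would be inconsistent.  Hence, if S is consistent, G_S is unprovable in S; and since unprovability in S is exactly what G_S asserts, G_S is true.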

Gödel’s proof is at the core of Lucas’s (1961) argument, which is roughly the following.  Consider a machine constructed to produce theorems of arithmetic.  Lucas argues that the operations of this machine are analogous to a formal system.  To explain, “if there are only a definite number of types of operation and initial assumptions built into the [machine], we can represent them all by suitable symbols written down on paper” (Lucas 1961: 115).  That is, we can associate specific symbols with specific states of the machine, and we can associate “rules of inference” with the operations of the machine that cause it to go from one state to another.  In effect, “given enough time, paper, and patience, [we could] write down an analogue of the machine’s operations,” and “this analogue would in fact be a formal proof” (ibid).  So essentially, the arithmetical claims that the machine will produce as output, that is, the claims the machine proves to be true, will “correspond to the theorems that can be proved in the corresponding formal system” (ibid).  Now suppose that we construct the Gödel sentence for this formal system.  Since the Gödel sentence cannot be proven in the system, the machine will be unable to produce this sentence as a truth of arithmetic.  However, a human can look and see that the Gödel sentence is true.  In other words, there is at least one thing that a human mind can do that no machine can.  Therefore, “a machine cannot be a complete and adequate model of the mind” (Lucas 1961: 113).  In short, the human mind is not a machine.

Here is how Lucas (1990: paragraph 3) describes the argument:

I do not offer a simple knock-down proof that minds are inherently better than machines, but a schema for constructing a disproof of any plausible mechanist thesis that might be proposed.  The disproof depends on the particular mechanist thesis being maintained, and does not claim to show that the mind is uniformly better than the purported mechanist representation of it, but only that it is one respect better and therefore different.  That is enough to refute that particular mechanist thesis.

Further, Lucas (ibid) believes that a variant of his argument can be formulated to refute any future mechanist thesis.  To explain, Lucas seems to envision the following scenario:  a mechanist formulates a particular mechanistic thesis by claiming, for example, that the human mind is a Turing machine with a given formal specification S.  Lucas then refutes this thesis by producing S’s Gödel sentence, which we can see is true, but the Turing machine cannot.  Then, a mechanist puts forth a different thesis by claiming, for example, that the human mind is a Turing machine with formal specification S’.  But then Lucas produces the Gödel sentence for S’, and so on, until, presumably, the mechanist simply gives up.

One who has not studied Gödel’s theorem in detail might be wondering: why can’t we simply add the Gödel sentence to the list of theorems a given machine “knows,” thereby giving the machine the ability Lucas claims it does not have?  In Lucas’s argument, we consider some particular Turing machine specification S, and then we note that “S-machines” (that is, those machines that have formal specification S) cannot see the truth of the Gödel sentence while we can, so human minds cannot be S-machines, at least.  But why can’t we simply add the Gödel sentence to the list of theorems that S-machines can produce?  Doing so will presumably give the machines in question the ability that allegedly separates them from human minds, and Lucas’s argument falters.  The problem with this response is that even if we add the Gödel sentence to S-machines, thereby producing Turing machines that can produce the initial Gödel sentence as a truth of arithmetic, Lucas can simply produce a new Gödel sentence for these updated machines, one which allegedly we can see is true but the new machines cannot, and so on ad infinitum.  In sum, as Lucas (1990: paragraph 9) states, “It is very natural…to respond by including the Gödelian sentence in the machine, but of course that makes the machine a different machine with a different Gödelian sentence all of its own.”  This issue is discussed further below.
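The regress can be put schematically (the subscripted systems are illustrative labels, not Lucas’s notation).  Let G(S) be the Gödel sentence of a system S, and define:

    S_0 = S, \qquad S_{n+1} = S_n + G(S_n)

Each S_{n+1} does prove G(S_n), but, provided it remains consistent, it is a different system with its own unprovable Gödel sentence G(S_{n+1}); the mechanist’s repair never terminates.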

One reason Lucas’s argument has received so much attention is that if the argument succeeds, the widely influential Computational Theory of Mind is false.  Likewise, if the argument succeeds, then “strong artificial intelligence” is false; it is impossible to construct a machine that can perfectly mimic our cognitive abilities.  But there are further implications; for example, a view in philosophy of mind known as Turing machine functionalism claims that the human mind is a Turing machine, and of course, if Lucas is right, this form of functionalism is false. (For more on Turing machine functionalism, see Putnam (1960)).  So clearly there is much at stake.

2. Some Possible Objections to Lucas

Lucas’s argument has been, and still is, very controversial.  Some objections to the argument involve consistency; if we cannot establish our own consistency, or if we are in fact inconsistent, then Lucas’s argument fails (for reasons made clear below).  Furthermore, some have objected that the algorithm the human mind follows is so complex we might be forever unable to formulate our own Gödel sentence; if so, then maybe we cannot see the truth of our own Gödel sentence and therefore we might not be different from machines after all.  Others object to various idealizations that Lucas’s argument makes.  Still others find some other fault with the argument.  In this section, some of the more notable objections to Lucas’s argument are discussed.

a. Consistency

Lucas’s argument faces a number of objections involving the issue of consistency; there are two related though distinct lines of argument on this issue.  First, some claim that we cannot establish our own consistency, whether we are consistent or not.  Second, some claim that we are in fact inconsistent.  The success of either of these objections would be sufficient to defeat Lucas’s argument.  But first, to see why these objections (if successful) would defeat Lucas’s argument, recall that Gödel’s first incompleteness theorem states that if a formal system (in which we can prove a suitable amount of number theory) is consistent, the Gödel sentence is true but unprovable in the system.  That is, the Gödel sentence will be true and unprovable only in consistent systems.  In an inconsistent system, one can prove any claim whatsoever because in classical logic, any and all claims follow from a contradiction; that is, an inconsistent system will not be incomplete.  Now, suppose that a mechanist claims that we are Turing machines with formal specification S, and this formal specification is inconsistent (so the mechanist is essentially claiming that we are inconsistent).  Lucas’s argument simply does not apply in such a situation; his argument cannot defeat this mechanist.  Lucas claims that any machine will be such that there is a claim that is true but unprovable for the machine, and since we can see the truth of the claim but the machine cannot, we are not machines.  But if the machine in question is inconsistent, the machine will be able to prove the Gödel sentence, and so will not suffer from the deficiency that Lucas uses to separate machines from us.  In short, for Lucas’s argument to succeed, human minds must be consistent.
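The classical principle invoked here—ex contradictione quodlibet, or “explosion”—can be exhibited in four lines, for any sentences P and Q (a standard natural-deduction sketch):

    1.  P                (assumption)
    2.  \neg P           (assumption)
    3.  P \lor Q         (from 1, by disjunction introduction)
    4.  Q                (from 2 and 3, by disjunctive syllogism)

Since Q was arbitrary, an inconsistent classical system proves everything.  The paraconsistent logics mentioned later in this section block precisely this derivation, typically by rejecting the final step.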

Consequently, if one claims that we cannot establish our own consistency, this is tantamount to claiming that we cannot establish the truth of Lucas’s conclusion.  Furthermore, there are some good reasons for thinking that even if we are consistent, we cannot establish this.  For example, Gödel’s second incompleteness theorem, which quickly follows from his first theorem, claims that one cannot prove the consistency of a formal system S from within the system itself, so, if we are formal systems, we cannot establish our own consistency.  In other words, a mechanist can avoid Lucas’s argument by simply claiming that we are formal systems and therefore, in accordance with Gödel’s second theorem, cannot establish our own consistency.  Many have made this objection to Lucas’s argument over the years; in fact, Lucas discusses this objection in his original (1961) and attributes it to Rogers (1957) and Putnam.  Putnam made the objection in a conversation with Lucas even before Lucas’s (1961) was published (see also Putnam (1960)).  Likewise, Hutton (1976) argues from various considerations drawn from probability theory to the conclusion that we cannot assert our own consistency.  For example, Hutton claims that the probability that we are inconsistent is above zero, and that if we claim that we are consistent, this “is a claim to infallibility which is insensitive to counter-arguments to the point of irrationality” (Lucas 1976: 145).  In sum, for Lucas’s argument to succeed, we must be assured that humans are consistent, but various considerations, including Gödel’s second theorem, imply that we can never establish our own consistency, even if we are consistent.

Another possible response to Lucas is simply to claim that humans are in fact inconsistent Turing machines.  Whereas the objection above claimed that we can never establish our own consistency (and so cannot apply Gödel’s first theorem to our own minds with complete confidence), this new response simply outright denies that we are consistent.  If humans are inconsistent, then we might be equivalent to inconsistent Turing machines, that is, we might be Turing machines.  In short, Lucas concludes that since we can see the truth of the Gödel sentence, we cannot be Turing machines, but perhaps the most we can conclude from Lucas’s argument is that either we are not Turing machines or we are inconsistent Turing machines.  This objection has also been made many times over the years; Lucas (1961) considers this objection too in his original article and claims that Putnam also made this objection to him in conversation.

So, we see two possible responses to Lucas: (1) we cannot establish our own consistency, whether we are consistent or not, and (2) we are in fact inconsistent.  However, Lucas has offered numerous responses to these objections.  For example, Lucas thinks it is unlikely that an inconsistent machine could be an adequate representation of a mind.  He (1961: 121) grants that humans are sometimes inconsistent, but claims that “it does not follow that we are tantamount to inconsistent systems,” as “our inconsistencies are mistakes rather than set policies.”  When we notice an inconsistency within ourselves, we generally “eschew” it, whereas “if we really were inconsistent machines, we should remain content with our inconsistencies, and would happily affirm both halves of a contradiction” (ibid).  In effect, we are not inconsistent machines even though we are sometimes inconsistent; we are fallible but not systematically inconsistent.   Furthermore, if we were inconsistent machines, we would potentially endorse any proposition whatsoever (ibid).  As mentioned above, one can prove any claim whatsoever from a contradiction, so if we are inconsistent Turing machines, we would potentially believe anything.  But we do not generally believe any claim whatsoever (for example, we do not believe that we live on Mars), so it appears we are not inconsistent Turing machines.  One possible counter to Lucas is to claim that we are inconsistent Turing machines that reason in accordance with some form of paraconsistent logic (in paraconsistent logic, the inference from a contradiction to any claim whatsoever is blocked); if so, this explains why we do not endorse any claim whatsoever given our inconsistency (see Priest (2003) for more on paraconsistent logic).  One could also argue that perhaps the inconsistency in question is hidden, buried deep within our belief system; if we are not aware of the inconsistency, then perhaps we cannot use the inconsistency to infer anything at all (Lucas himself mentions this possibility in his (1990)).

Lucas also argues that even if we cannot prove the consistency of a system from within the system itself, as Gödel’s second theorem demonstrates, there might be other ways to determine if a given system is consistent or not.  Lucas (1990) points out that there are finitary consistency proofs for both the propositional calculus and the first-order predicate calculus, and there is also Gentzen’s proof of the consistency of Elementary Number Theory.  Discussing Gentzen’s proof in more detail, Lucas (1996) argues that while Gödel's second theorem demonstrated that we cannot prove the consistency of a system from within the system itself, it might be that we can prove that a system is consistent with considerations drawn from outside the system.  One very serious problem with Lucas’s response here, as Lucas (ibid) himself notes, is that the wider considerations that such a proof uses must be consistent too, and this can be questioned.  Another possible response is the following: maybe we can “step outside” of, say, Peano arithmetic and argue that Peano arithmetic is consistent by appealing to considerations that are outside of Peano arithmetic; however, it isn’t clear that we can “step outside” of ourselves to show that we are consistent.

Lucas (1976: 147) also makes the following “Kantian” point:

[perhaps] we must assume our own consistency, if thought is to be possible at all.  It is, perhaps like the uniformity of nature, not something to be established at the end of a careful chain of argument, but rather a necessary assumption we must make if we are to start on any thinking at all.

A possible reply is that assuming we are consistent (because this assumption is a necessary precondition for thought) and our actually being consistent are two different things, and even if we must assume that we are consistent to get thought off of the ground, we might be inconsistent nevertheless.  Finally, Wright (1995) has argued that an intuitionist, at least, who advances Lucas’s argument, can overcome the worry over our consistency.

b. Benacerraf’s Criticism

Benacerraf (1967) makes a well-known criticism of Lucas’s argument.  He points out that it is not easy to construct a Gödel sentence and that in order to construct a Gödel sentence for any given formal system one must have a solid understanding of the algorithm at work in the system.  Further, the formal system the human mind might implement is likely to be extremely complex, so complex, in fact, that we might never obtain the insight into its character needed to construct our version of the Gödel sentence.  In other words, we understand some formal systems, such as the one used in Russell and Whitehead’s (1910) Principia, well enough to construct and see the truth of the Gödel sentence for these systems, but this does not entail that we can construct and see the truth of our own Gödel sentence.  If we cannot, then perhaps we are not different from machines after all; we might be very complicated Turing machines, but Turing machines nevertheless.  To rephrase this objection, suppose that a mechanist produces a complex formal system S and claims that human minds are S.  Of course, Lucas will then try to produce the Gödel sentence for S to show that we are not S.  But S is extremely complicated, so complicated that Lucas cannot produce S’s Gödel sentence, and so cannot disprove this particular mechanistic thesis.  In sum, according to Benacerraf, the most we can infer from Lucas’s argument is a disjunction: “either no (formal system) encodes all human arithmetical capacity – the Lucas-Penrose thought – or any system which does has no axiomatic specification which human beings can comprehend” (Wright 1995: 87).  One response Lucas (1996) makes is that he [Lucas] could be helped in the effort to produce the Gödel sentence for any given formal system/machine.  Other mathematicians could help and so could computers.  In short, at least according to Lucas, it might be difficult, but it seems that we could, at least in principle, determine what the Gödelian formula is for any given system.

c. The Whiteley Sentence

Whiteley (1962) responded to Lucas by arguing that humans have similar limitations to the one that Lucas’s argument attributes to machines; if so, then perhaps we are not different from machines after all.  Consider, for example, the “Whiteley sentence,” that is, “Lucas cannot consistently assert this formula.”  If this sentence is true, then it must be that asserting the sentence makes Lucas inconsistent.  So, either Lucas is inconsistent or he cannot utter the sentence on pain of inconsistency, in which case the sentence is true and so Lucas is incomplete.  Hofstadter (1981) also argues against Lucas along these lines, claiming that we would not even believe the Whiteley sentence, while Martin and Engleman (1990) defend Lucas on this point by arguing against Hofstadter (1981).

d. Issues Involving “Idealizations”

A number of objections to Lucas’s argument involve various “idealizations” that the argument makes (or at least allegedly makes).  Lucas’s argument sets up a hypothetical scenario involving a mind and a machine, “but it is an idealized mind and an idealized machine,” neither of which is subject to limitations arising from, say, human mortality or the inability of some humans to understand Gödel’s theorem, and some believe that once these idealizations are rejected, Lucas’s argument falters (Lucas 1990: paragraph 6).  Several specific instances of this line of argument are considered in successive paragraphs.

Boyer (1983) notes that the output of any human mind is finite.  Since it is finite, it could be programmed into and therefore simulated by a machine.  In other words, once we stop ignoring human finitude, that is, once we reject one of the idealizations in Lucas’s argument, we are not different from machines after all.  Lucas (1990: paragraph 8) thinks this objection misses the point: “What is in issue is whether a computer can copy a living me, when I have not yet done all that I shall do, and can do many different things.  It is a question of potentiality rather than actuality that is in issue.”  Lucas’s point seems to be that what is really at issue is what can be done by a human and a machine in principle; if, in principle, the human mind can do something that a machine cannot, then the human mind is not a machine, even if it just so happens that any particular human mind could be modeled by a machine as a result of human finitude.

Lucas (1990: paragraph 9) remarks, “although some degree of idealization seems allowable in considering a mind untrammeled by mortality…, doubts remain about how far into the infinite it is permissible to stray.”    Recall the possible objection discussed above (in section 1) in which the mechanist, when faced with Lucas’s argument, responds by simply producing a new machine that is just like the last except it contains the Gödel sentence as a theorem.  As Lucas points out, this will simply produce a new machine that has a different Gödel sentence, and this can go on forever.  Some might dispute this point though.  For example, some mechanists might try “adding a Gödelizing operator, which gives, in effect a whole denumerable infinity of Gödelian sentences” (Lucas 1990: paragraph 9).  That is, some might try to give a machine a method to construct an infinite number of Gödel sentences; if this can be done, then perhaps any Gödel sentence whatsoever can be produced by the machine.  Lucas (1990) argues that this is not the case, however; a machine with such an operator will have its own Gödel sentence, one that is not on the initial list produced by the operator.  This might appear impossible: how, if the initial list is infinite, can there be an additional Gödel sentence that is not on the list?  It is not impossible though: the move from the initial infinite list of Gödel sentences to the additional Gödel sentence will simply be a move into the “transfinite,” a higher level of infinity than that of the initial list.  It is widely accepted in mathematics, and has been for quite some time, that there are different levels of infinity.
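The move into the transfinite can be sketched in ordinal notation (an illustrative rendering, not Lucas’s own formalism).  Extending the earlier sequence through the ordinals:

    S_{\alpha+1} = S_\alpha + G(S_\alpha), \qquad S_\lambda = \bigcup_{\alpha < \lambda} S_\alpha \quad \text{for limit ordinals } \lambda

The “Gödelizing operator” delivers the whole denumerable sequence S_0, S_1, S_2, …, but its union S_ω, so long as it remains a consistent, effectively axiomatized system, falls under Gödel’s theorem in turn and has a Gödel sentence G(S_ω) proved at no earlier stage.  The regress thus continues at ω + 1, ω + 2, and beyond.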

Coder (1969) argues that Lucas has an overly idealized view of the mathematical abilities of many people; to be specific, Coder thinks that Lucas overestimates the degree to which many people can understand Gödel’s theorem, and that this undermines the generality of Lucas’s argument.  Coder holds that since many people cannot understand Gödel’s theorem, all Lucas has shown is that a handful of competent mathematical logicians are not machines (the idea is that Lucas’s argument only shows that those who can produce and see the truth of the Gödel sentence are not machines, but not everyone can do this).  Lucas (1970a) responds by claiming, for example, that the only difference between those who can understand Gödel’s theorem and those who cannot is that, in the case of the former, it is more obvious that they are not machines; it isn’t, say, that some people are machines and others are not.

Dennett (1972) has claimed there is something odd about Lucas’s argument insofar as it seems to treat humans as creatures that simply wander around asserting truths of first-order arithmetic.  Dennett (1972: 530) remarks,

Men do not sit around uttering theorems in a uniform vocabulary, but say things in earnest and jest, make slips of the tongue, speak several languages…, and – most troublesome for this account – utter all kinds of nonsense and contradictions….

Lucas’s (1990: paragraph 7) response is that these differences between humans and machines that Dennett points to are sufficient for some philosophers to reject mechanism, and that he [Lucas] is simply giving mechanism the benefit of the doubt by assuming that mechanists can explain these differences.  Furthermore, humans can, and some actually do, produce theorems of elementary number theory as output, so any machine that cannot produce all of these theorems cannot be an adequate model of the human mind.

e. Lewis’s Objection

Lewis (1969) has also formulated an objection to Lucas’s argument:

Lewis argues that I [that is, Lucas] have established that there is a certain Lucas arithmetic which is clearly true and cannot be the output of some Turing machine. If I could produce the whole of Lucas arithmetic, then I would certainly not be a Turing machine. But there is no reason to suppose that I am able in general to verify theoremhood in Lucas arithmetic (Lucas 1970a: 149).

To clarify, “Peano arithmetic” is the arithmetic that machines can produce and “Lucas arithmetic” is the arithmetic that humans can produce; Lucas arithmetic will contain Gödel sentences while Peano arithmetic will not, so humans are not machines, at least according to Lucas’s argument.  But Lewis (1969) claims that Lucas has not shown us that he (or anyone else, for that matter) can in fact produce Lucas arithmetic in its entirety, which he must do if his argument is to succeed; so Lucas’s argument is incomplete.  Lucas responds that he does not need to produce Lucas arithmetic in its entirety for his argument to succeed.  All he needs to do to disprove mechanism is produce a single theorem that a human can see is true but a machine cannot.  Lucas (1970a: 149) holds that “what I have to do is to show that a mind can produce not the whole of Lucas arithmetic, but only a small, relevant part.  And this I think I can show, thanks to Gödel's theorem.”

3. Penrose’s New Version of the Argument

Penrose has formulated and defended versions of the Gödelian argument in two books, 1989’s The Emperor’s New Mind and 1994’s Shadows of the Mind. Since the latter is at least in part an attempt to improve upon the former, this discussion will focus on the latter.  Penrose (1994) consists of two main parts: (a) a Gödelian argument to show that human minds are non-computable and (b) an attempt to infer a number of claims involving consciousness and physics from (a).  (a) and (b) are discussed in successive sections.

a. Penrose’s Gödelian Argument

Penrose has defended different versions of the Gödelian argument.  In his earlier work, he defended a version of the argument that was relatively similar to Lucas’s, though with some minor differences; for example, Penrose used Turing’s theorem, which is closely related to Gödel’s first incompleteness theorem.  Insofar as this version of the argument overlaps with Lucas’s, it faces many of the same objections as Lucas’s argument.  In his (1994) though, Penrose formulates a version of the argument that differs more significantly from Lucas’s version.  Penrose regards this version “as the central (new) core argument against the computational modelling of mathematical understanding” offered in his (1994) and notes that some commentators seem to have completely missed the argument (Penrose 1996: 1.3).

Here is a summary of the new argument (this summary closely follows that given in Chalmers (1995: 3.2), as this is the clearest and most succinct formulation of the argument I know of): (1) suppose that “my reasoning powers are captured by some formal system F,” and, given this assumption, “consider the class of statements I can know to be true.”  (2) Since I know that I am sound, F is sound, and so is F’, which is simply F plus the assumption (made in (1)) that I am F (incidentally, a sound formal system is one whose theorems are all true).  But then (3) “I know that G(F’) is true, where this is the Gödel sentence of the system F’” (ibid).  However, (4) Gödel’s first incompleteness theorem shows that F’ cannot see that its own Gödel sentence is true.  Further, (5) since F’ is merely F plus the assumption made in (1) that I am F, I am F’; and since I can see the truth of the Gödel sentence, F’ can see the truth of the Gödel sentence.  That is, (6) we have reached a contradiction: F’ both can and cannot see the truth of the Gödel sentence.  Therefore, (7) our initial assumption must be false; that is, F, or any formal system whatsoever, cannot capture my reasoning powers.
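
The reductio can be compressed into a schematic derivation; the notation below is added here and follows Chalmers’s reconstruction rather than Penrose’s own formalism.

```latex
% Schematic reductio (notation added here; follows Chalmers's (1995)
% reconstruction of Penrose, not Penrose's own formalism).
\begin{align*}
  &(1)\;\; \text{Assume my reasoning powers are captured by a formal system } F. \\
  &(2)\;\; \text{Let } F' = F + \text{``I am } F\text{''}; \text{ since I am sound, } F' \text{ is sound.} \\
  &(3)\;\; \text{Knowing } F' \text{ to be sound, I know that its G\"odel sentence } G(F') \text{ is true.} \\
  &(4)\;\; \text{By the first incompleteness theorem, } F' \nvdash G(F'). \\
  &(5)\;\; \text{By (1), I am } F'; \text{ so, by (3), } F' \vdash G(F'). \\
  &(6)\;\; \text{Contradiction: } F' \vdash G(F') \text{ and } F' \nvdash G(F'). \\
  &(7)\;\; \text{So (1) fails: no formal system captures my reasoning powers.}
\end{align*}
```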

Chalmers (1995: 3.6) thinks the “greatest vulnerability” of this version of the argument is step (2); specifically, he thinks the claim that we know that we are sound is problematic (he attempts to show that it leads to a contradiction; see Chalmers 1995: section 3).  Others besides Chalmers also reject the claim that we know that we are sound, or else they reject the claim that we are sound to begin with (in which case we do not know that we are sound either, since one cannot know a falsehood).  For example, McCullough (1995: 3.2) claims that for Penrose’s argument to succeed, two claims must be true: (1) “Human mathematical reasoning is sound.  That is, every statement that a competent human mathematician considers to be “unassailably true” actually is true,” and (2) “The fact that human mathematical reasoning is sound is itself considered to be “unassailably true.””  These claims seem implausible to McCullough (1995: 3.4), who remarks, “For people (such as me) who have a more relaxed attitude towards the possibility that their reasoning might be unsound, Penrose's argument doesn't carry as much weight.”  In short, McCullough (1995) thinks it is at least possible that mathematicians are unsound, so we do not definitively know that mathematicians are sound.  McDermott (1995) also questions this aspect (among others) of Penrose’s argument.  Looking at the way that mathematicians actually work, he (1995: 3.4) claims, “it is difficult to see how thinkers like these could even be remotely approximated by an inference system that chugs to a certifiably sound conclusion, prints it out, then turns itself off.”  For example, McDermott points out that Kempe published a purported proof of the four-color theorem in 1879 whose flaw was not discovered until 1890, by Heawood; that is, it appears there was an 11-year period during which many competent mathematicians were unsound.

Penrose attempts to overcome such difficulties by distinguishing between individual, correctable mistakes that mathematicians sometimes make and things they know are “unassailably” true.  He (1994: 157) claims “If [a] robot is…like a genuine mathematician, although it will still make mistakes from time to time, these mistakes will be correctable…according to its own internal criteria of “unassailable truth.””  In other words, while mathematicians are fallible, they are still sound because their mistakes can be distinguished from things they know are unassailably true and can also be corrected (and any machine, if it is to mimic mathematical reasoning, must be the same way).  The basic idea is that mathematicians can make mistakes and still be sound because only the unassailable truths are what matter; these truths are the output of a sound system, and we need not worry about the rest of the output of mathematicians.  McDermott (1995) remains unconvinced; for example, he wonders what “unassailability” means in this context and thinks Penrose is far too vague on the subject.  For more on these issues, including further responses to these objections from Penrose, see Penrose (1996).

b. Consciousness and Physics

One significant difference between Lucas’s and Penrose’s discussions of the Gödelian argument is that, as alluded to above, Penrose infers a number of further claims from the argument concerning consciousness and physics.  Penrose thinks the Gödelian argument implies, for example, that consciousness must somehow arise from the quantum realm (specifically, from the quantum properties of “microtubules”) and that we “will have no chance…[of understanding consciousness]… until we have a much more profound appreciation of the very nature of time, space, and the laws that govern them” (Penrose 1994: 395).  Many critics focus their attention on defeating Penrose’s Gödelian argument, thinking that if it fails, we have little or no reason to endorse Penrose’s claims about consciousness and physics.  McDermott (1995: 2.2) remarks, “all the plausibility of Penrose's theory of “quantum consciousness” in Part II of the book depends on the Gödel argument being sound,” so, if we can refute the Gödelian argument, we can easily reject the rest.  Likewise, Chalmers (1995: 4.1) claims that the “reader who is not convinced by Penrose’s Gödelian arguments is left with little reason to accept his claims that physics is non-computable and that quantum processes are essential to cognition...”  While there is little doubt that Penrose’s claims about consciousness and physics are largely motivated by the Gödelian argument, Penrose thinks that one might be led to such views in the absence of the Gödelian argument (for example, Penrose (1994) appeals to Libet’s (1992) work in an effort to show that consciousness cannot be explained by classical physics).  Some (such as Maudlin (1995)) doubt that there even is a link between the Gödelian argument and Penrose’s claims about consciousness and physics; therefore, even if the Gödelian argument is sound, this might not imply that Penrose’s views about consciousness and physics are true.  Still others have offered objections that directly and specifically attack Penrose’s claims about consciousness and physics, apart from his Gödelian argument; some of these objections are now briefly discussed.

Some have expressed doubts over whether quantum effects can influence neural processes.  Klein (1995: 3.4) states “it will be difficult to find quantum effects in pre-firing neural activity” because the brain operates at too high a temperature and “is made of floppy material (the neural proteins can undergo an enormously large number of different types of vibration).”  Furthermore, Penrose “discusses how microtubules can alter synaptic strengths…but nowhere is there any discussion of the nature of synaptic modulations that can be achieved quantum-mechanically but not classically” (Klein 1995: 3.6).  Also, “the quantum nature of neural activity across the brain must be severely restricted, since Penrose concedes that neural firing is occurring classically” (Klein 1995: 3.6).  In sum, at least given what we know at present, it is far from clear that events occurring at the quantum level can have any effect, or at least much of an effect, on events occurring at the neural level.  Penrose (1994) hopes that the specific properties of microtubules can help overcome such issues.

As mentioned above, the Gödelian argument, if successful, would show that strong artificial intelligence is false, and of course Penrose thinks strong A.I. is false.   However, Chalmers (1995: 4.2) argues that Penrose’s skepticism about artificial intelligence is driven largely by the fact that “it is so hard to see how the mere enaction of a computation should give rise to an inner subjective life.”  But it isn’t clear how locating the origin of consciousness in quantum processes that occur in microtubules is supposed to help: “Why should quantum processes in microtubules give rise to consciousness, any more than computational processes should?  Neither suggestion seems appreciably better off than the other” (ibid).  According to Chalmers, Penrose has simply replaced one mystery with another.  Chalmers (1995: 4.3) feels that “by the end of the book the “Missing Science of Consciousness” seems as far off as it ever was.”

Baars (1995) has doubts that consciousness is even a problem in or for physics (of course, some philosophers have had similar doubts).  Baars (1995: 1.3) writes,

The…beings we see around us are the products of billions of years of biological evolution. We interact with them – with each other – at a level that is best described as psychological. All of our evidence regarding consciousness …would seem to be exclusively psychobiological.

Furthermore, Baars cites much promising current scientific work on consciousness, points out that some of these current theories have not yet been disproven, notes that, relatively speaking, our attempt to explain consciousness scientifically is still in its infancy, and concludes that “Penrose's call for a scientific revolution seems premature at best” (Baars 1995: 2.3).  Baars is also skeptical of the claim that the solution to the problem of consciousness will come from quantum mechanics specifically.  He claims “there is no precedent for physicists deriving from [quantum mechanics] any macro-level phenomenon such as a chair or a flower…much less a nervous system with 100 billion neurons” (Baars 1995: 4.2) and remarks that it seems to be a leap of faith to think that quantum mechanics can unravel the mystery of consciousness.

4. Gödel’s Own View

One interesting question that has not yet been addressed is: what did Gödel think his first incompleteness theorem implied about mechanism and the mind in general?  Gödel, who discussed his views on this issue in his famous “Gibbs lecture” in 1951, stated,

So the following disjunctive conclusion is inevitable: Either mathematics is incompletable in this sense, that its evident axioms can never be comprised in a finite rule, that is to say, the human mind (even within the realm of pure mathematics) infinitely surpasses the powers of any finite machine, or else there exist absolutely unsolvable diophantine problems of the type specified . . . (Gödel 1995: 310).

That is, his result shows that either (i) the human mind is not a Turing machine or (ii) there are certain absolutely unsolvable mathematical problems.  However, Lucas (1998: paragraph 1) goes even further and argues “it is clear that Gödel thought the second disjunct false,” that is, Gödel “was implicitly denying that any Turing machine could emulate the powers of the human mind.”  So, perhaps the first thinker to endorse a version of the Lucas-Penrose argument was Gödel himself.
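
In schematic form (the notation is added here, not Gödel’s own), the disjunction is:

```latex
% Goedel's disjunction, schematically (notation added here).
\[
  \underbrace{\text{the human mind surpasses every finite machine}}_{\text{anti-mechanism}}
  \;\lor\;
  \underbrace{\text{there exist absolutely unsolvable Diophantine problems}}_{\text{second disjunct}}
\]
% On Lucas's (1998) reading, Goedel rejected the second disjunct,
% leaving the anti-mechanist first disjunct.
```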

5. Other Anti-Mechanism Arguments

Finally, there are some anti-mechanism arguments that are alternatives to Lucas-Penrose; two are briefly mentioned here.  McCall (1999) has formulated an interesting argument.  A Turing machine can only know what it can prove, so, for a Turing machine, provability would be tantamount to truth.  But Gödel’s theorem seems to imply that truth is not always provability.  The human mind can handle cases in which truth and provability diverge; a Turing machine cannot.  But then we cannot be Turing machines.  A second alternative anti-mechanism argument is formulated in Cogburn and Megill (2010).  They argue that, given certain central tenets of Intuitionism, the human mind cannot be a Turing machine.
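
McCall’s point about divergence can be put schematically; the notation below is added here and is not McCall’s own symbolism.

```latex
% Schematic of McCall's divergence point (notation added here).
% For a sound, recursively axiomatized theory T extending arithmetic:
\[
  \vdash_{T} \varphi \;\Longrightarrow\; \varphi \text{ is true},
  \qquad \text{yet} \qquad
  G(T) \text{ is true and } T \nvdash G(T).
\]
% A machine identified with T can affirm only what T proves, so for it
% truth and provability cannot come apart; we, by contrast, can
% recognize G(T) as a case in which they do.
```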

6. References and Further Reading

  • Benacerraf, P. (1967). “God, the Devil, and Gödel,” Monist 51:9-32.
    • Makes a number of objections to Lucas’s argument; for example, the complexity of the human mind implies that we might be unable to formulate our own Gödel sentence.
  • Boyer, D. (1983). “J. R. Lucas, Kurt Gödel, and Fred Astaire,” Philosophical Quarterly 33:147-59.
    • Argues, among other things, that human output is finite and so can be simulated by a Turing machine.
  • Chalmers, D. J. (1995). “Minds, Machines, and Mathematics,” Psyche 2:11-20.
    • Contra Penrose, we cannot know that we are sound.
  • Coder, D. (1969). “Gödel’s Theorem and Mechanism,” Philosophy 44:234-7.
    • Not everyone can understand Gödel, so Lucas’s argument does not apply to everyone.
  • Cogburn, J. and Megill, J. (2010).  “Are Turing machines Platonists?  Inferentialism and the Philosophy of Mind,” Minds and Machines 20(3): 423-40.
    • Intuitionism and Inferentialism entail the falsity of the Computational Theory of Mind.
  • Dennett, D.C. (1972). “Review of The Freedom of the Will,” The Journal of Philosophy 69:527-31.
    • Discusses Lucas’s The Freedom of the Will, and specifically his Gödelian argument.
  • Dennett, D.C. and Hofstadter, D. R. (1981). The Mind's I: Fantasies and Reflections on Self and Soul. New York: Basic Books.
    • Contains Hofstadter’s discussion of the Whiteley sentence.
  • Feferman, S. (1995). “Penrose's Gödelian Argument,” Psyche 2(7).
    • Points out some technical mistakes in Penrose’s discussion of Gödel’s first theorem.  Penrose responds in his (1996).
  • Gödel, K. (1931). “Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I,” Monatshefte für Mathematik und Physik 38:173-198.
    • Gödel’s first incompleteness theorem.
  • Gödel, K. (1995). Collected Works III (ed. S. Feferman). New York: Oxford University Press.
    • Gödel discusses his first theorem and the human mind.
  • Hutton, A. (1976). “This Gödel is Killing Me,” Philosophia 6:135-44.
    • Probabilistic arguments that show that we can’t know we are consistent.
  • Klein, S.A. (1995). “Is Quantum Mechanics Relevant to Understanding Consciousness?,” Psyche 2(3).
    • Questions Penrose’s claims about consciousness arising from the quantum mechanical realm.
  • Lewis, D. (1969). “Lucas against Mechanism,” Philosophy 44:231-3.
    • Lucas cannot produce all of “Lucas Arithmetic.”
  • Libet, B. (1992). “The Neural Time-factor in Perception, volition and free will,” Review de Metaphysique et de Morale 2:255-72.
    • Penrose appeals to Libet to show that classical physics cannot account for consciousness.
  • Lucas, J. R. (1961). “Minds, Machines and Gödel,” Philosophy 36:112-127.
    • Lucas’s first article on the Gödelian argument.
  • Lucas, J. R. (1968). “Satan Stultified: A Rejoinder to Paul Benacerraf,” Monist 52:145-58.
    • A response to Benacerraf’s (1967).
  • Lucas, J. R. (1970a). “Mechanism: A Rejoinder,” Philosophy 45:149-51.
    • Lucas’s response to Coder (1969) and Lewis (1969).
  • Lucas, J. R. (1970b). The Freedom of the Will. Oxford: Oxford University Press.
    • Discusses and defends the Gödelian argument.
  • Lucas, J. R. (1976). “This Gödel is killing me: A rejoinder,” Philosophia 6:145-8.
    • Lucas’s reply to Hutton (1976).
  • Lucas, J. R. (1990). “Minds, Machines and Gödel: A Retrospect.”  A paper read to the Turing Conference at Brighton on April 6th.
    • Overview of the debate; Lucas considers numerous objections to his argument.
  • Lucas, J. R. (1996).  “The Godelian Argument: Turn Over the Page.”  A paper read at a BSPS conference in Oxford.
    • Another overview of the debate.
  • Lucas, J. R. (1998).  “The Implications of Gödel’s Theorem.”  A paper read to the Sigma Club.
    • Another overview.
  • Martin, J. and Engleman, K. (1990). “The Mind’s I Has Two Eyes,” Philosophy 65:510-16.
    • More on the Whiteley sentence.
  • Maudlin, T. (1995).  “Between the Motion and the Act…,” Psyche 2:40-51.
    • There is no connection between Penrose’s Gödelian argument and his views on consciousness and physics.
  • McCall, S. (1999).  “Can a Turing Machine Know that the Gödel Sentence is True?” Journal of Philosophy 96(10):525-32.
    • An anti-mechanism argument.
  • McCullough, D. (1995). “Can Humans Escape Gödel?” Psyche 2:57-65.
    • Among other things, doubts that we know we are sound.
  • McDermott, D. (1995). “Penrose is Wrong,” Psyche 2:66-82.
    • Criticizes Penrose on a number of issues, including the soundness of mathematicians.
  • Nagel, E. and Newman, J.R. (1958).  Gödel’s Proof.  New York: New York University Press.
    • Short and clear introduction to Gödel’s first incompleteness theorem.
  • Penrose, R. (1989). The Emperor's New Mind. Oxford: Oxford University Press.
    • Penrose’s first book on the Gödelian argument and consciousness.
  • Penrose, R. (1994).  Shadows of the Mind.  Oxford: Oxford University Press.
    • Human reasoning cannot be captured by a formal system; consciousness arises from the quantum realm; we need a revolution in physics to fully understand consciousness.
  • Penrose, R. (1996). “Beyond the Doubting of a Shadow,” Psyche 2(23).
    • Responds to various criticisms of his (1994).
  • Priest, G. (2003). “Inconsistent Arithmetic: Issues Technical and Philosophical,” in Trends in Logic: 50 Years of Studia Logica (eds. V. F. Hendricks and J. Malinowski), Dordrecht: Kluwer Academic Publishers.
    • Discusses paraconsistent logic.
  • Putnam, H. (1960). “Minds and Machines,” Dimensions of Mind. A Symposium (ed. S. Hook). London: Collier-Macmillan.
    • Raises the consistency issue for Lucas.
  • Rogers, H. (1957). Theory of Recursive Functions and Effective Computability (mimeographed).
    • Early mention of the issue of consistency for Gödelian arguments.
  • Wang, H. (1981).  Popular Lectures on Mathematical Logic. Mineola, NY: Dover.
    • Textbook on formal logic.
  • Whitehead, A. N. and Russell, B. (1910, 1912, 1913). Principia Mathematica, 3 vols, Cambridge: Cambridge University Press.
    • An attempt to base mathematics on logic.
  • Whiteley, C. (1962). “Minds, Machines and Gödel: A Reply to Mr. Lucas,” Philosophy 37:61-62.
    • Humans are limited in ways similar to machines.
  • Wright, C. (1995).  “Intuitionists are Not Turing Machines,” Philosophia Mathematica 3(1):86-102.
    • An intuitionist who advances the Lucas-Penrose argument can overcome the worry over our consistency.

Author Information

Jason Megill
Email: jmegill@carroll.edu
Carroll College
U. S. A.

Synesthesia

The word “synesthesia,” or “synaesthesia,” has its origin in the Greek roots syn, meaning union, and aesthesis, meaning sensation: a union of the senses.  Many researchers use the term “synesthesia” to refer to a perceptual anomaly in which a sensory stimulus associated with one perceptual modality automatically triggers a second, insuppressible sensory experience that is usually, but not always, associated with a different perceptual modality, as when musical tones elicit the visual experience of colors (“colored-hearing”).  Other researchers consider additional unusual correspondences under the category of synesthesias, including the automatic associations of specific objects with genders, ascriptions of unique personalities to numbers, and the involuntary assignment of spatial locations to months or days of the week.  Many synesthetes experience more than one cross-modal correspondence, and others who have unusual cross-modal sensory experiences also have some non-sensory correspondences such as those mentioned above.

Researchers from fields as varied as neurology, neuroscience, psychology and aesthetics have taken an interest in the phenomenon of synesthesia.  Consideration of synesthesia has also shed light on important subjects in philosophy of mind and cognitive science.  For instance, one of the most widely discussed problems in recent philosophy of mind has been to determine how consciousness fits with respect to physical descriptions of the world.  Consciousness refers to the seemingly irreducible subjective feel of ongoing experience, or the character of what it is like.  Philosophers have attempted to reduce consciousness to properties that will ultimately be more amenable to physical characterizations such as representational or functional properties of the mind.  Some philosophers have argued that reductive theories such as representationalism and functionalism cannot account for synesthetic experience.

Another metaphysical project is to provide an account of the nature of color.  There are two main types of views on the nature of color.  Color objectivists take color to be a real feature of the external world.  Color subjectivists take color to be a mind-dependent feature of the subject (or the subject’s experience).  Synesthesia has been used as a counter-example to color objectivism.  Not everyone agrees, however, that synesthesia can be employed to this end.  Synesthesia has also been discussed with regard to the issue of what properties perceptual experiences can represent objects as having (for example, colors).  The standard view is that color experiences represent objects as having color properties, but a special kind of grapheme-color synesthesia may show that color experience can signify numerical value.  If this is right, it shows that perceptual experiences can represent so-called “high-level” properties.

Synesthesia may also be useful in arbitrating the question of how mental processing can be so efficient given the abundance of mentally stored information and the wide variety of problems that we encounter, each of which must require a highly specific, albeit different, processing solution.  The modular theory of mind is a theory about mental architecture and processing aimed at solving these problems.  On the modular theory, at least some processing is performed in informationally encapsulated sub-units that evolved to perform unique processing tasks.  Synesthesia has been used as support for mental modularity in several different ways.  While some argue that synesthesia is due to an extra module, others argue that synesthesia is better explained as a breakdown in the barrier that keeps information from being shared between modules.

This article begins with an overview of synesthesia followed by a discussion of synesthesia as it has been relevant to philosophers and cognitive scientists in their discussions of the nature of consciousness, color, mental architecture, and perceptual representation, as well as several other topics.

Table of Contents

  1. Synesthesia
  2. Consciousness
    1. Representationalism
    2. Functionalism
  3. Modularity
  4. Theories of Color
  5. An Extraordinary Feature of Color-Grapheme Synesthesia
  6. Wittgenstein’s Philosophical Psychology
  7. Individuating the Senses
  8. Aesthetics and “Literary Synesthesia”
  9. Synesthesia and Creativity
  10. References and Further Reading

1. Synesthesia

Most take synesthesia to be a relatively rare perceptual phenomenon. Reports of prevalence vary, however, from 1 in 25,000 (Cytowic, 1997) to 1 in 200 (Galton, 1880), to even 1 in 20 (Simner et al., 2006).  It typically involves inter-modal experiences, as when a sound triggers a concurrent color experience (a photism), but it can also occur within modalities.  For example, in grapheme-color synesthesia the visual experience of alpha-numeric graphemes, such as a “4” or a “g,” induces color photisms.  These color photisms may appear to the synesthete as located inside the mind, in the peri-personal space surrounding the synesthete’s body (Grossenbacher & Lovelace, 2001), or as being projected right where the inducing grapheme is situated, perhaps as if a transparency were placed on top of the grapheme (Dixon, et al., 2004).  Reported cross-modal synesthesias also include olfactory-tactile (where a scent induces a tactile experience such as of smoothness), tactile-olfactory, taste-color, taste-tactile and visual-olfactory, among others.  It is not clear which of these is most common.  Some researchers report that colored-hearing is the most commonly occurring form of synesthesia (Cytowic, 1989; Harrison & Baron-Cohen, 1997), and others report that approximately 68% of synesthetes have the grapheme-color variety (Day, 2005).  Less common forms include sound-olfactory, taste-tactile and touch-olfactory.  In recent years, synesthesia researchers have increasingly been attending to associations that don’t fit the typical synesthesia profile of cross activations between sensory modalities, such as associations of specific objects with genders, ascriptions of unique personalities to particular numbers, and the involuntary assignment of spatial locations to months or days of the week.  Many synesthetes report having these unusual correspondences in addition to cross-modal associations.

Most studied synesthesias are assumed to have genetic origins (Asher et al., 2009).  It has long been noted that synesthesia tends to run in families (Galton, 1883), and the higher proportion of female synesthetes has led some to speculate that it is carried by the X chromosome (Cytowic, 1997; Ward & Simner, 2005).  However, there are also reports of acquired synesthesias induced by drugs such as LSD or mescaline (Rang & Dale, 1987) or resulting from neurologic conditions such as epilepsy, trauma or other lesion (Cytowic, 1997; Harrison & Baron-Cohen, 1997; Critchley, 1997).  Recent studies suggest it may even be brought on through training (Meier & Rothen, 2009; Proulx, 2010) or post-hypnotic suggestion (Kadosh et al., 2009).  Another hypothesis is that synesthesia may have both genetic and developmental origins.  Additionally, some researchers propose that synesthesia may arise in genetically predisposed children in response to demanding learning tasks such as the development of literacy.

Up until very recently, the primary evidence for synesthesia has come from introspectively based verbal reports.  According to Harrison and Baron-Cohen (1997), synesthesia was late to become a subject of scientific interest because the previously prevailing behaviorists rejected the importance of subjective phenomena and introspective report.  Some other researchers continue to downplay the reality of synesthesia, claiming that triggered concurrents are likely ideational in character rather than perceptual (for discussion and criticism of this view see Cytowic, 1989; Harrison, 2001; Ramachandran & Hubbard, 2001a).  One hypothesis is that synesthetic ideas result from learned associations that are so vivid in the minds of synesthetes that subjects mistakenly construe them to be perceptual phenomena.  As psychology swung from behaviorism back to mentalism, however, subjective experience became more accepted as an area of scientific inquiry.  In recent years, scientists have begun to study aspects of subjectivity, such as the photisms of synesthetes, using third-person methods of science.

Recent empirical work on synesthesia suggests its perceptual reality.  For example, synesthesia is thought to influence attention (Smilek et al., 2003). Moreover, synesthetes have long reported that photisms can aid with memory (Luria, 1968).  And indeed, standard memory tests show synesthetes to be better with recall where photisms would be involved (Cytowic 1997; Smilek et al., 2002).

Other studies aimed at confirming the legitimacy of synesthesia have demonstrated that genuine synesthesia can be distinguished from other common types of learned associations in that it is remarkably consistent; over time synesthetes’ sensation pairings (for example, the grapheme 4 with the color blue) remain stable across multiple testings whereas most learned associations do not.  Synesthetes tested and retested to confirm consistency of pairings on multiple occasions, at an interval of years and without warning, exhibit consistency as high as 90% (Baron-Cohen, et al., 1987).  Non-synesthete associators are not nearly as consistent.
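
As an illustration of the consistency measure, a test-retest stability score for grapheme-color pairings might be computed along the following lines; this is a hypothetical sketch, and the function and data are invented for illustration rather than drawn from Baron-Cohen et al.’s actual protocol.

```python
# Hypothetical sketch of a test-retest consistency score for
# grapheme-color pairings (illustrative only; not the actual protocol
# of Baron-Cohen et al., 1987).

def consistency_score(test1: dict[str, str], test2: dict[str, str]) -> float:
    """Return the fraction of graphemes assigned the same color in both sessions."""
    shared = set(test1) & set(test2)
    if not shared:
        return 0.0
    matches = sum(1 for g in shared if test1[g] == test2[g])
    return matches / len(shared)

# Invented example: a synesthete retested after an interval of a year,
# without warning.
session_a = {"4": "blue", "g": "crimson", "7": "green", "A": "red"}
session_b = {"4": "blue", "g": "crimson", "7": "green", "A": "orange"}

print(f"consistency: {consistency_score(session_a, session_b):.0%}")
# Genuine synesthetes exhibit consistency as high as 90%; non-synesthete
# associators score far lower over comparable intervals.
```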

Grouping experiments are used to distinguish between perceptual and non-perceptual features of experience (Beck, 1966; Treisman, 1982).  In common grouping experiments, subjects view a scene comprised of vertical and tilted lines.  In perception, the tilted and vertical lines appear grouped separately.  Studies seem to show some grapheme-color synesthetes to be subject to pop-out and grouping effects based on colored photisms (Ramachandran & Hubbard, 2001a, b; Edquist et al., 2006).  If an array of 2’s in the form of a triangle is hidden within a field of distracter graphemes such as 5’s, the 2’s may “pop out,” or appear immediately and saliently in experience as forming a triangle, so long as the photism color ascribed to the 2’s is incongruent with that of the 5’s (Ramachandran and Hubbard, 2001b).

Some take these studies to show that, for at least some synesthetes, the concurrent colors are genuinely perceptual phenomena arising at a relatively early pre-conscious stage of visual processing, rather than associated ideas, which would arise later in processing.

Another study often cited as substantiating the perceptual reality of synesthetic photisms shows that synesthetes are subject to Stroop effects on account of color photisms.  Synesthetes shown a hand displaying several fingers were quicker at identifying the quantity of fingers displayed when the fingers were colored to match the photism they typically associate with that quantity than when the fingers were painted an incongruent color (Ward and Sagiv, 2007).
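
The logic of such congruency comparisons can be sketched in a few lines of code; the sketch below is hypothetical, with invented reaction times, and is meant only to illustrate the analysis, not to reproduce Ward and Sagiv’s procedure.

```python
# Hypothetical sketch of a synesthetic Stroop congruency analysis
# (invented data; for illustration, not Ward and Sagiv's procedure).
from statistics import mean

# Reaction times (ms) for quantity-naming trials, keyed by whether the
# displayed color matched (congruent) or clashed with (incongruent) the
# photism associated with that quantity.
trials = [
    ("congruent", 512), ("congruent", 498), ("congruent", 530),
    ("incongruent", 601), ("incongruent", 645), ("incongruent", 588),
]

def mean_rt(condition: str) -> float:
    """Mean reaction time for one congruency condition."""
    return mean(rt for cond, rt in trials if cond == condition)

interference = mean_rt("incongruent") - mean_rt("congruent")
print(f"Stroop interference: {interference:.0f} ms")
# A reliably positive interference effect is taken as evidence that the
# photisms are automatic enough to compete with the veridical colors.
```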

Finally, Smilek et al. (2001) have conducted a study with a synesthete they refer to as “C” that suggests the perceptual reality of synesthesia.  In the study, graphemes are presented individually against backgrounds that are either congruent or incongruent with the photism associated with the grapheme.  If graphemes really are experienced as colored, then synesthetes should find them more difficult to discern when they are presented against congruent backgrounds.  C did indeed have difficulty discerning the grapheme on congruent but not incongruent trials.  In a similar study, C was shown a digit “2” or “4” hidden in a field of other digits.  Again, the background was either congruent or incongruent with the photism C associated with the target digit.  C had difficulty locating the target digit when the background was congruent with the target’s photism color, but not when it was incongruent.

Nevertheless, another set of recent studies could be seen as calling into question whether some of the above studies really demonstrate the perceptual reality of synesthesia.  Meier and Rothen (2009) have shown that non-synesthetes trained over several weeks to associate specific numbers and colors behave similarly to synesthetes on synesthetic Stroop studies.  The colors that the non-synesthetes were taught to associate with certain graphemes interfered with their ability to identify target graphemes.  Moreover, Kadosh et al. (2009) have shown that highly suggestible non-synesthetes report abnormal cross-modal experiences similar to congenital synesthetes and behave similarly to Smilek’s synesthete C on target identification after receiving post-hypnotic suggestions aimed to trigger grapheme-color pairings.  Some researchers conclude from these studies that genuine synesthetic experiences can be induced through training or hypnosis.  But it isn’t clear that the evidence warrants this conclusion as the results are consistent with the presence of merely strong non-perceptual associations.  In the cases of post-hypnotic suggestion, participants may simply be behaving as if they experienced genuine synesthesia.  An alternative conclusion to draw from these studies might be that Stroop and the identification studies conducted with C do not demonstrate the perceptual reality of synesthesia.  Nonetheless, it has not been established that training and hypnotism can replicate all the effects, such as the longevity of associations in “natural” synesthetes, and few doubt that synesthetes experience genuine color photisms in the presence of inducing stimuli.

For most grapheme-color synesthetes, color photisms are induced by the formal properties of the grapheme (lower synesthesia).  In some, however, color photisms can be correlated with high-level cognitive representations specifying what the grapheme is taken to represent (higher synesthesia).  Higher synesthesia can be distinguished from lower synesthesia by several testable behaviors.

First, individuals with higher synesthesia frequently have the same synesthetic experiences (for example, see the same colors) in response to multiple inducers that share meaning—for instance, 5, V, and an array of five dots may all induce a green photism (Ramachandran & Hubbard, 2001b; Ward & Sagiv, 2007).  Second, some higher grapheme-color synesthetes will experience color photisms both when they are veridically perceiving an external numeral and when they are merely imagining or thinking about the numerical concept.  Dixon et al. (2000) showed one synesthete the equation “4 + 3” followed by a color patch.  Their participant was slower at naming the color of the patch when it was incongruent with the photism normally associated with the number that is the solution to the equation.  If thinking about the numerical concept alone induces a photism, then we should expect just this sort of interference with identifying the patch color.

Moreover, when an individual with higher synesthesia sees a grapheme that is ambiguous, for example a shape that resembles both a 13 and a B, he or she may mark it with different colors when it is presented in different contexts.  For instance, when the grapheme is presented in the series “12, 13, 14,” it may induce one photism, but it may induce a different photism when it is presented in the series “A, 13, C.”  This suggests that it isn’t merely the shape of the grapheme that induces the photism here, but also the ascribed semantic value (Dixon et al., 2006).  Similarly, if an array of smaller “3”s is arranged in the form of a larger “5,” an individual with higher grapheme-color synesthesia may mark the figure with one color photism when attending to it as an array of “3”s, but mark it with a different color photism when attending to it as a single number “5” (Ramachandran & Hubbard, 2000).

2. Consciousness

Some contend that synesthesia presents difficulties for certain theories of mind when it comes to conscious experience, such as representationalism (Wager, 1999, 2001; Rosenberg, 2004) and functionalism (J.A. Gray, 1998, 2003, 2004; J.A. Gray et al., 1997, 2002, 2006).  These claims are controversial and discussed in some depth in the following two sections.

a. Representationalism

Representationalism is the view that the phenomenal character of experience (or the properties responsible for “what it is like” to undergo an experience) is exhausted by, or at least supervenes on, its representational content (Chalmers, 2004).  This means that there can be no phenomenal difference in the absence of a representational difference, and, if two experiential states are indiscernible with respect to representational content, then they must have the same phenomenal character.  Reductive brands of representationalism say that the qualitative aspects of consciousness are just the properties represented in perceptual experience (that is, the representational contents).  For instance, perhaps the conscious visual sensation of a faraway aircraft travelling across the sky is just the representation of a silver object moving across a blue background (Tye, 1995, p.93).

According to Wager (1999, 2001) and Rosenberg (2004), synesthesia shows that phenomenal character does not always depend on representational content, because mental states can be the same representationally but differ in experiential character.  Wager dubs this the “extra qualia” problem (1999, p.268), noting that his objection specifically targets externalist versions of representationalism (p.276), which contend that phenomenal content depends on what the world is like (such that perfect physical duplicates could differ in experiential character given that their environments differ).  Meanwhile, Rosenberg (2004, p.101) employs examples of synesthetes who see colors when feeling pain or hearing loud noises.  According to Rosenberg, there is no difference between the representational content of the synesthete and the ordinary person: in the case of pain, they could both be representing damage to the body of, let us suppose, a certain intensity, location and duration.  Again, the examples are claimed to show that mental states with the same representational content can differ experientially.  However, others reject this sort of argument.

Alter (2006, p.4) argues that Rosenberg’s analysis overlooks plausible differences between the representational contents in question.  A synesthete who is consciously representing bodily damage as, say, orange, is representing pain differently than an ordinary person.  The nature of this representational difference might be understood in more than one way: perhaps the manner in which they represent their intentional objects differs, or, perhaps their intentional objects differ (or both).  In short, it is suggested that the synesthete and the ordinary person are not representationally the same, and it is no threat to representationalism that different kinds of experience represent differently.  To take a trivial case, the conscious difference between touching and seeing a snowball is accounted for in that they represent differently (only one represents the snowball as cold).

Turning to Wager, he considers three cases, all of which concern a synesthete named Cynthia who experiences extra visual qualia in the form of a red rectangle when she hears the note Middle C.  The cases vary according to the version of externalism in question.  Case 1 examines a simple causal co-variation theory of phenomenal content, case 2 a theory that mixes co-variation and teleology (such as Tye’s, 1995), while case 3 concerns a purely teleological account (such as Dretske’s, 1995).  These cases purportedly show that synesthetic and ordinary experience can share the same contents despite the differences in qualitative character.  R. Gray’s (2001a, 2004, pp.68-9) general reply is that synesthetic experience does indeed differ representationally in that it misrepresents.

For example, instead of attributing the redness and rectangularity to Middle C, why not attribute these to a misrepresentation of a red rectangle triggered by the auditory stimulus?  Whether representationalism can supply a plausible account of misrepresentation is an open question; perhaps, however, its problems with synesthesia can be resolved by discharging this explanatory debt.

Regarding case 1, perhaps there is no extra representational content had by Cynthia.  If content is determined by the co-variation of the representation and the content it tracks, then since there is no red rectangle in the external world, perhaps her experience only represents Middle C, just as it does in the case of an ordinary person (Wager, 1999, p.269).  If so, then there would be a qualitative difference in the absence of a representational difference, and this version of representationalism would be refuted.  On the other hand, Wager concedes that the objection might fail if Cynthia has visually experienced red bars in the past, for then her synesthetic experience is arguably not representationally the same as that of an ordinary person hearing Middle C.  This is because it would be open to the externalist to reply that Cynthia’s experience represents the disjunction “red bar or Middle C” (p.270), thus differing from an ordinary person’s.  However, Wager then argues that a synesthete who has never seen red bars because she is congenitally blind (Blind Cynthia) would have the same representational contents as an ordinary person (they would both just represent Middle C), and yet since she would also experience extra qualia, the objection goes through after all.

In reply, R. Gray (2001a, p.342) points out that this begs the question against the externalist, since it assumes that synesthetic color experience does not depend on a background of ordinary color experience.  If this is so, there could not be a congenitally blind synesthete, since whatever internal states Blind Cynthia had would not be representing colors.  Wager has in turn acknowledged this point (2001, p.349), though he maintains that it is more natural to suppose that Blind Cynthia’s experience would nevertheless be very different.  Support for Wager’s view might be found in such examples as color-blind synesthetes who report “Martian” colors inaccessible to ordinary visual perception (Ramachandran and Hubbard, 2003a).

Wager also acknowledges that case 1 overlooks theories allowing representational contents to depend on evolutionary functions, and so the possibility that the blind synesthete functions differently when processing Middle C needs to be examined.  This leads to the second and third cases.

Case 2 is designed around Tye’s hybrid theory, according to which phenomenal character depends on evolutionary functions for beings that evolved, and causal co-variation for beings that did not--such as Swampman (your perfect physical duplicate who just popped into existence as a result of lightning striking swamp material).  Wager argues that on Tye’s view Middle C triggers an internal state with the teleological function of tracking red in the congenitally blind synesthete.  Hence Tye can account for the idea that Blind Cynthia would be representing differently than an ordinary person.

However, now the problem is that it seems the externalist must, implausibly, distinguish between the phenomenal contents of the hypothetical blind synesthete and a blind Swampsynesthete (Blind Swamp Cynthia) when they each experience Middle C.  Recall that Tye’s theory does not allow teleology to be used to account for representational contents in Swampperson cases.  But if Tye falls back on causal co-variation, the problem discussed in the first case returns.  Since the blind Swampsynesthete’s causal tracking of Middle C does not differ from that of an ordinary person, externalism seems committed to saying that their contents and experiences do not differ—that is, since Blind Swamp Cynthia’s state reliably co-varies with Middle C, not red, it cannot be a phenomenal experience of red.

This, however, is not the end of the matter.  R. Gray could try to recycle his reply that there could not be a blind synesthete (whether of swampy origins or not) since synesthesia is parasitic on ordinary color experience.  Still another response offered on behalf of Tye (Gray, 2001a, p.343) is that Wager fails to take note of the role played by “optimal” conditions in Tye’s theory.  Where optimal conditions fail to obtain, co-variation is mere misrepresentation.  But what counts as optimal and how do we know it?  Perhaps optimal conditions would fail to obtain if the co-varying relationships are one-many (that is, if an internal state co-varies with many stimuli, or, a stimulus co-varies with many internal states, Gray, 2001a, p.343).  Such may be the case for synesthetes, and if so, then synesthetic experience would misrepresent and so differ in content.  On the other hand, Wager disputes Gray’s conception of optimal conditions (2001, p.349) arguing that Tye himself accepts they can obtain in situations where co-variation is one-many.  In addition, Wager (2001, p.349) contends Blind Swamp Cynthia’s co-varying relationship is not one-many since her synesthetic state co-varies only with Middle C.  As for Gray’s claim that optimal conditions fail for the Blind Swamp Cynthia because Middle-C co-varies with too many internal states, Wager (2001, p.349) responds that optimal conditions should indeed obtain—for it is plausible that a creature with a backup visual system could have multiple independent states co-varying with, and bearing content about, a given stimulus.  To this, however, it can be replied that having primary and backup states with content says nothing about whether the content of the backup state is auditory or visual; in other words, does Blind Swamp Cynthia both hear and synesthetically see Middle C, or, does she just hear it by way of multiple brain states (cf. Gray, 2001a, pp.343-344)?  While this summary does not exhaust the debate between Wager and Gray, the upshot for case 2 seems to turn on contentious questions about optimal conditions: what are they, and how do we know when they obtain or fail to obtain?

Finally, Case 3 considers the view that phenomenal content always depends on the state’s content tracking function as determined by natural selection.  Hence, an externalist such as Dretske could maintain that the blind synesthete undergoes a misfiring of a state that is supposed to indicate the presence of red, not Middle C.  Wager’s criticism here concerns a hypothetical case whereby synesthesia comes to acquire the evolutionary function of representing Middle C while visual perception has faded from the species though audition remains normal.  This time the problem is that it seems plausible that two individuals with diverging evolutionary histories could undergo the same synesthetic experience, but according to the externalist their contents would differ (Wager, 1999, p.273).  Perhaps worse, it follows from externalism that a member of this new synesthetic species listening to Middle C would have the very same content and experience as an ordinary member of our own species.

R. Gray replies that he does not see why the externalist must agree that synesthesia has acquired an evolutionary function just because it is adaptive (2001a, p.344).  Returning to his point about cases 1 & 2, synesthesia might well result from a breakdown in the visual system, and saying that it has no function is compatible with saying that it is fitness-enhancing.  If synesthesia does not have a teleological function, then a case 3 externalist can deny that the mutated synesthete’s contents are indiscernible with respect to those of an ordinary person.

And yet even if R. Gray is right that the case for counting synesthesia as functional is inconclusive, it seems at least possible that some being could evolve states with the function of representing Middle C synesthetically.  Whether synesthesia is a bug or a feature depends on, as Gray acknowledges, evolutionary considerations (p.345; see also Gray, 2001b), so Wager need only appeal to the possible world in which those considerations favor his interpretation, and he can have his counterexample to externalist representationalism (cf. Wager, 2001, p.348).

On the other hand, and as R. Gray notices, Wager’s strongest cases are not drawn from the real world – and so his objections likewise turn on the very sort of controversial “thought experiments and intuitions about possibility” he aims to distance his own arguments from (Wager, 1999, p.264).  Consider that for case 3 externalists, since Swamppeople don’t have evolutionary functions, they are unconscious zombies.  Anybody who is willing to accept that outcome will probably not be troubled by Wager’s imagined examples about synesthetes.  After all, someone who thinks having no history makes one a zombie already believes that differing evolutionary histories can have a dramatic impact on the qualitative character of experience.  In short, a lot rides on whether synesthesia in fact is the result of malfunction, or the workings of a separate teleofunctional module.

Finally, the suggestion that representational properties can explain the “extra-qualia” in synesthesia courts controversy given worries about whether this is consilient with synesthetes’ self-reports (that is, would further scrutiny of the self-reports strongly support claims about additional representational content?).  There is also general uncertainty as to what evidential weight these reports ought to be granted.  Despite Ramachandran and Hubbard’s enthusiasm for the method of, “probing the introspective phenomenological reports of these subjects” (2001b, p.7, n.3), they acknowledge skepticism on the part of many psychologists about this approach.

b. Functionalism

Synesthesia might present difficulties for the functionalist theory of mind’s account of conscious experience.  Functionalism defines mental states in terms of their functions or causal roles within cognitive systems, as opposed to their intrinsic character (that is, regardless of how they are physically realized or implemented).  Here, mental states are characterized in terms of their mediation of causal relationships obtaining between sensory input, behavioral output, and each other.  For example, an itch is a state caused by, inter alia, mosquito bites, and which results in, among other things, a tendency to scratch the affected area.  As a theory of consciousness, functionalism claims that the qualitative aspects of experience are constituted by (or at least determined by) functional roles (for example, Lycan, 1987).

In a series of articles, J.A. Gray has argued that synesthesia serves as a counter-example to functionalism, as well as to Hurley and Noë’s (2003a) specific hypothesis that sensorimotor patterns best explain variations in phenomenal experience.

Hurley and Noë’s theory employs a distinction between what they call “deference” and “dominance.”  Sensory deference occurs when experiential character conforms to cortical role rather than sensory input, and dominance the reverse.  Sometimes, nonstandard sensory inputs “defer” to cortical activity, as when the stimulation of a patient’s cheek is felt as a touch on a missing arm.  Here cortex “dominates,” in the sense that it produces the feel of the missing limb, despite the unusual input.  One explanation is that nerve impulses arriving at the cortical region designated for producing the feel of a touch on the cheek “spill over,” triggering a neighboring cortical region assigned to producing sensation of the arm.  But the cortex can also “defer” to nonstandard input, as in the case of tactile qualia experienced by Braille readers corresponding to activity in the visual cortex.  J.A. Gray (2003, p.193) observes that cortical deference, not dominance, is expected given functionalism, since the character of a mental state is supposed to depend on its role in mediating inputs and outputs.  If that efferent-afferent mediating role changes, then the sensory character of the state should change with it.

Hurley and Noë (2003a) propose that cortical regions implicated in one sensory modality can shift to another (and, thus be dominated by input) if there are novel sensorimotor relationships available for exploitation.  For support they point out that the mere illusion of new sensorimotor relationships can trigger cortical deference.  Such is the case with phantom limb patients who can experience the illusion of seeing and moving a missing limb with the help of an appropriately placed mirror.  In time, the phantom often disappears, leading to the conjecture that the restored sensory-motor feedback loop dominates the cortex, forcing it to give up its old role of producing sensation of the missing limb.

Hurley and Noë (2003a, p.160) next raise a worry for their theory concerning synesthesia.  Perceptual inputs are “routed differently” in synesthetes, as in the case of an auditory input fed to both auditory and visual cortex in colored hearing (p.137).  This is a case of intermodal cortical dominance, since the nonstandard auditory input “defers” to the visual cortex’s ordinary production of color experience.  But theirs is a theory assuming intermodal deference; that is, qualia are supposed to be determined by sensory inputs, not cortex (pp.140, 160).  It would appear that the visual cortex should not be stuck in the role of producing extra color qualia if their account is correct.

Hurley and Noë believe synesthesia raises a puzzle for any account of color experience, namely, why color experience defers to the colors of the world in some cases but not others.  For example, subjects wearing specially tinted goggles devised by Kohler at first see one side of the world as yellow, the other as blue.  However, color experience adapts, and the subjects eventually report that the world looks normal once more (so a white object still looks white even as it passes from the yellow-tinted to the blue-tinted half of the visual field).  On the other hand, synesthetic colors differ in that they “persist instead of adapting away.”

J.A. Gray points out that since colored hearing emerges early in life, there should be many opportunities for synesthetes to explore novel sensorimotor contingencies, such as conflicts between heard color names and the elicited “alien” qualia, a phenomenon reminiscent of the Stroop effect, in which it takes longer to say “blue” when the word is printed in red ink (Gray, et al., 2006; see also Hurley and Noë, 2003a, p.164, n.27).  Once again, why isn’t the visual cortex dominated by these sensorimotor loops and forced to cease producing the alien colors?  Gray (2003, p.193) calls this a “major obstacle” for Hurley and Noë’s theory, since the visual cortex stubbornly refuses to yield to sensorimotor dominance.

In reply, Hurley and Noë have suggested that synesthetes are relatively impoverished with respect to their sensorimotor contingencies (2003a, pp.160, 165, n.27).  For example, unlike the case of normal subjects, where unconsciously processed stimuli can influence subsequent judgment, synesthetic colors need to be consciously perceived for there to be priming effects.  In short, the input-output relationships might not be robust enough to trigger cortical deference.  Elsewhere, Noë and Hurley (2003, p.195) propose that deference might fail to occur because the synesthetic function of the visual cortex is inextricably dependent on normal cortex functioning.  Whether sensorimotor accounts of experience can accommodate synesthesia is a matter of ongoing debate and cannot be decided here.

J.A. Gray, as mentioned earlier, also thinks synesthesia (specifically, colored hearing) poses a broader challenge to functionalism, since it shows that function and qualia come apart in two ways (2003, p.194).  His first argument contends that a single quale is compatible with different functions: seeing and hearing are functionally different, and yet either modality can result in exactly the same color experience (see also Gray, et al., 2002, 2006).  A second argument claims that different qualia are compatible with the same function.  Hearing is governed by only one set of input-output relationships, but gives rise to both auditory and visual qualia in the colored-hearing synesthete (Gray, 2003, p.194).

Functionalist replies to J.A. Gray et al.’s first argument (that is, that there can be functional differences in the absence of qualia differences) are canvassed by Macpherson (2007) and R. Gray (2004).  Macpherson points out (p.71) that a single quale associated with multiple functions is no threat to a “weak” functionalism not committed to the claim that functional differences necessarily imply qualia differences—qualia might be “multiply realizable” at the functional as well as the implementational level (note that qualia differences could still imply functional differences).  She argues further that even for “strong” functionalisms which do assert that the same type of qualitative state cannot be implemented by different functions, the counter-example still fails.  Token mental states of the same type will inevitably differ in terms of some fine-grained causes and effects (two persons can each have the same green visual experience even though the associated functional roles will tend to differ somewhat: green might lead to thoughts of Islam in one person, Ireland in another, ecology in still another, or envy, and so on).  In light of this, a natural way to interpret claims about functional role indiscernibility is to restrict the experience-type-individuating function to a “core” or perhaps “typical” or even “normal” role.  Perhaps a core role operates at a particular explanatory level—somewhat as a Mac and a PC can be functionally indiscernible at the user level while running a web browser, despite differing in terms of their underlying operating systems.  An alternative is to argue that the synesthetic “role” is really a malfunction, and so no threat to the claim that qualia differences imply normal role differences (R. Gray 2004, pp.67-8 offers a broadly similar response).
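
Macpherson’s user-level analogy can be given a minimal sketch (ours, and purely illustrative): two systems that differ in their underlying implementation but are indiscernible with respect to the “core” role they occupy.

    # Two internally different systems occupying the same core role.
    class MacBrowser:
        def fetch(self, url: str) -> str:
            # imagine macOS-specific machinery under the hood
            return f"contents of {url}"

    class PCBrowser:
        def fetch(self, url: str) -> str:
            # imagine Windows-specific machinery under the hood
            return f"contents of {url}"

    def core_role(browser, url: str) -> str:
        # At this explanatory level the two realizations count as one and
        # the same function; the finer-grained differences are invisible.
        return browser.fetch(url)

    assert core_role(MacBrowser(), "example.org") == core_role(PCBrowser(), "example.org")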

As for the other side of J.A. Gray’s challenge, namely that synesthesia shows functional indiscernibility does not imply qualia indiscernibility, Macpherson questions whether there really is qualia indiscernibility between normal and synesthetic experience (2007, p.77).  Perhaps synesthetes only imagine, rather than perceptually experience, colors (Macpherson, 2007, pp.73ff.).  She also expresses doubts about experimental tests utilizing pop-out, and questions the interpretation of brain imaging studies (p.75)—for example, is an active “visual” cortex in colored hearing evidence of visual experience, or evidence that this part of the brain has a non-visual role in synesthetes (cf. Hardcastle, 1997, p.387)?  In short, she contends there are grounds for questioning whether there is a clear case in which the experience of a synesthetic color is just like some non-synesthetic color experience.

Finally, although Macpherson does not make the point, J.A. Gray’s second argument is vulnerable to a response fashioned from her reply to his first argument.  Perhaps the states that differ in qualia are not functionally indiscernible after all, either because core roles are not duplicated or because the synesthetic “role” is really just a malfunction.  To make this more concrete, consider Gray’s example in which hearing the word “train” results in both hearing sound and seeing color (2003, p.194).  He claims that this shows that one-and-the-same function can have divergent qualia.  But this is a hasty inference, and conflates the local auditory uptake of a signal with divergent processing further downstream.  Perhaps there are really two quite different input-output sets involved: the auditory signal is fed to both auditory and visual cortices, after all, and so perhaps a single signal is fed into functionally distinct subsystems, one of which is malfunctioning.  Malfunction or not, the functionalist could thus argue that Gray has not offered an example of a single function resulting in divergent qualia.

3. Modularity

The modular theory of mind, most notably advanced by Jerry Fodor (1983), holds that the mind is composed of multiple sub-units or modules within which representations are processed in a manner akin to the processing of a classical computer.  Processing begins with input to a module, which is transformed into a representational output by inductive or deductive inferences called “computations.”  Modules are individuated by the functions they perform.  The mental processing underlying visual perception, auditory perception, and the like takes place in individual modules that are specially suited to performing the unique processing tasks relevant to each.  One of the main benefits of modularity is thought to be processing efficiency.  The time-cost involved if computations were to have access to all of the information stored in the mind would be considerable.  Moreover, since an organism encounters a wide variety of problems, it would have been economical for independent systems to have evolved for performing different tasks.  Some argue that synesthesia supports the modular theory.  Before discussing how synesthesia is taken as evidence for modularity, it will help to understand a bit more precisely the important role that the concept of modularity plays in psychology.

Many, including Fodor, believe that scientific disciplines reveal the nature of natural kinds.  Natural kinds are thought to be mind-independent natural classes of phenomena that “have many scientifically interesting properties in common over and above whatever properties define the class” (Fodor, 1983, p.46).  Those who believe that there are natural kinds commonly take things such as water, gold, zebras, and penicillin to be instances of natural kinds.  If scientific disciplines reveal the nature of natural kinds, then for psychology to be a bona fide science, the mental phenomena that it takes as its objects of study would also have to be natural kinds.  For those like Fodor, who are interested in categorically delineating special sciences like psychology from more basic sciences, it must be that the laws of the special science cannot be reduced to those of the basic science.  This means that the natural kind terms used in a particular science to articulate that science’s laws cannot be replaced with terms for other, more fundamental natural phenomena.  From this perspective, it is highly desirable to see whether modules meet the criteria for natural kinds.

According to Fodor, in addition to the properties that define specific types of modules, all modules share most, if not all, of the following nine scientifically interesting characteristics:  1. They are subserved by a dedicated neural architecture; that is, specific brain regions and neural structures uniquely perform each module’s task.  2. Their operations are mandatory: once a module receives a relevant input, the subject cannot override or stop its processing.  3. Modules are informationally encapsulated: their processing cannot utilize information from outside of the module.  4. The information from inside the module cannot be accessed by external processing areas.  5. The processing in modules is very quick.  6. Outputs of modules are shallow and conceptually impoverished, requiring only limited expenditure of computational resources.  7. Modules have a fixed pattern of development that, like physical attributes, may most naturally be attributed to a genetic property.  8. The processing in modules is domain specific: a module only responds to certain types of inputs.  9. When modules break down, they tend to do so in characteristic ways.

It counts in favor of a theory if it is able to accommodate, predict, and explain some natural phenomena, including anomalous phenomena.  In this vein, some argue that the modular theory is particularly useful for explaining the perceptual anomaly of synesthesia.  But there are competing accounts of how modularity is implicated in synesthesia.  Some think that insofar as synesthesia has all the hallmarks of modularity, it likely results from the presence of an extra cognitive module (Segal, 1997).  According to the extra-module thesis, synesthetes possess an extra module whose function is the mapping of, for example, sounds or graphemes (input) to color representations (output).  This grapheme-color module would, according to Segal, possess at least most of the nine scientifically interesting characteristics of modules identified by Fodor:

1. There seems to be a dedicated neural architecture, as lexical-color synesthesia appears uniquely associated with multimodal areas of the brain including the posterior inferior temporal cortex and parieto-occipital junctions (Paulesu et al., 1995).  2. Processing is mandatory: once synesthetes are presented with a lexical or grapheme stimulus, the induction of a color photism is automatic and insuppressible.  3. Processing in synesthesia seems encapsulated: information available to the subject that might negate the effect has no influence on processing in the grapheme-color module.  4. The information and processing in the module are not made available outside of the module; for example, the synesthete does not know how the system effects the mapping.  5. Since the processing in synesthesia happens pre-consciously, it meets the rapid-speed requirement.  6. The outputs are shallow: they don’t involve any higher-order, theoretically inferred features, just color.  7. Since synesthesia runs in families, is more prevalent in females, and subjects report having had it for as long as they can remember, synesthesia seems to be heritable, and this suggests that it would have a fixed pattern of development.  Features 8 and 9, domain specificity and a characteristic pattern of breakdown, are the only two that Segal cannot easily attribute to the grapheme-color module.  Segal doesn’t doubt that a grapheme-color module could be found to have domain-specific processing.  But on account of the rarity of synesthesia, he suspects that it may be too hard to find cases where the lexical or grapheme-color module breaks down.  Harrison and Baron-Cohen (1997) and Cytowic (1997), among others, however, note that for some, synesthesia fades with age and has been reported to disappear with stroke or trauma.
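
How these marks hang together can be conveyed by a toy sketch (our construction, not Segal’s or Fodor’s): a grapheme-to-color mapper whose processing is mandatory, domain specific, encapsulated from background belief, and shallow in its output.

    # A toy grapheme-color "module." The mapping table is private
    # (encapsulation, marks 3 and 4), input is type-restricted (domain
    # specificity, mark 8), processing fires unconditionally on relevant
    # input (mandatoriness, mark 2), and the output is a bare color label
    # (shallowness, mark 6).
    class GraphemeColorModule:
        def __init__(self):
            self._mapping = {"5": "red", "A": "green"}  # not consultable from outside

        def process(self, grapheme: str, background_beliefs=()) -> str:
            if not (isinstance(grapheme, str) and len(grapheme) == 1):
                raise ValueError("domain specific: single graphemes only")
            # Background beliefs (e.g., "the ink is black") are deliberately
            # ignored: no outside information can suppress the output.
            return self._mapping.get(grapheme, "no photism")

    module = GraphemeColorModule()
    print(module.process("5", background_beliefs=("the ink is black",)))  # red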

Another explanation for synesthesia that draws on the modular framework is that synesthesia is caused by a breakdown in the barriers that ordinarily keep modules and their information and processing separate (Baron-Cohen et al., 1993; Paulesu et al., 1995).  This failure of encapsulation would allow information from one module to be shared with others.  Perhaps in lexical or grapheme-color synesthesia, information is shared between the speech or text processing module and the color-processing module.  There are two hypotheses for how this might occur.  One hypothesis is that the failure of encapsulation originates with a faulty inhibitory mechanism that normally prevents information from leaking out of a module (Grossenbacher & Lovelace, 2001; Harrison & Baron-Cohen, 1997).  Alternatively, some propose that we are born without modules but that sensory processes are pre-programmed to become modularized.  On this view infants are natural synesthetes, but during the course of normal development extra dendritic connections are pared away, resulting in the modular encapsulation typical of adult cognition (Maurer, 1993; Maurer and Mondloch 2004; see Baron-Cohen 1996 for discussion).  In synesthetes, the normal paring away of extra dendritic connections fails to occur.  Kadosh et al. (2009) claim that the fact that synesthesia can be induced in non-synesthetes post-hypnotically demonstrates that a faulty inhibitory mechanism, rather than excessive dendritic connections, is responsible for synesthesia; given the time frame of their study, new cortical connections could not have been established.

The modular breakdown theory may also be able to explain why synesthesia has the appearance of the nine scientifically interesting characteristics that Fodor identifies with mental modules (R. Gray, 2001b).  If this is right, then what reason is there to prefer either the breakdown theory or the extra module theory over the other?  Gray (2001b) situates this problem within the larger debate between computational and biological frameworks in psychology; he argues that the concept of function is central to settling the issue over which account of synesthesia we should prefer.  His strategy is to first determine what the most desirable view of function is.  Based on this, we can then use empirical means to arbitrate between the extra-module theory and the modular breakdown theory.

On the classical view of modularity developed by Fodor, function is elaborated in purely computational terms.  Computers are closed symbol-manipulating devices that perform tasks merely on account of the dispositions of their physical components.  We can describe a module’s performance of a task by appealing to just the local causal properties of the underlying physical mechanisms.  R. Gray thinks that it is desirable for a functional description to allow for the possibility of a breakdown.  To describe something as having broken down is to understand it as having failed to achieve its proper goal.  The purely computational/causal view of function does not easily accommodate the possibility of a breakdown in processing.

R. Gray promotes an alternative conception of function that he feels better allows for the possibility of breakdown.  Gray’s alternative understanding is compatible with traditional local causal explanations.  But it also considers the role that a trait such as synesthesia would have in facilitating the organism’s ability to thrive in its particular external environment, that is, its fitness utility.  Crucially, Gray finds the elaboration of modules using this theory of function to be compatible with Fodor’s requirement that a science’s kind predicates “are ones whose terms are the bound variables of proper laws” (1974, p. 506).  Assuming such an account, whether synesthesia is the result of an extra module or a breakdown in modularity will ultimately depend on how it contributes to the fitness of individuals.  According to Baron-Cohen, in order to establish that synesthesia results from a breakdown in modularity, it would have to be shown that it detracts from overall fitness.  The problem is that synesthesia has not been shown to compromise the fitness of those who bear the trait.  In contrast, Gray claims that the burden of proof lies with those who propose that synesthesia results from the presence of an extra module: they must show that synesthesia is useful in a particular environment.  But at present, according to Gray, we have no reason to think that it is.  For instance, one indicator that something has a positive fitness benefit for organisms possessing it is the proliferation of that trait in a population.  But synesthesia is remarkably rare (Gray, 2001b).  Gray admits, however, that whether or not synesthesia has such a utility is an open empirical question.

4. Theories of Color

Visual perception seems to, at the very least, provide us with information about colored shapes existing in various spatial locations.  An account of the visual perception of objects should therefore include some account of the nature of color.  Some theorists working on issues pertaining to the nature of color and color experience draw on evidence from synesthesia.

Theories about the nature of color fall broadly into two categories.  On the one hand, color objectivism is the view that colors are mind-independent properties residing out in the world, for example, in objects, surfaces or the ambient light.  Typically, objectivists identify color with a physical property.  The view that color is a mind-independent physical property of the perceived world is motivated both by commonsense considerations and the phenomenology of color experience.  It is part of our commonsense or folk understanding of color, as reflected in ordinary language, that color is a property of objects.  Moreover, the experience of color is transparent, which is to say that colors appear to the subject as belonging to external perceptual objects; one doesn’t just see red, one sees a red fire hydrant or a yellow umbrella.  Color objectivism vindicates both the commonsense view of color and the phenomenology of color experience.  But some take it to be an unfortunate implication of the theory that colors are physical properties of objects, since it seems to entail that each color will be identical to a very long disjunctive chain of physical properties.  Multiple external physical conditions can all cause the same color experience both within and across individuals.  This means that popular versions of objectivism cannot identify a single unifying property behind all instances of a single color.

Subjectivist views, on the other hand, take colors to be mind-dependent properties of the subject or of his or her experience, rather than properties of the distal causal stimulus.  Subjectivist theories of color include the sense-data theory, adverbialism and certain varieties of representationalism.  The primary motivation for color subjectivism is to accommodate various types of non-veridical color experience where perceivers have the subjective experience of color in the absence of an external distal stimulus to which the color could properly be attributed.  One commonly cited example is the after-image. Some claim that the photisms of synesthetes provide another example of non-veridical non-referring color experiences (Fish, 2010; Lycan, 2006; Revonsuo, 2001).  But others argue that the door is open to regarding at least some cases of synesthesia as veridical perceptual experiences rather than hallucinations since photisms are often:  i) perceptually and cognitively beneficial, ii) subjectively like non-synesthetic experiences, and iii) fitness-enhancing.

Still, synesthesia may pose additional difficulties for objectivism.  Consider the implications for objectivism if color synesthesias were to become the rule rather than the exception.  How then would objectivism account for color photisms in cases where they are caused by externally produced sounds?  Revonsuo (2001) suggests that the view that colors can be identified with the objective disjunctive collections of physical properties that cause color experiences would have to add the changes of air pressure that produce sounds to that disjunctive collection of color properties.  This means that if synesthesia became the rule, despite the fact that nothing else about the world would have changed, physical properties that weren’t previously colored would suddenly become colored.  Revonsuo (2001) takes this to be an undesirable consequence for a theory of color.

Enactivism is a theory of perception that takes active engagement with perceptual objects along with other contextual relations to be highly relevant to perception.  Typically, enactivists take perception to consist in a direct relation between perceivers and objective properties.  Ward uses synesthesia in an argument for enactivism about color, proposing that the enactivist theory of color actually combines elements of both objectivism and subjectivism, and is therefore the only theory of color that can account for various facts about anomalous color experiences like synesthesia.

For instance, Kohler fitted normal perceivers with goggles, each of whose lenses was vertically bisected, with yellow tinting on one side and blue on the other (Kohler, 1964).  When perceivers first donned the goggles, they reported anomalous color experiences consistent with the lens colors; the world appeared to be tinted yellow and blue.  But after a few weeks of wear, subjects reported that the abnormal tint adapted away.  Ward proposes that synesthetic photisms are somewhat similar to the tinted experiences of Kohler’s goggle wearers.  In both cases, subjects are aware that their anomalous color experiences are not a reliable guide to the actual colors of things around them.  The two cases are not alike, however, in one important respect.  Whereas goggle wearers’ color experiences adapt to fall in line with what they know to be true about their color experiences, synesthetes’ experiences do not.  This asymmetry calls for explanation, and Ward argues that the enactive theory of color provides an elegant explanation of it.

According to Ward’s enactive view of color, “An object’s color is its property of modifying incident reflected light in a certain way.”  This is an objective property.  But “we perceive this [objective] property by understanding the way [subjective] color appearances systematically vary with lighting conditions.”  This view explains the asymmetry noted above in the following way.  Kohler’s goggles interfere with regular color perception.  According to the enactive view of color, the tinted goggles introduce “a complex new set of relationships between apparent colors, viewing conditions and objective color properties.”  So the tinted appearances must adapt away.  As perceivers acclimate to the fact that their color appearances no longer refer to the colors they had previously indicated, their ability to perceive color normally returns.  Ward assumes that synesthetes do not experience their color photisms as attributed to perceived objects, so the photisms do not impact the synesthetes’ ability to veridically perceive color.  Synesthetes’ photisms fail to adapt away because they do not need to.

Another philosophical problem having to do with the nature of color concerns whether or not phenomenal color experiences are intentional.  If they are, we might wonder what sorts of properties they are capable of representing.  A popular view is that color experiences can only represent objects as having specific color or spectral reflectance properties.  Matey draws on synesthesia to support the view that perceptual experiences can represent objects as having high-level properties, such as having a specific semantic value (roughly, as representing some property, thing, or concept).  This argument for high-level representational contents from synesthesia, it is argued, withstands several objections that can be lodged against other popular arguments, such as arguments from phenomenal contrast.  The basic idea is that a special category of grapheme-color synesthesia depends on high-level properties.  In higher-grapheme-color synesthesia, perceivers mark graphemes that share a conceptual significance, such as the property of representing a number, with a particular color.  Matey argues that these high-level properties penetrate color experiences and infect their contents, so that the color experiences of these synesthetes represent the objects they are projected onto as being representative of certain numbers or letters.  Matey also argues that the conclusions of the argument from synesthesia may generalize to the common perceptual experiences of ordinary perceivers.

5. An Extraordinary Feature of Color-Grapheme Synesthesia

What the subject says about his or her own phenomenal experience usually carries great weight.  However, in the case of color-grapheme synesthesia, Macpherson urges caution (2007, p.76).  A striking and odd aspect of color-grapheme synesthesia is that it may seem to involve the simultaneous experience of different colors in exactly the same place at exactly the same time.  Consider synesthetes who claim to see both colors simultaneously: what could it be like for someone to see the grapheme 5 printed in black ink, but see it as red as well?  How are we to characterize their experience?  To Macpherson this “extraordinary feature” suggests that synesthetic colors are either radically unlike ordinary experience or, perhaps more likely, not experiences at all.  A third possibility would be to find an interpretation compatible with ordinary color experience.  For example, perhaps the synesthetic colors are analogous to a colored transparency laid over ink (as suggested by Kim et al. 2006, p.196; see also Cytowic 1989, pp.41, 51 and Cytowic & Eagleman 2009, p.72).  However, this analogy is unsatisfying and gives rise to further puzzlement.

One might expect that the colors would interfere with each other; for example, synesthetes should see a darker red when the 5 is printed in black ink, and a lighter red when it is printed in white.  And yet synesthetes tend to insist that the colors do not blend (Ramachandran & Hubbard 2001b, p.7, n.3), although if the ink is in the “wrong” color this can result in task performance delays analogous to Stroop-test effects and can even induce discomfort (Ramachandran & Hubbard, 2003b, p.50).  Another possibility is that the overlap is imperfect, despite the denials; for example, perhaps splotches of black ink can be distinguished from the red (as proposed by Ramachandran & Hubbard 2001b, p.7, n.3).  Or maybe there can be a “halo” or edge where the synesthetic and ordinary colors do not overlap—this might make sense of the claims of some that the synesthetic color is not “on” the number but, as it were, “floating” somewhere between the shape and the subject.  But against these suggestions are other reports that the synesthetic and regular colors match up perfectly (Macpherson, 2007, p.76).

A second analogy from everyday experience is simultaneously seeing what is both ahead of and behind oneself by observing a room’s reflection in a window.  This, however, only recycles the problem.  In seeing a white lamp reflected in a window facing a blue expanse of water, the colors mix (for example, the reflected lamp looks to be a pale blue). Moreover, one does not undergo distinct impressions of the lamp and the region occupied by the waves overlapping with the reflected image (though of course one can alter the presentation by either focusing on the lamp or on the waves).

A third explanation draws on the claim mentioned earlier that the extra qualia can depend on top-down processing, appearing only when the shape is recognized as a letter, or as a number (as in seeing an ambiguous shape in FA5T versus 3456).  There is some reason to think that the synesthetic color can “toggle” on and off depending on whether it is recognized and attended to, as opposed to appearing as a meaningless shape in the subject’s peripheral vision (Ramachandran & Hubbard 2001a, 2001b).  Toggling might also explain reports that emphasize seeing the red, as opposed to (merely?) knowing the ink is black (cf. Ramachandran & Hubbard, 2001b, p.7, n.3).  Along these lines, Kim et al. tentatively suggest that the “dual experience” phenomenon might be explained by rapid switching modulated by changes in attention (2006, p.202).

Cytowic and Eagleman (2009, p.73), in contrast to these ruminations, deny there is anything mysterious or conceptually difficult about the dual presentation of imagined and real objects sharing exactly the same location in physical space.  They contend that the dual experience phenomenon is comparable to visualizing an imaginary apple in the same place as a real coffee cup: “you’ll see there is nothing impossible, or even particularly confusing about two objects, one real and one imagined, sharing the same coordinates.”  This dismissal, however, fails to come to terms with the conundrum.  Instead of an apple, try visualizing a perfect duplicate of the actual coffee cup in precisely the same location (for those who believe they can do this, continue visualizing additional coffee cups until the point becomes obvious).  If Cytowic and Eagleman are to be taken literally, this ought to be easy.  The visualization of a contrasting color also runs into a conceptual obstacle.  What does it even mean to visualize a red surface in exactly the same place as a real black surface, in the absence of alternating presentations (as in binocular rivalry) or blending?

Another perplexing feature of synesthetic color experience is the report of strange “alien” colors somehow different from ordinary color experience.  These “Martian” colors may or may not indicate a special kind of color qualia inaccessible to non-synesthetes, though given their apparent causal-role differences from ordinary colors when it comes to such things as “lighting, viewing geometry and chromatic context” (Noë & Hurley, 2003, p.195), this is unsurprising and even expected by broadly functionalist theories of phenomenal experience.  Ramachandran and Hubbard (2001b, pp.5, 26, 30) offer some discussion and conjectures about the underlying neural processes.

Whether the more bizarre testimony can be explained away along the lines of one (or more) of the above suggestions, or instead has deep implications for synesthesia, self-report, and the nature of color experience, demands further investigation by philosophers and scientists.

6. Wittgenstein’s Philosophical Psychology

Ter Hark (2009) offers a Wittgensteinian analysis of color-grapheme synesthesia, arguing that it fails to fit the contrast between perception and mental imagery, and so calls for a third category bearing only some of the logical marks of experience.  He contends that it is somewhat like a percept in that it depends on looking, has a definite beginning and end, and is affected by shifts in attention.  On the other hand, it is also somewhat like mental imagery in that it is voluntary and non-informative about the external world.

Although ter Hark cites Rich et al. (2005) for support, only 15% of their informants claimed to have full control over synesthetic experience (that is, induced by thought independent of sensory stimulation) and most (76%) characterized it as involuntary.  It would therefore seem that ter Hark’s analysis applies to only a fraction of synesthetes.  The claim that synesthetic percepts seem non-experiential because they fail to represent the world is also contestable.  Visual experience need not always be informative (for example, hallucinations, “seeing stars,” and so forth) and failing to inform us about the world is compatible with aiming to do so but misrepresenting.

7. Individuating the Senses

Synesthesia might be important when it comes to questions about the nature of the senses, how they interact, and how many of them there are.  For example, Keeley (2002) proposes that synesthesia may challenge the assumption that the various senses are “significantly separate and independent” (p.25, n.37) and so complicate discussions about what distinguishes one sense from another.  A similar point is made by Ross, who notes that synesthesia undermines his “modified property condition” (2001, p.502).  The modified property condition is supposed to be necessary for individuating the senses, and states that each sense modality specializes in detecting certain properties (2001, p.500).  As discussed in the section on representationalism, synesthesia might seem to indicate that properties usually deemed proprietary to one sense can be detected by others after all.  Meanwhile, Ross’s proposal that synesthesia be explained away as a memory association seems unpersuasive in light of the preponderance of considerations suggesting it is a genuine sensory phenomenon (see Ramachandran & Hubbard, 2001a, 2001b, 2003b; for further discussion of Ross see Gatzia, 2008).  At present, little seems to have been written by philosophers on the significance of synesthesia as concerns the individuation and interaction of the senses (though see Macpherson, 2007; O’Callaghan, 1998, p.325; and R. Gray, 2011, p.253, n.17).

8. Aesthetics and “Literary Synesthesia”

The use of “intersense analogy” or sense-related metaphor as a literary technique is long familiar to authors and critics (for example, a sharp taste, a loud shirt), perhaps starting with Aristotle, who noticed a “sort of parallelism between what is acute or grave to hearing and what is sharp or blunt to touch” (quoted in O’Malley, 1957, p.391).  Intersense metaphors such as “the sun is silent” (Dante, quoted in O’Malley, 1957, p.409) and, more recently, “sound that makes the headphones edible” (from the lyrics of a popular rock band) may be “a basic feature of language” natural for literature to incorporate (O’Malley, 1957, p.397), and to some “an essential component in the poetic sensibility” (Götlind, 1957, p.329).  Such “literary” synesthesia is therefore an important part of aesthetic criticism, as in Hellman’s (1977, p.287) discussion of musical styles, Masson’s analysis of acoustic associations (1953, p.222), and Ueda’s evaluation of cross-modal analogies in Haiku poetry which draw attention to “strange yet harmonious” combinations (1963, p.428).

Importantly, “the writer’s use of the ‘metaphor of the senses’” (O’Malley, 1957, p.391) is not to be confused with synesthesia as a sensory phenomenon, as repeatedly noted over the years by several philosophical works on poetry and aesthetics including Downey (1912, p.490), Götlind (1957, p.328) and O’Malley (1958, p.178).  Nevertheless, there are speculations about the connection between the two (for example, Smith, 1972, p.28; O’Malley, 1957, pp.395-396) and sensory synesthesia has been put forward as an important creative source in poetry (Downey, 1912, pp.490-491; Rayan, 1969), music and film (Brougher et al., 2005), painting (Tomas, 1969; Cazeaux, 1999; Ione, 2004) and artistic development generally (Donnell & Duignan, 1977).

That not all sensory matches work aesthetically—it seems awkward to speak of a loud smell or a salty color—might be significant in suggesting ties to perceptual synesthesia.  Perhaps they have more in common than is usually suspected (Marks, 1982; Day 1996).

Synesthetic metaphor is a “human universal” found in every culture and may be an expression of our shared nature (Pinker, 2002, p.439).  Maurer and Mondloch (2004) suggest that the fact that the cross-modal pairings in synesthesias tend to be the same as the sensory matches manifest in common metaphors may reveal that non-synesthete adults share cross-modal activations with synesthetes, and that synesthesia is a normal feature of early development.  Matey suggests that this lends credibility to the view that the cross-wiring present in synesthetes and non-synesthetes differs only in degree, and so we may draw conclusions about the types of representational contents available to normal perceivers’ experiences based on the perceptual contents of synesthetes.

9. Synesthesia and Creativity

Ramachandran and Hubbard, among others, have been developing a number of hypotheses about the explanatory value of synesthesia for creativity, the nature of metaphor, and even the origins of language (2001b, 2003a; see also Mulvenna, 2007; Hunt, 2005).  Like synesthesia, creativity seems to consist in “linking two seemingly unrelated realms in order to highlight a hidden deep similarity” (Ramachandran & Hubbard, 2001b, p.17).  Ramachandran and Hubbard (2001b) conjecture that greater connectivity (or perhaps the absence of inhibitory processes) between functionally discrete brain regions might facilitate creative mappings between concepts, experiences, and behaviors in both artists and synesthetes.  These ideas are controversial, and although there is some evidence that synesthetes are more likely to be artists (for example, Ward et al., 2008; Rothen & Meier, 2010), the links between synesthesia and creativity remain tentative and conjectural.

10. References and Further Reading

  • Alter, T. (2006). Does synesthesia undermine representationalism? Psyche, 12(5).
  • Asher, J.E., Lamb, J., Brocklebank, D., Cazier, J., Maestrini, E., Addis, L., … Monaco, A. (2009). A whole-genome scan and fine-mapping linkage study of auditory-visual synesthesia reveals evidence of linkage to chromosomes. American Journal of Human Genetics, 84, 279-285.
  • Baron-Cohen, S. (1996). Is there a normal phase of synaesthesia in development? Psyche, 2(27).
  • Baron-Cohen, S., Wyke, M.A., & Binnie, C. (1987). Hearing words and seeing colours: An experimental investigation of a case of synaesthesia. Perception, 16(6), 761-767.
  • Beck, J. (1966). Effect of orientation and of shape similarity on perceptual grouping. Perception and Psychophysics, 1, 300-302.
  • Baron-Cohen, S., Harrison, J., Goldstein, L., & Wyke, M.A. (1993). Coloured speech perception: Is synaesthesia what happens when modularity breaks down? Perception, 22, 419-426.
  • Brougher, K., Mattis, O., Strick, J., Wiseman, A., & Zilczer, J. (2005). Visual music: Synaesthesia in art and music since 1900. London: Thames and Hudson.
  • Cazeaux, C. (1999). Synaesthesia and epistemology in abstract painting. British Journal of Aesthetics, 39(3), 241-251.
  • Chalmers, D. (2004). The representational character of experience. In B. Leiter (Ed.), The Future for Philosophy (pp.153-181). Oxford: Clarendon Press.
  • Critchley, E.M.R. (1997). Synaesthesia: Possible mechanisms. In S. Baron-Cohen & J. Harrison (Eds.), Synaesthesia: Classic and contemporary readings (pp.259-268). Cambridge, MA: Blackwell.
  • Cytowic, R.E. (1989). A union of the senses. New York: Springer-Verlag.
  • Cytowic, R.E. (1997). Synesthesia: Phenomenology and neuropsychology: A review of current knowledge. In S. Baron-Cohen & J. Harrison (Eds.), Synaesthesia: Classic and contemporary readings (pp.17-39). Cambridge, MA: Blackwell.
  • Cytowic, R.E., & Eagleman, D. (2009). Wednesday is indigo blue: Discovering the brain of synesthesia. Cambridge, MA: The MIT Press.
  • Day, S.A. (1996). Synaesthesia and synaesthetic metaphor. Psyche, 2(32).
  • Day, S.A. (2005). Some demographic and socio-cultural aspects of synesthesia. In L. Robertson & N. Sagiv (Eds.), Synesthesia: Perspectives from cognitive neuroscience (pp.11-33). Oxford: Oxford University Press.
  • Dixon, M.J., Smilek, D., Cudahy, C., & Merikle, P.M. (2000). Five plus two equals yellow. Nature, 406, 365.
  • Dixon, M.J., Smilek, D., & Merikle, P.M. (2004). Not all synaesthetes are created equal: Projector versus associator synaesthetes. Cognitive, Affective & Behavioral Neuroscience, 4(3), 335-343.
  • Dixon, M.J., Smilek, D., Duffy, P.L., Zanna, M.P., & Merikle, P.M. (2006). The role of meaning in grapheme-colour synaesthesia. Cortex, 42(2), 243-252.
  • Donnell, C.A., & Duignan, W. (1977). Synaesthesia and aesthetic education. Journal of Aesthetic Education, 11, 69-85.
  • Downey, J.E. (1912). Literary Synesthesia. The Journal of Philosophy, Psychology and Scientific Methods, 9(18), 490-498.
  • Dretske, F. (1995). Naturalizing the Mind. Cambridge, MA: The MIT Press.
  • Edquist, J., Rich, A.N., Brinkman, C., & Mattingley, J.B. (2006). Do synaesthetic colours act as unique features in visual search? Cortex, 42(2), 222-231.
  • Fish, W. (2010). Philosophy of perception: A contemporary introduction. New York: Routledge.
  • Fodor, J. (1974). Special sciences, or the disunity of science as a working hypothesis. Synthese, 28, 97-115.
  • Fodor, J. (1983). Modularity of mind: An essay on faculty psychology. Cambridge, MA: MIT Press.
  • Galton, F. (1880). Visualized numerals. Nature, 22, 494-495.
  • Galton, F. (1883). Inquiries into human faculty and its development. Dent & Sons: London.
  • Gatzia, D.E. (2008). Martian colours. Philosophical Writings, 37, 3-16.
  • Gray, J.A. (2003). How are qualia coupled to functions? Trends in Cognitive Sciences, 7(5), 192-194.
  • Gray, J.A. (2004). Consciousness: Creeping up on the hard problem. Oxford: Oxford University Press.
  • Gray, J.A., Williams, S.C.R., Nunn, J., & Baron-Cohen, S. (1997). Possible implications of synaesthesia for the question of consciousness. In S. Baron-Cohen & J. Harrison (Eds.), Synaesthesia: Classic and contemporary readings (pp.173-181). Cambridge, MA: Blackwell.
  • Gray, J.A. (1998).  Creeping up on the hard question of consciousness. In S. Hameroff, A. Kaszniak & A. Scott (Eds.), Toward a science of consciousness II: The second Tucson discussions and debates (pp.279-291). Cambridge, MA: The MIT Press.
  • Gray, J.A., Nunn J., & Chopping S. (2002). Implications of synaesthesia for functionalism: Theory and experiments. Journal of Consciousness Studies, 9(12), 5-31.
  • Gray, J.A., Parslow, D.M., Brammer, M.J., Chopping, S.M., Vythelingum, G.N., & Ffytche, D.H. (2006). Evidence against functionalism from neuroimaging of the alien colour effect in synaesthesia. Cortex, 42(2), 309-318.
  • Gray, R. (2001a). Synaesthesia and misrepresentation: A reply to Wager. Philosophical Psychology, 14(3), 339-346.
  • Gray, R. (2001b). Cognitive modules, synaesthesia and the constitution of psychological natural kinds. Philosophical Psychology, 14(1), 65-82.
  • Gray, R. (2004). What synaesthesia really tells us about functionalism. Journal of Consciousness Studies, 11(9), 64-69.
  • Gray, R. (2011). On the nature of the senses. In F. Macpherson (Ed.), The Senses: Classic and contemporary philosophical perspectives, pp.243-260. New York: Oxford University Press.
  • Götlind, E. (1957). The appreciation of poetry: A proposal of certain empirical inquiries. The Journal of Aesthetics and Art Criticism, 15(3), 322-330.
  • Grossenbacher, P.G., & Lovelace, C.T. (2001). Mechanisms of synesthesia: Cognitive and physiological constraints. Trends in Cognitive Sciences, 5(1), 36-42.
  • Hardcastle, V.G. (1997). When a pain is not. The Journal of Philosophy, 94(8), 381-409.
  • Harrison, J.E. (2001). Synaesthesia: The strangest thing. New York: Oxford University Press.
  • Harrison, J.E., & Baron-Cohen, S. (1997). Synaesthesia: A review of psychological theories. In S. Baron-Cohen & J. Harrison (Eds.), Synaesthesia: Classic and contemporary readings (pp.109-122). Cambridge, MA: Blackwell.
  • Hellman, G. (1977). Symbol systems and artistic styles. The Journal of Aesthetics and Art Criticism, 35(3), 279-292.
  • Hunt, H. (2005). Synaesthesia, metaphor, and consciousness: A cognitive-developmental perspective. Journal of Consciousness Studies, 12(12), 26-45.
  • Hurley, S., & Noë, A. (2003a). Neural plasticity and consciousness. Biology and Philosophy, 18, 131-168.
  • Hurley, S., & Noë, A. (2003b). Neural plasticity and consciousness: Reply to Block. Trends in Cognitive Sciences, 7(1), 342.
  • Ione, A. (2004). Klee and Kandinsky: Polyphonic painting, chromatic chords and synaesthesia. Journal of Consciousness Studies, 11(3-4), 148-158.
  • Keeley, B.L. (2002). Making sense of the senses: Individuating modalities in humans and other animals. The Journal of Philosophy, 99(1), 5-28.
  • Kim, C-Y., Blake, R., & Palmeri, T.J. (2006). Perceptual interaction between real and synesthetic colors. Cortex, 42, 195-203.
  • Kadosh, R.C., Henik, A., Catena, A., Walsh, V., & Fuentes, L.J. (2009). Induced cross-modal synaesthetic experiences without abnormal neuronal connections. Psychological Science, 20(2), 258-265.
  • Kohler, I. (1964). Formation and transformation of the perceptual world. Psychological Issues 3(4, Monogr. No. 12), 1-173.
  • Lycan, W. (1987). Consciousness. Cambridge, MA: The MIT Press.
  • Lycan, W. (2006). Representational theories of consciousness. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy.
  • Luria, A.R. (1968). The mind of a mnemonist. New York: Basic Books.
  • Macpherson, F. (2007). Synaesthesia, functionalism and phenomenology. In M. de Caro, F. Ferretti & M. Marraffa (Eds.), Cartographies of the mind: Philosophy and psychology in intersection series: Studies in brain and mind (Vol.4, pp.65-80). Dordrecht, The Netherlands: Springer.
  • Marks, L.E. (1982). Synesthetic perception and poetic metaphor. Journal of Experimental Psychology: Human Perception and Performance, 8(1), 15-23.
  • Masson, D.I. (1953). Vowel and consonant patterns in poetry. The Journal of Aesthetics and Art Criticism, 12(2), 213-227.
  • Maurer, D. (1993). Neonatal synesthesia: Implications for the processing of speech and faces. In B. de Boysson-Bardies, S. de Schonen, P. Jusczyk, P. Mcneilage & J. Morton (Eds.), Developmental neurocognition: Speech and face processing in the first year of life (pp.109-124). Dordrecht: Kluwer.
  • Maurer, D., & Mondloch, C. (2004). Neonatal synesthesia: A re-evaluation. In L. Robertson & N. Sagiv (Eds.), Attention on Synesthesia: Cognition, Development and Neuroscience, (pp. 193-213). Oxford: Oxford University Press.
  • Meier, B., & Rothen, N. (2009). Training grapheme-colour associations produces a synaesthetic Stroop effect, but not a conditioned synaesthetic response. Neuropsychologia, 47(4), 1208-1211.
  • Mulvenna, C.M. (2007). Synaesthesia, the arts and creativity: A neurological connection. Frontiers of Neurology and Neuroscience, 22, 206-222.
  • Noë, A., & Hurley, S. (2003). The deferential brain in action. Trends in Cognitive Sciences, 7(5), 195-196.
  • O’Callaghan, C. (1998). Seeing what you hear: Cross-modal illusions and perception. Philosophical Issues, 18(1), 316-338.
  • O’Malley, G. (1957). Literary synesthesia. The Journal of Aesthetics and Art Criticism, 15(4), 391-411.
  • O’Malley, G. (1958). Shelley’s “air-prism”: The synesthetic scheme of “Alastor.” Modern Philology, 55(3), 178-187.
  • Paulesu, E., Harrison, J., Baron-Cohen, S., Watson, J.D.G., Goldstein, L., Heather, J., … Frith, C.D. (1995). The physiology of coloured hearing: A PET activation study of colour-word synaesthesia. Brain, 118, 661-676.
  • Pettit, P. (2003). Looks red. Philosophical Issues, 13(1), 221-252.
  • Pinker, S. (2002). The blank slate: The modern denial of human nature. New York: Viking.
  • Proulx, M.J. (2010). Synthetic synaesthesia and sensory substitution. Consciousness and Cognition, 19(1), 501-503.
  • Ramachandran, V.S., & Hubbard, E.M. (2000). Number-color synaesthesia arises from cross-wiring in the fusiform gyrus. Society for Neuroscience Abstracts, 30, 1222.
  • Ramachandran, V.S., & Hubbard, E.M. (2001a). Psychophysical investigations into the neural basis of synaesthesia. Proceedings of the Royal Society of London B, 268, 979-983.
  • Ramachandran, V.S., & Hubbard, E.M. (2001b). Synaesthesia: A window into perception, thought and language. Journal of Consciousness Studies, 8(12), 3-34.
  • Ramachandran, V.S., & Hubbard, E.M. (2003a). Hearing colors, tasting shapes. Scientific American, April, 52-59.
  • Ramachandran, V.S., & Hubbard, E.M. (2003b). The phenomenology of synaesthesia. Journal of Consciousness Studies, 10(8), 49-57.
  • Rang, H.P., & Dale, M.M. (1987). Pharmacology. Edinburgh: Churchill Livingstone.
  • Rayan, K. (1969). Edgar Allan Poe and suggestiveness. The British Journal of Aesthetics, 9, 73-79.
  • Revonsuo, A. (2001). Putting color back where it belongs. Consciousness and Cognition, 10(1), 78-84.
  • Rich, A.N., Bradshaw, J.L., & Mattingley, J.B. (2005). A systematic, large-scale study of synaesthesia: Implications for the role of early experience in lexical-colour associations. Cognition, 98, 53-84.
  • Rosenberg, G. (2004). A place for consciousness: Probing the deep structure of the natural world. Oxford: Oxford University Press.
  • Ross, P.W. (2001). Qualia and the senses. The Philosophical Quarterly, 51(205), 495-511.
  • Rothen, N., & Meier, B. (2010). Higher prevalence of synaesthesia in art students. Perception, 39, 718-720.
  • Segal, G.M.A. (1997). Synaesthesia: Implications for modularity of mind. In S. Baron-Cohen & J. Harrison (Eds.), Synaesthesia: Classic and contemporary readings (pp.211-223). Cambridge, MA: Blackwell.
  • Simner, J., Sagiv, N., Mulvenna, C., Tsakanikos, E., Witherby, S., Fraser, C., … Ward, J. (2006). Synaesthesia: The prevalence of atypical cross-modal experiences. Perception, 35, 1024-1033.
  • Smilek, D., Dixon, M.J., Cudahy, C., & Merikle, P.M. (2001). Synaesthetic photisms influence visual perception. Journal of Cognitive Neuroscience, 13, 930-936.
  • Smilek, D., Dixon, M.J., Cudahy, C., & Merikle, P.M. (2002). Synesthetic color experiences influence memory. Psychological Science, 13(6), 548-552.
  • Smilek, D., Dixon M.J., & Merikle P.M. (2003). Synaesthetic photisms guide attention. Brain & Cognition, 53, 364-367.
  • Ter Hark, M. (2009). Coloured vowels: Wittgenstein on synaesthesia and secondary meaning. Philosophia: Philosophical Quarterly of Israel, 37(4), 589-604.
  • Tomas, V. (1969). Kandinsky’s theory of painting. British Journal of Aesthetics, 9, 19-38.
  • Treisman, A. (1982). Perceptual grouping and attention in visual search for features and for objects. Journal of Experimental Psychology: Human Perception and Performance, 8(2), 194-214.
  • Tye, M. (1995). Ten problems of consciousness: A representational theory of the phenomenal mind. Cambridge, MA: The MIT Press.
  • Ueda, M. (1963). Basho and the poetics of “Haiku.” The Journal of Aesthetics and Art Criticism, 21(4), 423-431.
  • Wager, A. (1999). The extra qualia problem: Synaesthesia and Representationalism. Philosophical Psychology, 12(3), 263-281.
  • Wager, A. (2001). Synaesthesia misrepresented. Philosophical Psychology, 14(3), 347-351.
  • Ward, J., & Simner, J. (2005). Is synaesthesia an X-linked dominant trait with lethality in males? Perception, 34(5), 611-623.
  • Ward, J., & Sagiv, N. (2007). Synaesthesia for finger counting and dice patterns: A case of higher synaesthesia? Neurocase, 13(2), 86-93.
  • Ward, J., Thompson-Lake, D., Ely, R., & Kaminski, F. (2008). Synaesthesia, creativity and art: What is the link? British Journal of Psychology, 99, 127-141.
  • Wittgenstein, L. (1958/1994). Philosophical investigations. Oxford: Blackwell.


Author Information

Sean Allen-Hermanson
Email: hermanso@fiu.edu
Florida International University
U. S. A.

and

Jennifer Matey
Email: jmatey@fiu.edu
Florida International University
U. S. A.

Theory of Mind

Theory of Mind is the branch of cognitive science that investigates how we ascribe mental states to other persons and how we use those states to explain and predict the actions of those other persons. More accurately, it is the branch that investigates mindreading or mentalizing or mentalistic abilities. These skills are shared by almost all human beings beyond early childhood. They are used to treat other agents as the bearers of unobservable psychological states and processes, and to anticipate and explain the agents’ behavior in terms of such states and processes. These mentalistic abilities are also called “folk psychology” by philosophers, and “naïve psychology” and “intuitive psychology” by cognitive scientists.

It is important to note that Theory of Mind is not an appropriate term to characterize this research area (nor to denote our mentalistic abilities), since it seems to assume right from the start the validity of a specific account of the nature and development of mindreading, that is, the view that it depends on the deployment of a theory of the mental realm, analogous to the theories of the physical world (“naïve physics”). But this view—known as theory-theory—is only one of the accounts offered to explain our mentalistic abilities. In contrast, theorists of mental simulation have suggested that what lies at the root of mindreading is not any sort of folk-psychological conceptual scheme, but rather a kind of mental modeling in which the simulator uses her own mind as an analog model of the mind of the simulated agent.

Both theory-theory and simulation-theory are actually families of theories. Some theory-theorists maintain that our naïve theory of mind is the product of the scientific-like exercise of a domain-general theorizing capacity. Other theory-theorists defend a quite different hypothesis, according to which mindreading rests on the maturation of a mental organ dedicated to the domain of psychology. Simulation-theory likewise comes in different versions. According to the “moderate” version of simulationism, mental concepts are not completely excluded from simulation. Simulation can be seen as a process through which we first generate and self-attribute pretend mental states that are intended to correspond to those of the simulated agent, and then project them onto the target. By contrast, the “radical” version of simulationism rejects the primacy of first-person mindreading and contends that we imaginatively transform ourselves into the simulated agent, interpreting the target’s behavior without using any kind of mental concept, not even ones referring to ourselves.
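
The moderate simulationist routine lends itself to a schematic rendering (a toy sketch of our own, not any simulation-theorist’s actual model): pretend inputs standing in for the target’s states are run “offline” through the attributer’s own decision-making, and the output is projected onto the target.

    # Toy "moderate" simulation: reuse one's own practical reasoning on
    # pretend beliefs and desires, then attribute the result to the target.
    def my_practical_reasoning(beliefs: set, desires: set) -> str:
        # stand-in for the simulator's own decision-making system
        if "it is raining" in beliefs and "stay dry" in desires:
            return "take an umbrella"
        return "go out as usual"

    def simulate_target(pretend_beliefs: set, pretend_desires: set) -> str:
        # Pretend states corresponding to the target's situation are fed
        # into the simulator's own system; the result is projected onto
        # the target as a prediction.
        return my_practical_reasoning(pretend_beliefs, pretend_desires)

    # Predicting a neighbor whom we take to believe it is raining and to
    # want to stay dry:
    print(simulate_target({"it is raining"}, {"stay dry"}))  # take an umbrella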

Finally, the claim─common to both theorists of theory and theorists of simulation─that mindreading plays a primary role in human social understanding was challenged in the early 21st century, mainly by phenomenology-oriented philosophers and cognitive scientists.

Table of Contents

  1. Theory-Theory
    1. The Child-Scientist Theory
    2. The Modularist Theory-Theory
    3. First-Person Mindreading and Theory-Theory
  2. Simulation-Theory
    1. Simulation with and without Introspection
    2. Simulation in Low-Level Mindreading
  3. Social Cognition without Mindreading
  4. References and Further Reading
    1. Suggested Further Reading
    2. References

1. Theory-Theory

Social psychologists have investigated mindreading since at least the 1940s. In Heider and Simmel’s (1944) classic studies, participants were presented with animated events involving interacting geometric shapes. When asked to report what they saw, the participants almost invariably treated these shapes as intentional agents with motives and purposes, suggesting the existence of an automatic capacity for mentalistic attribution. Pursuing this line of research led to Heider’s The Psychology of Interpersonal Relations (1958), a seminal book that is one of the main historical reference points for the scientific inquiry into our mentalistic practice. In this book Heider characterizes “commonsense psychology” as a sophisticated conceptual scheme that has an influence on human perception and action in the social world comparable to that which Kant’s categorical framework has on human perception and action in the physical world (see Malle & Ickes 2000: 201).

Heider’s visionary work played a central role in the origin and definition of attribution theory, that is, the field of social psychology that investigates the mechanisms underlying ordinary explanations of our own and other people’s behavior. However, attribution theory approaches our mentalistic practice quite differently. Heider took commonsense psychology seriously as a genuine source of knowledge, arguing that scientific psychology has a good deal to learn from it. In contrast, most research on causal attribution has been faithful to behaviorism’s methodological lesson and focused on the epistemic inaccuracy of commonsense psychology.

Two years before Heider’s book, Wilfrid Sellars’ (1956) Empiricism and the Philosophy of Mind had suggested that our grasp of mental phenomena does not originate from direct access to our inner life, but is the result of a “folk” theory of mind, which we acquire through some form or other of enculturation. Sellars’ speculation proved philosophically productive and agreed well with social-psychological research on self-attribution; it came to be known as “Theory-Theory” (a term coined by Morton 1980—henceforth “TT”).

During the 1970s one or other form of TT was seen as a very effective antidote to Cartesianism and philosophical behaviorism. In particular, TT was coupled with Nagel’s (1961) classic account of intertheoretic reduction as deduction of the reduced from the reducing theory via bridge principles, in order to turn the ontological problem of the relationship between the mental and the physical into a more tractable epistemological problem concerning the relations between theories. It thus became possible to take a notion rigorously studied by philosophers of science—intertheoretic reduction—and to examine the relations between folk psychology, understood as a theory embodying the commonsense mentalistic ontology, and its scientific successors (scientific psychology, neuroscience, or some other science of the mental). Ontological and metaphysical questions could then be answered by (i) focusing first and foremost on questions about explanation and theory reduction, and then (ii) depending on how those questions were answered, drawing the appropriate ontological conclusions by comparison with how similar questions about explanation and reduction were answered in other scientific episodes, and with the ontological conclusions philosophers and scientists drew in those cases (this strategy is labelled “the intertheoretic-reduction reformulation of the mind-body problem” in Bickle 2003).
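
Nagel’s schema can be stated compactly. The following is a standard textbook reconstruction, offered here only for orientation:

```latex
% Nagelian reduction: the reduced theory T_R is deduced from the reducing
% theory T_B together with bridge principles linking the two vocabularies.
\[
  T_B \;\cup\; \{\, M_i \leftrightarrow P_i \,\}_{i=1}^{n} \;\vdash\; T_R
\]
% Applied to the mind-body case: the M_i are mentalistic terms (belief,
% desire, pain, ...), the P_i are terms of the reducing science, and the
% biconditionals are the bridge principles.
```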

In this context, TT was taken as the major premise in the standard argument for eliminative materialism (see Ramsey 2011: §2.1). In its strongest form, eliminativism predicts that part or all of our folk-psychological theory will vanish into thin air, just as happened in the past when scientific progress led to the abandonment of the folk theory of witchcraft or the protoscientific theories of phlogiston and caloric fluid. This prediction rests on an argument that moves from the premise that folk psychology is a massively defective theory to the conclusion that—just as with witches, phlogiston, and caloric fluid—folk-psychological entities do not exist. Thus philosophy of mind joined attribution theory in adopting a critical attitude toward the explanatory adequacy of folk psychology (see, for example, Stich’s 1983 eliminativistic doubts about the folk concept of belief, motivated inter alia by the experimental social psychology literature on dissonance and self-attribution).

Notice, however, that TT can be differently construed depending on whether we adopt a personal or subpersonal perspective (see Stich & Ravenscroft 1994: §4). The debate between intentional realists and eliminativists favored David Lewis’ personal-level formulation of TT. According to Lewis, the folk theory of mind is implicit in our everyday talk about mental states. We entertain “platitudes” regarding the causal relations of mental states, sensory stimuli, and motor responses that can be systematized (or “Ramsified”). The result is a functionalist theory that gives the terms of mentalistic vocabulary their meaning in the same way as scientific theories define their theoretical terms, namely “as the occupants of the causal roles specified by the theory…; as the entities, whatever those may be, that bear certain causal relations to one another and to the referents of the O[bservational]-terms” (Lewis 1972: 211). In this perspective, mindreading can be described as an exercise in reflective reasoning, which involves the application of general reasoning abilities to premises including ceteris paribus folk-psychological generalizations. A good example of this conception of mindreading is Grice’s schema for the derivation of conversational implicatures:

He said that P; he could not have done this unless he thought that Q; he knows (and knows that I know that he knows) that I will realize that it is necessary to suppose that Q; he has done nothing to stop me thinking that Q; so he intends me to think, or is at least willing for me to think, that Q (Grice 1989: 30-1; quoted in Wilson 2005: 1133).
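
Lewis’ Ramsification procedure itself admits of a compact schematic statement. The following is a standard textbook rendering, offered only as an illustration:

```latex
% Write folk psychology as one long theory T whose mental terms are
% m_1, ..., m_n and whose observational terms are o_1, ..., o_k.
% Replacing the mental terms with variables and existentially
% quantifying yields the Ramsey sentence of the theory:
\[
  \exists x_1 \ldots \exists x_n \; T(x_1, \ldots, x_n;\, o_1, \ldots, o_k)
\]
% Each mental term m_i is then defined as the occupant of the i-th
% causal role: the i-th member of the unique n-tuple satisfying T.
```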

Since the end of the 1970s, however, primatology, developmental psychology, cognitive neuropsychiatry and empirically-informed philosophy have been contributing to a collaborative inquiry into TT. In the context of this literature the term “theory” refers to a “tacit” or “sub-doxastic” structure of knowledge, a corpus of internally represented information that guides the execution of mentalistic capacities. But then the functionalist theory that fixes the meaning of mentalistic terms is not the theory implicit in our everyday, mentalistic talk, but the tacit theory (in Chomsky’s sense) subserving our thought and talk about the mental realm (see Stich & Nichols 2003: 241). On this perspective, the inferential processes that depend on the theory have an automatic and unconscious character that distinguishes them from reflective reasoning processes.

In developmental psychology part of the basis for the study of mindreading skills in children was already present in Jean Piaget’s seminal work on egocentrism from the 1930s to the 1950s, and in the work on metacognition (especially metamemory) in the 1970s. But the developmental research on mindreading took off only under the thrust of three discoveries in the 1980s (see Leslie 1998). First, normally developing 2-year-olds are able to engage in pretend play. Second, normally developing children undergo a deep change in their understanding of the psychological states of other people somewhere between the ages of 3 and 4, as indicated especially by the appearance of their ability to solve a variety of “false-belief” problems (see immediately below). Lastly, children diagnosed with autism spectrum disorders are especially impaired in attributing mental states to other people.

In particular, Wimmer & Perner (1983) provided the theory-of-mind research with a seminal experimental paradigm: the “false-belief task.” In the most well-known version of this task, a child watches two puppets interacting in a room. One puppet (“Sally”) puts a toy in location A and then leaves the room. While Sally is out of the room, the other puppet (“Anne”) moves the toy from location A to location B. Sally returns to the room, and the child onlooker is asked where she will look for her toy, in location A or in location B. Now, 4- and 5-year-olds have little difficulty passing this test, judging that Sally will look for her toy in location A although it really is in location B. These correct answers provide evidence that the child realizes that Sally does not know that the toy has been moved, and so will act upon a false belief. Many younger children, typically 3-year-olds, fail such a task, often asserting that Sally will look for the toy in the place where it was moved. Dozens of versions of this task have now been used, and while the precise age of success varies between children and between task versions, in general we can confidently say that children begin to successfully perform the (“verbal”) false-belief tasks at around 4 years (see the meta-analysis in Wellman et al. 2001; see also below, the reference to “non-verbal” false-belief tasks).

Wimmer and Perner’s false-belief task set off a flood of experiments concerning children’s early understanding of the mind. In this context, the first hypotheses about the process of acquisition of the naïve theory of mind were suggested. The finding that mentalistic skills emerge very early, in the first 3-4 years, and relatively independently of the development of other cognitive abilities, led some scholars (for example, Simon Baron-Cohen, Jerry Fodor, Alan Leslie) to conceive of them as the end-state of the endogenous maturation of an innate theory-of-mind module (or system of modules). This contrasted with the view of other researchers (for example, Alison Gopnik, Josef Perner, Henry Wellman), who maintained that the intuitive theory of mind develops in childhood in a manner comparable to the development of scientific theories.

a. The Child-Scientist Theory

According to a first version of TT, the “child-as-little-scientist” theory, the body of internally represented knowledge that drives the exercise of mentalistic abilities has much the same structure as a scientific theory, and it is acquired, stored, and used in much the same way that scientific theories are: by formulating explanations, making predictions, and then revising the theory or modifying auxiliary hypotheses when the predictions fail. Gopnik & Meltzoff (1997) put forward this idea in its most radical form. They argue that the body of knowledge underlying mindreading has all the structural, functional and dynamic features that, on their view, characterize most scientific theories. One of the most important of these features is defeasibility. As in scientific practice, the child’s naïve theory of mind can be “annulled,” that is, replaced, when counterevidence to it accumulates. The child-scientist theory is therefore akin to Piaget’s constructivism insofar as it depicts cognitive development in childhood and early adolescence as a succession of increasingly sophisticated naïve theories. For instance, Wellman (1990) has argued that around age 4 children become able to pass false-belief tests because they move from an elementary “copy” theory of mind to a fully “representational” theory of mind, which allows them to acknowledge the explanatory role of false beliefs.

The child-scientist theory inherits from Piaget not only the constructivist framework but also the idea that cognitive development depends on a domain-general learning mechanism. A domain-general (or general-purpose) psychological structure is one that can be used for problem solving across many different content domains; it contrasts with a domain-specific psychological structure, which is dedicated to solving a restricted class of problems in a restricted content domain (see Samuels 2000). Now, Piaget’s model of cognitive development posits an innate endowment of reflexes and domain-general learning mechanisms, which enable the child to set up sensorimotor interactions with the environment that drive a steady improvement in problem-solving capacity in every cognitive domain—physical, biological, psychological, and so forth. Analogously, Gopnik & Schulz (2004, 2007) have argued that the learning mechanism that supports all of cognitive development is a domain-general Bayesian mechanism that allows children to extract causal structure from patterns of data.
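
To make the idea concrete, here is a minimal sketch of such domain-general Bayesian learning. The hypotheses, event labels, and probabilities are invented for illustration and are not drawn from Gopnik and Schulz’s work:

```python
# A toy Bayesian learner: priors over candidate causal hypotheses are
# updated as evidence accumulates, with no domain-specific machinery.

def bayes_update(priors, likelihoods, evidence):
    """Return the posterior P(h | evidence) for each hypothesis h."""
    unnormalized = {h: priors[h] * likelihoods[h][evidence] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Two invented hypotheses about why an agent reaches for a location:
# h1 = "agents act on where objects really are"
# h2 = "agents act on where they last saw objects"
priors = {"h1": 0.5, "h2": 0.5}
likelihoods = {
    "h1": {"reaches_basket": 0.1, "reaches_box": 0.9},
    "h2": {"reaches_basket": 0.9, "reaches_box": 0.1},
}

print(bayes_update(priors, likelihoods, "reaches_basket"))
# {'h1': 0.1, 'h2': 0.9} -- the evidence favors the belief-based hypothesis
```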

Another theory-theorist who endorses a domain-general conception of cognitive development is Josef Perner (1991). On his view, it is the appearance of the ability to metarepresent that enables 4-year-olds to shift from a “situation theory” to a “representation theory,” and thus to pass false-belief tests. Children are situation theorists by the age of around 2 years. At 3 they possess a concept, “prelief” (or “betence”), in which the concepts of pretence and belief coexist undifferentiated. The concept of prelief allows the child to understand that a person can “act as if” something were such and such (for example, as if “this banana is a telephone”) when it is not. At 4 children acquire a representational concept of belief, which enables them to understand that, like public representations, inner representations can also misrepresent states of affairs (see Perner, Baker & Hutton 1994). Perner thus suggests that children first learn to understand the properties of public (pictorial and linguistic) representations, and only later extend these characteristics, through a process of analogical reasoning, to mental representations. On this perspective, then, the concept of belief is the product of a domain-general metarepresentational capacity that includes but is not limited to the metarepresentation of mental states. (But for criticism, see Harris 2000, who argues that pretence and belief are very different and are readily distinguished by context by 3-year-olds.)

b. The Modularist Theory-Theory

According to the child-scientist theory, children learn the naïve theory of mind in much the same way that adults learn about scientific theories. By contrast, the modularist version of TT holds that the body of knowledge underlying mindreading lacks the structure of a scientific theory, being stored in one or more innate modules, which gradually become functional (“mature”) during infant development. Inside the module the body of information can be stored as a suite of domain-specific computational mechanisms; or as a system of domain-specific representations; or in both ways (see Simpson et al. 2005: 13).

The notion of modularity as domain-specificity, whose paradigm is Noam Chomsky’s module of language, informs the so-called “core knowledge” hypothesis, according to which human cognition builds on a repertoire of domain-specific systems of knowledge. Studies of children and adults in diverse cultures, human infants, and non-human primates provide evidence for at least four systems of knowledge that serve to represent significant aspects of the environment: inanimate objects and their motions; agents and their goal-directed actions; places and their geometric relations; sets and their approximate numerical relations. These are systems of domain-specific, task-specific representations, which are shared by other animals, persist in adults, and show little variation by culture, language or sex (see Carey & Spelke 1996; Spelke & Kinzler 2007).

And yet a domain-specific body of knowledge is an “inert” psychological structure, which gives rise to behavior only if it is manipulated by some cognitive mechanism. The question arises, then, whether the domain-specific body of information that subserves mentalistic abilities is the database of either a domain-specific or domain-general computational system. In some domains, a domain-specific computational mechanism and a domain-specific body of information can form a single mechanism (for example, a parser is very likely to be a domain-specific computational mechanism that manipulates a domain-specific data structure). But in other domains, as Samuels (1998, 2000) has noticed, domain-specific systems of knowledge might be computed by domain-general rather than domain-specific algorithms (but for criticism, see Carruthers 2006, §4.3).

The existence of a domain-specific algorithm that exploits a body of information specific to the domain of naïve psychology has been proposed by Alan Leslie (1994, 2000). He postulated a specialized component of social intelligence, the “Theory-of-Mind Mechanism” (ToMM), which receives as input information about the past and present behavior of other people and utilizes this information to compute their probable psychological states. The outputs of ToMM are descriptions of psychological states in the form of metarepresentations or M-representations, that is, agent-centered descriptions of behavior built around a triadic relation, which together specify four kinds of information: (i) an agent, (ii) the informational relation itself, which expresses the agent’s attitude (pretending, believing, desiring, and so forth), (iii) an aspect of reality that anchors the agent’s attitude, and (iv) the content of the agent’s attitude. Thus, in order to pretend and to understand others’ pretending, the child’s ToMM is supposed to output the M-representation <Mother PRETENDS (of) this banana (that) “it is a telephone”>. Analogously, in order to predict Sally’s behavior in the false-belief test, ToMM is supposed to output the M-representation <Sally BELIEVES (of) her marble (that) “it is in the basket”>. (Note that Leslie coined the term “M-representation” to distinguish his own concept of metarepresentation from Perner’s (1991). Perner uses the term at a personal level to refer to the child’s conscious theory of representation, whereas Leslie utilizes the term at a subpersonal level to designate an unconscious data structure computed by an information-processing mechanism. See Leslie & Thaiss 1992: 231, note 2.)
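
Leslie’s M-representation is, in effect, a small data structure. The sketch below renders it in code; the class and field names are glosses on the four components just listed, not Leslie’s own notation:

```python
# An M-representation as an agent-centered description of behavior:
# an attitude relates an agent, an anchor in reality, and a content.
from dataclasses import dataclass

@dataclass(frozen=True)
class MRepresentation:
    agent: str      # (i) the agent
    attitude: str   # (ii) the informational relation: PRETENDS, BELIEVES, ...
    anchor: str     # (iii) the aspect of reality anchoring the attitude
    content: str    # (iv) the content of the attitude

pretence = MRepresentation("Mother", "PRETENDS", "this banana", "it is a telephone")
belief = MRepresentation("Sally", "BELIEVES", "her marble", "it is in the basket")
print(belief)
```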

In the 1980s, Leslie’s ToMM hypothesis was the basis for the development of a neuropsychological perspective on autism. Children suffering from this neurodevelopmental disorder exhibit a triad of impairments: social incompetence, poor verbal and nonverbal communicative skills, and a lack of pretend play. Because social competence, communication, and pretending all rest on mentalistic abilities, Baron-Cohen, Frith & Leslie (1985) speculated that the autistic triad might be the result of an impaired ToMM. This hypothesis was investigated in an experiment in which typically developing 4-year-olds, children with autism (12 years; IQ 82), and children with Down syndrome (10 years; IQ 64) were tested on the Sally and Ann false-belief task. Eighty-five percent of the normally developing children and 86% of the children with Down syndrome passed the test; but only 20% of the autistic children predicted that Sally would look in the basket. This is one of the first examples of psychiatry driven by cognitive neuropsychology (followed by Christopher Frith’s 1992 theory of schizophrenia as late-onset autism).

According to Leslie, the ToMM is the specific innate basis of basic mentalistic abilities, which matures during the infant’s second year. In support of this hypothesis, he cites inter alia his analysis of pretend play that would show that 18-month-old children are able to metarepresent the propositional attitude of pretending. This analysis results, however, in an immediate empirical problem. If the ToMM is fully functional at 18 months, why are children unable to successfully perform false-belief tasks until they are around 4 years old? Leslie’s hypothesis is that although the concept of belief is already in place in children younger than 4, in the false-belief tasks this concept is masked by immaturity in another capacity that is necessary for good performance on the task—namely inhibitory control. Since, by default, the ToMM attributes a belief with content that reflects current reality, to succeed in a false-belief task this default attribution must be inhibited and an alternative nonfactual content for the belief selected instead. This is the task of an executive control mechanism that Leslie calls “Selection Processor” (SP). Thus 3-year-olds fail standard false-belief tasks because they possess the ToMM but not yet the inhibitory SP (see Leslie & Thaiss 1992; Leslie & Polizzi 1998).
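
The division of labor between ToMM’s default attribution and SP’s inhibition can be sketched schematically in code. This is a deliberately crude illustration of the model’s logic, with invented names; it makes no claim about the model’s actual computational form:

```python
# ToMM defaults to a belief whose content reflects current reality; a mature
# Selection Processor (SP) inhibits that default and selects the nonfactual
# content grounded in what the agent witnessed.

def attribute_belief(reality, last_seen, sp_mature):
    """Return the belief content ascribed to the observed agent."""
    if sp_mature:
        return last_seen   # SP overrides the reality-based default
    return reality         # without SP, the default attribution wins

# Sally last saw her toy in location A; it is really in location B.
print(attribute_belief("location B", "location A", sp_mature=False))
# -> "location B": the 3-year-old pattern of failure
print(attribute_belief("location B", "location A", sp_mature=True))
# -> "location A": the 4-year-old pattern of success
```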

The ToMM/SP model seems to find support in a series of experiments testing the understanding of false mental and public representations in normal and autistic children. Leslie & Thaiss (1992) found that normal 3-year-olds fail both the standard false-belief tasks and the two non-mental metarepresentational tests, namely the false-map task and Zaitchik’s (1990) outdated-photograph task. In contrast, autistic children are at or near ceiling on the non-mental metarepresentational tests but fail false-belief tasks. Normal 4-year-olds succeed in all these tasks. According to Leslie and Thaiss, the ToMM/SP model can account for these findings: normal 3-year-olds possess the ToMM but not yet SP; autistic children are impaired in ToMM but not in SP; normal 4-year-olds possess both the ToMM and an adequate SP. By contrast, these results appear to be counterevidence to Perner’s idea that children first understand public representations and only then apply that understanding to mental states. If this were right, then autistic children should have difficulty with both kinds of representations. And in fact Perner (1993) suggests that the autistic deficit is due to a genetic impairment of the mechanisms that subserve attention shifting, a damage that interferes with the formation of the database required for the development of a theory of representation in general. But what autistic children’s performance in mental and non-mental metarepresentational tasks seems to show is a dissociation between understanding false maps and outdated photographs, on one hand, and understanding false beliefs, on the other. This finding is easily explained within Leslie’s domain-specific approach to mindreading, according to which children with autism have a specific deficit in understanding mental representation but not representation in general. In support of this interpretation, fMRI studies showed that activity in the right temporo-parietal junction is high while participants are thinking about false beliefs, but no different from resting levels while participants are thinking about outdated photographs or false maps or signs. This suggests a neural substrate for the behavioral dissociation between pictorial and mental metarepresentational abilities (see Saxe & Kanwisher 2003; for a critical discussion of the domain-specificity interpretation of these behavioral and neuroimaging data, see Gerrans & Stone 2008; Perner & Aichhorn 2008; Perner & Leekam 2008).

Leslie (2005) recruits new data to support his claim that mental metarepresentational abilities emerge from a specialized neurocognitive mechanism that matures during the second year of life. Standard false-belief tasks are “elicited-response” tasks in which children are asked a direct question about an agent’s false belief. But investigations using “spontaneous-response” tasks (Onishi & Baillargeon 2005) seem to suggest that the ability to attribute false beliefs is present much earlier, at the age of 15 months (even at 13 months in Surian, Caldi & Sperber 2007). However, Leslie’s mentalistic interpretation of these data has been challenged by Ruffman & Perner (2005), who have proposed an explanation of Onishi and Baillargeon’s results that assumes that the infants might be employing a non-mentalistic behavior-rule such as, “People look for objects where last seen” (for replies, see Baillargeon et al. 2010).

The ToMM has been considered, contra Fodor, as one of the strongest candidates for central modularity (see, for example, Botterill & Carruthers 1999: 67-8). However, Samuels (2006: 47) has objected that it is difficult to establish whether or not the ToMM’s domain of application is really central cognition. He suggests that the question is still more controversial in light of Leslie’s proposal of modelling ToMM as a relatively low-level mechanism of selective attention, whose functioning depends on SP, which is a non-modular mechanism, penetrable to knowledge and instruction (see Leslie, Friedman & German 2004).

c. First-Person Mindreading and Theory-Theory

During the 1980s and 1990s most of the work in Theory of Mind was concerned with the mechanisms that subserve the attribution of psychological states to others (third-person mindreading). In the last decade, however, an increasing number of psychologists and philosophers have also proposed accounts of the mechanisms underlying the attribution of psychological states to oneself (first-person mindreading).

For most theory-theorists, first-person mindreading is an interpretative activity that depends on mechanisms that capitalize on the same theory of mind used to attribute mental states to other agents. Such mechanisms are triggered by information about mind-external states of affairs, essentially the target’s behavior and/or the situation in which it occurs/occurred. The claim is, then, that there is a functional symmetry between first-person and third-person mentalistic attribution—the “outside access” view of introspection in Robbins (2006: 619); the “symmetrical” or “self/other parity” account of self-knowledge in Schwitzgebel (2010, §2.1).

The first example of a symmetrical account of self-knowledge is Bem’s (1972) “self-perception theory.” Taking Skinner’s methodological guidance as his reference point, but from a position that reveals affinities with symbolic interactionism, Bem holds that one knows one’s own inner states (for example, attitudes and emotions) through a process completely analogous to that by which one knows other people’s inner states, that is, by inferring them from the observation/recollection of one’s own behavior and/or the circumstances in which it occurs/occurred. The TT version of the symmetrical account of self-knowledge develops Bem’s approach by claiming that observations and recollections of one’s own behavior and the circumstances in which it occurs/occurred are the input to mechanisms that exploit theories applying to the same extent to ourselves and to others.

In the well-known social-psychology experiments reviewed by Nisbett & Wilson (1977), the participants’ attitudes and behavior were caused by motivational factors inaccessible to consciousness—such factors as cognitive dissonance, numbers of bystanders in a public crisis, positional and “halo” effects and subliminal cues in problem solving and semantic disambiguation, and so on. However, when explicitly asked about the motivations (causes) of their actions, the subjects did not hesitate to state, sometimes with great eloquence, their very reasonable motives. Nisbett and Wilson explained this pattern of results by arguing that the subjects did not have any direct access to the real causes of their attitudes and behavior; rather, they engaged in an activity of confabulation, that is, they exploited a priori causal theories to develop reasonable but imaginary explanations of the motivational factors of their attitudes and behavior (see also Johansson et al. 2006, where Nisbett and Wilson’s legacy is developed through a new experimental paradigm to study introspection, the “choice blindness” paradigm).

Evidence for the symmetrical account of self-knowledge comes from Nisbett & Bellows’ (1977) use of the so-called “actor-observer paradigm.” In one experiment they compared the introspective reports of participants (“actors”) to the reports of a control group of “observers” who were given a general description of the situation and asked to predict how the actors would react. Observers’ predictions were found to be statistically identical to—and as inaccurate as—the reports by the actors. This finding suggests that “both groups produced these reports via the same route, namely by applying or generating similar causal theories” (Nisbett & Wilson 1977: 250-1; see also Schwitzgebel 2010: §§2.1.2 and 4.2.1).

In developmental psychology Alison Gopnik (1993) has defended a symmetrical account of self-knowledge by arguing that there is good evidence of developmental synchronies: children’s understanding of themselves proceeds in lockstep with their understanding of others. For example, since TT assumes that first-person and third-person mentalistic attributions are both subserved by the same theory of mind, it predicts that if the theory is not yet equipped to solve certain third-person false-belief problems, then the child should also be unable to perform the parallel first-person task. A much discussed instance of parallel performance on tasks for self and other is in Gopnik & Astington (1988). In the “Smarties Box” experiment, children were shown the candy container for the British confection “Smarties” and were asked what they thought was in the container. Naturally they answered “Smarties.” The container was then opened to reveal not Smarties, but a pencil. Children were then asked a series of questions, including “What will [your friend] say is in the box?”, and then “When you first saw the box, before we opened it, what did you think was inside it?”. It turned out that the children’s ability to answer the question concerning themselves was significantly correlated with their ability to answer the question concerning another. (See also the above-cited Wellman et al. 2001, which offers meta-analytic findings to the effect that performance on false-belief tasks for self and for others is virtually identical at all ages.)

Data from autism have also been used to motivate the claim that first-person and third-person mentalistic attribution have a common basis. An intensely debated piece of evidence comes from a study by Hurlburt, Happé & Frith (1994), in which three people suffering from Asperger syndrome were tested with the descriptive experience sampling method. In this experimental paradigm, subjects are instructed to carry a random beeper, pay attention to the experience that was ongoing at the moment of the beep, and jot down notes about that now-immediately-past experience (see Hurlburt & Schwitzgebel 2007). The study showed marked qualitative differences in introspection in the autistic subjects: unlike normal subjects, who report several different phenomenal state types—including inner verbalisation, visual images, unsymbolised thinking, and emotional feelings—the first two autistic subjects reported visual images only; the third subject could report no inner experience at all. According to Frith & Happé (1999: 14), this evidence strengthens the hypothesis that self-awareness depends on the same theory of mind as other-awareness.

Thus, evidence from social psychology, developmental psychology and cognitive neuropsychiatry makes a case for a symmetrical account of self-knowledge. As Schwitzgebel (2010: §2.1.3) rightly notes, however, no one advocates a thoroughly symmetrical conception, because some margin is always left for some sort of direct self-knowledge. Nisbett & Wilson (1977: 255), for example, draw a sharp distinction between “cognitive processes” (the causal processes underlying judgments, decisions, emotions, sensations) and mental “content” (those judgments, decisions, emotions, sensations themselves). Subjects have “direct access” to this mental content, and this allows them to know it “with near certainty.” In contrast, they have no access to the processes that cause behavior. However, insofar as Nisbett and Wilson do not propose any hypothesis about this alleged direct self-knowledge, their theory is incomplete.

In order to offer an account of this supposedly direct self-knowledge, some philosophers made a more or less radical return to various forms of Cartesianism, construing first-person mindreading as a process that permits access to at least some mental phenomena in a relatively direct and non-interpretative way. On this perspective, introspective access does not appeal to theories that serve to interpret “external” information, but rather exploits mechanisms that can receive information about inner life through a relatively direct channel—the “inside access” view of introspection in Robbins (2006: 618); the “self-detection” account of self-knowledge in Schwitzgebel (2010: §2.2).

The inside access view comes in various forms. Mentalistic self-attribution may be realized by a mechanism that processes information about the functional profile of mental states, or their representational content, or both kinds of information (see Robbins 2006: 618; for a “neural” version of the inside access view, see below, §2a). A representationalist-functionalist version of the inside access view is Nichols & Stich’s (2003) account of first-person mindreading in terms of “monitoring mechanisms.” The authors begin by drawing a distinction between detection and inference. It is one thing to detect mental states; it is another to reason about them, that is, to use information about mental states to predict and explain one’s own or other people’s mental states and behavior. Moreover, both the attribution of a mental state and the inferences that one can make about it can concern oneself or other people. Thus, we get four possible operations: first- and third-person detection, and first- and third-person reasoning. Now, Nichols and Stich’s hypothesis is that whereas third-person detection and first- and third-person reasoning are all subserved by the same theory of mind, the mechanism for detecting one’s own mental states is quite independent of the mechanism that deals with the mental states of other people. More precisely, the Monitoring Mechanism (MM) theory assumes the existence of a suite of distinct self-monitoring computational mechanisms, including one for monitoring and providing self-knowledge of one’s own experiential states, and one for monitoring and providing self-knowledge of one’s own propositional attitudes. Thus, for example, if X believes that p, and the proper MM is activated, it copies the representation p in X’s “Belief Box,” embeds the copy in a representation schema of the form “I believe that ___”, and then places this second-order representation back in X’s Belief Box.
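
The copy-and-embed operation just described is simple enough to sketch. The following toy rendering uses invented structures and is not the authors’ own formalism:

```python
# A Belief Box as a set of first-order representations; the monitoring
# mechanism copies a representation, embeds it in a self-ascription
# schema, and stores the second-order result back in the box.

belief_box = {"the toy is in the basket"}

def monitor_belief(box, p):
    """If p is tokened in the Belief Box, add 'I believe that p'."""
    if p in box:
        box.add(f"I believe that {p}")
    return box

monitor_belief(belief_box, "the toy is in the basket")
print(belief_box)
# now contains both "the toy is in the basket" and
# "I believe that the toy is in the basket"
```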

Since the MM theory assumes that first-person mindreading does not involve mechanisms of the sort that figure in third-person mindreading, it implies that the first capacity should be dissociable, both diachronically and synchronically, from the second. In support of this prediction Nichols & Stich (2003) cite developmental data to the effect that, on a wide range of tasks, instead of the parallel performance predicted by TT, children exhibit developmental asynchronies. For example, children are capable of attributing knowledge and ignorance to themselves before they are capable of attributing those states to others (Wimmer et al. 1988). Moreover, they suggest—on the basis, inter alia, of a reinterpretation of the data of Hurlburt, Happé & Frith (1994) cited above—that there is some evidence of a double dissociation between schizophrenic and autistic subjects: the MMs might be intact in autistics despite their impairment in third-person mindreading; in schizophrenics the pattern might be reversed.

The MM theory provides a neo-Cartesian reply to TT, and especially to its eliminativist implications, inasmuch as the mentalistic self-attributions based on MMs are immune to the potentially distorting influence of our intuitive theory of psychology. However, the MM theory faces at least two difficulties. To start with, the theory must tell us how MM establishes which attitude type (or percept type) a given mental state belongs to (Goldman 2006: 238-9). A possibility is that there is a separate MM for each propositional attitude type and for each perceptual modality. But then, as Engelbert and Carruthers (2010: 246) remark, since any MM can be selectively impaired, the MM theory predicts a multitude of dissociations—for example, subjects who can self-attribute beliefs but not desires, or visual experiences but not auditory ones, and so on. However, the hypothesis of such a massive dissociability has little empirical plausibility.

Moreover, Carruthers (2011) has offered a book-length argument against the idea of direct access to propositional attitudes. His neurocognitive framework is Bernard Baars’ Global Workspace Theory model of consciousness (see Gennaro 2005: §4c), in which a range of perceptual systems “broadcast” their outputs (for example, sensory data from the environment, imagery, somatosensory and proprioceptive data) to a complex of conceptual systems (judgment-forming, memory-forming, desire-forming, decision-making systems, and so forth). Among the conceptual systems there is also a multi-componential “mindreading system,” which generates higher-order judgments about the mental states of others and of oneself. By virtue of receiving globally broadcast perceptual states as input, the mindreading system can easily recognize those percepts, generating self-attributions of the form “I see something red,” “It hurts,” and so on. But the system receives no input from the systems that generate propositional attitude events (like judging and deciding). Consequently, the mindreading system cannot directly self-attribute propositional attitude events; it must infer them by exploiting the perceptual input (together with the outputs of various memory systems). Thus, Carruthers (2009: 124) concludes, “self-attributions of propositional attitude events like judging and deciding are always the result of a swift (and unconscious) process of self-interpretation.” On this perspective, therefore, we do not introspect our own propositional attitude events. Our only form of access to those events is via self-interpretation: turning our mindreading faculty upon ourselves and engaging in unconscious interpretation of our own behavior, physical circumstances, and sensory events like visual imagery and inner speech. Carruthers bases his proposal on considerations to do with the evolution of mindreading and metacognition, on a rejection of the above-cited data that, according to Nichols & Stich (2003), suggest developmental asynchronies and dissociations between self-attribution and other-attribution, and on evidence about the confabulation of attitudes. Thus, Carruthers develops a very sophisticated version of the symmetrical account of self-knowledge, in which the theory-driven mechanisms underlying first- and third-person mindreading can draw not only on observations and recollections of one’s own behavior and the circumstances in which it occurs/occurred, but also on the recognition of a multitude of perceptual and quasi-perceptual events.

2. Simulation-Theory

Until the mid-1980s the debate on the nature of mindreading was a debate between different variants of TT. But in 1986, TT as a whole was impugned by Robert Gordon and, independently, by Jane Heal, each of whom proposed an alternative that came to be termed “simulation-theory” (ST). In 1989 Alvin Goldman and Paul Harris began to contribute to this new approach to mindreading. In 2006, Goldman provided the most thoroughly developed, empirically supported defense of a simulationist account of our mentalistic abilities.

According to ST, our third-person mindreading ability does not consist in implicit theorizing but rather in representing the psychological states and processes of others by mentally simulating them, that is, by attempting to generate similar states and processes in ourselves. The same resources that are engaged in our own psychological states and processes are thus recycled—usually but not only in imagination—to provide an understanding of the psychological states and processes of the simulated target. This has often been compared to the method of Einfühlung championed by the theorists of Verstehen (see Stueber 2006: 5-19).

In order for a mindreader to engage in this process of imaginative recycling, various information-processing mechanisms are needed. The mindreader simulates the psychological etiology of the target’s actions in essentially two steps. First, the simulator generates pretend or imaginary mental states in her own mind which are intended to (at least partly) correspond to those of the target. Second, the simulator feeds the imaginary states into a suitable cognitive mechanism (for example, the decision-making system) that is taken “offline,” that is, disengaged from the motor control systems. If the simulator’s decision-making system is similar to the target’s, and the pretend mental states that the simulator introduces into it (at least partly) match the target’s, then the output of the simulator’s decision-making system can reliably be attributed to the target. On this perspective, there is no need for an internally represented knowledge base and no need for a naïve theory of psychology. The simulator exploits a part of her own cognitive apparatus as a model for a part of the simulated agent’s cognitive apparatus.
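
The two-step structure can be sketched as follows. This is a bare-bones illustration under assumptions of my own (a toy decision routine and invented state labels), not a model proposed in the simulationist literature:

```python
# Offline simulation: the simulator's own decision routine is reused with
# pretend inputs, and its output is attributed to the target rather than
# passed on to motor control.

def decide(beliefs, desire):
    """Stands in for the decision-making system the simulator shares
    with the target."""
    if desire == "retrieve the toy":
        return f"search {beliefs['toy_location']}"
    return "do nothing"

def simulate_target(pretend_beliefs, pretend_desire):
    # Step 1: the pretend inputs are intended to match the target's states.
    # Step 2: run the decision system offline and attribute its output.
    return decide(pretend_beliefs, pretend_desire)

# Pretend inputs matching Sally's perspective (she last saw the toy in A):
print(simulate_target({"toy_location": "location A"}, "retrieve the toy"))
# -> "search location A", attributed to Sally rather than acted on
```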

Hence follows one of the main advantages ST is supposed to have over TT, namely its computational parsimony. According to advocates of ST, the body of tacit folk-psychological knowledge which TT attributes to mindreaders imposes too heavy a burden on mental computation. This load diminishes radically if, instead of computing over the body of knowledge posited by TT, mindreaders need only co-opt mechanisms that are primarily used online, when they themselves undergo a kind of mental state, to run offline simulations of similar states in the target (the argument is suggested by Gordon 1986 and Goldman 1995, and challenged by Stich & Nichols 1992, 1995).

In the early years of the debate over ST, a main focus was on its implications for the controversy between intentional realism and eliminative materialism. Gordon (1986) and Goldman (1989) suggested that by rejecting the assumption that folk psychology is a theory, ST undercuts eliminativism. Stich & Ravenscroft (1994: §5), however, objected that ST undermines eliminativism only if the latter adopts the subpersonal version of TT. For ST does not deny the evident fact that human beings have intuitions about the mental, nor does it rule out that such intuitions might be systematized by building, as David Lewis suggests, a theory that implies them. Consequently, ST does not refute eliminativism; it merely forces the eliminativist to include among the premises of her argument Lewis’ personal-level formulation of TT, together with the observation/prediction that the theory implicit in our everyday talk about mental states is or will turn out to be seriously defective.

One of the main objections that theory-theorists raise against ST is the argument from systematic errors in prediction. According to ST, errors in prediction can arise either (i) because the predictor’s executive system is different from that of the target, or (ii) because the pretend mental states that the predictor has introduced into the executive system do not match the ones that actually motivate the target. However, Stich & Nichols (1992, 1995; see also Nichols et al. 1996) describe experimental situations in which the participants systematically fail to predict the behavior of targets, and in which it is unlikely that (i) or (ii) is the source of the problem. Now, TT can easily explain such systematic errors in prediction: it is sufficient to assume that our naïve theory of psychology lacks the resources required to account for such situations. It is no surprise that a folk theory that is incomplete, partial, and in many cases seriously defective often causes predictive failures. But this option is obviously not available to ST: simulation-driven predictions are “cognitively impenetrable,” that is, they are not affected by the predictor’s knowledge or ignorance about psychological processes (see also Saxe 2005; and the replies by Gordon 2005 and Goldman 2006: 173-4).

More recently, however, a consensus seems to be emerging to the effect that mindreading involves both TT and ST. For example, Goldman (2006) grants a variety of possible roles for theorizing in the context of what he calls “high-level mindreading.” This is the imaginative simulation discussed so far, which is subject to voluntary control, is accessible to consciousness, and involves the ascription of complex mental states such as propositional attitudes. High-level simulation is a species of what Goldman terms “enactment imagination” (a notion that builds on Currie & Ravenscroft’s 2002 concept of “recreative imagination”). Goldman contrasts high-level mindreading with “low-level mindreading,” which is unconscious, hard-wired, involves the attribution of structurally simple mental states such as face-based emotions (for example, joy, fear, disgust), and relies on simple imitative or mirroring processes (see, for example, Goldman & Sripada 2005). Now, theory definitely plays a role in high-level mindreading. In a prediction task, for example, theory may be involved in the selection of the imaginary inputs that will be introduced into the executive system. In this case, Goldman (2006: 44) admits, mindreading depends on the cooperation of simulation and theorizing mechanisms.

Goldman’s blend of ST and TT (albeit with a strong emphasis on the simulative component) is not the only “hybrid” account of mindreading: for other hybrid approaches, see Botterill & Carruthers (1999), Nichols & Stich (2003), and Perner & Kühberger (2006). Indeed, the debate now aims above all to establish to what extent, and in which processes, theory or simulation prevails.

a. Simulation with and without Introspection

There is one respect, however, in which Goldman’s (2006) account of ST differs from other hybrid theories of mindreading, namely the neo-Cartesian priority that he assigns to introspection. On his view, first-person mindreading both ontogenetically precedes and grounds third-person mindreading. Mindreaders need to introspectively access the offline products of their simulations before they can project them onto the target. And this, Goldman claims, is a form of “direct access.”

In 1993 Goldman put forward a phenomenological version of the inside access view (see above, §1c), arguing that introspection is a process of detection and classification of one’s (current) psychological states that does not depend at all on theoretical knowledge, but rather operates on information about the phenomenological properties of such states. In light of criticism (Carruthers 1996; Nichols & Stich 2003), however, in his 2006 book Goldman considerably revised his assessment of the qualitative component in the detection of psychological states, stressing instead the centrality of neural properties. Building on Craig’s (2002) account of interoception, as well as Marr’s and Biederman’s computational models of visual object recognition, Goldman now maintains that introspection is a perception-like process involving a transduction mechanism that takes neural properties of mental states as input and outputs representations in a proprietary code (the introspective code, or “I-code”). The I-code represents types of mental categories and classifies mental-state tokens in terms of those categories. Goldman also suggests some possible primitives of the I-code. For example, our coding of the concept of pain might be the combination of a “bodily feeling” parameter (a certain raw feeling) with a “preference” or “valence” parameter (a negative valence toward the feeling). Thus, the neural version of the inside access view is an attempt to solve the problem of the recognition of the attitude type, which proved problematic for Nichols and Stich’s representationalist-functionalist approach (see above, §1c). However, since different percept and attitude types are presumably realized in different cerebral areas, each percept or attitude type will depend on a specific informational channel to feed the introspective mechanism. Consequently, Goldman’s theory also seems open to the objection of massive dissociability raised against the MM theory (see Engelbert and Carruthers 2010: 247).
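
Goldman’s parameter-combination proposal can be caricatured in a few lines of code. The parameter names and values below are invented glosses on his pain example, not his actual primitives:

```python
# Introspective classification as combination of I-code parameters: a state
# token is categorized by the values it takes on a small set of dimensions
# (here, just bodily feeling and valence).

def classify(bodily_feeling, valence):
    """Map a pair of I-code parameter values to a mental category."""
    if bodily_feeling == "raw hurt" and valence == "negative":
        return "PAIN"
    return "UNCLASSIFIED"

print(classify("raw hurt", "negative"))  # -> "PAIN"
```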

Goldman’s primacy of first-person mindreading is, however, rejected by other simulationists. According to Gordon’s (1995, 1996) “radical” version of ST, simulation can occur without introspective access to one’s own mental states. The simulative process begins not with my pretending to be the target, but rather with my becoming the target. As Gordon (1995: 54) puts it, simulation is not “a transfer but a transformation.” “I” changes its referent, and the equivalence “I = target” is established. In virtue of this de-rigidification of the personal pronoun, any introspective step is ruled out: one does not first assign a psychological state to oneself and then transfer it to the target. Since the simulator becomes the target, no analogical inference from oneself to the other is needed. Still more radically, simulation can occur without the possession of any mentalistic concepts. Our basic competence in the use of utterances of the form “I <propositional attitude> that p” involves not direct access to the propositional attitudes, but only an “ascent routine” through which we express our propositional attitudes in this new linguistic form (see Gordon 2007).

Carruthers has raised two objections to Gordon’s radical ST. First, it is a “step back” to a form of “quasi-behaviorism” (Carruthers 1996: 38). Second, Gordon problematically assumes that our mentalistic abilities are constituted by language (Carruthers 2011: 225-27). In developmental psychology de Villiers & de Villiers (2003) have put forward a constitution thesis similar to Gordon’s: thinking about mental states comes from internalizing the language with which these states are expressed in the child’s linguistic environment. More specifically, mastery of the grammatical rules for embedding tensed complement clauses under verbs of speech or cognition provides children with a representational format necessary for dealing with false beliefs. However, the correlation between linguistic exposure and mindreading does not depend on the use of specific grammatical structures. In a training study Lohmann & Tomasello (2003) found that performance on a false-belief task is enhanced simply by using perspective-shifting discourse, without any use of sentential-complement syntax. Moreover, syntax is not constitutive of the mentalistic capacities of adults. Varley et al. (2001) and Apperly et al. (2006) provided clear evidence that adults with profound grammatical impairment show no impairments on non-verbal tests of mindreading. Finally, mastery of sentence complements is not even a necessary condition of the development of mindreading in children. Perner et al. (2005) have shown that such mastery may be required for statements about beliefs but not about desires (as in English), for beliefs and desires (as in German), or for neither beliefs nor desires (as in Chinese); and yet children who learn each of these three languages all understand and talk about desire significantly earlier than belief.

b. Simulation in Low-Level Mindreading

Another argument for a (predominantly) simulationist approach to mindreading consists in pointing out that TT is confined to high-level mindreading (essentially the attribution of propositional attitudes), whereas ST is also well equipped to account for forms of low-level mindreading such as the perception of emotions or the recognition of facial expressions and motor intentions (see Slors & Macdonald 2008: 155).

This claim finds its main support in the interplay between ST and neuroscience. In the early 1990s mirror neurons were first described in the ventral premotor cortex and inferior parietal lobe of macaque monkeys. These visuomotor neurons activate not only when the monkey executes motor acts (such as grasping, manipulating, holding, and tearing objects), but also when it observes the same, or similar, acts performed by the experimenter or a conspecific. Although there is only one study that seems to offer direct evidence for the existence of mirror neurons in humans (Mukamel et al. 2010), many neurophysiological and brain imaging investigations support the existence of a human action mirroring system. For example, fMRI studies using action observation or imitation tasks demonstrated activation in areas in the human ventral premotor and parietal cortices assumed to be homologous to the areas in the monkey cortex containing mirror neurons (see Rizzolatti et al. 2002). It should be emphasized that most of the mirror neurons that discharge when a certain type of motor act is performed also activate when the same act is perceived, even though it is not performed with the same physical movement—for example, many mirror neurons that discharge when the monkey grasps food with the hand also activate when it sees a conspecific who grasps food with the mouth. This seems to suggest that mirror neurons code or represent an action at a high level of abstraction, that is, they are receptive not only to a mere movement but indeed to an action.

In 1998, Vittorio Gallese and Goldman wrote a very influential article in which mirror neurons were identified as the basis of the simulative process. When the mirror neurons in the simulator’s brain are externally activated in observation mode, their activity matches (simulates or resonates with) that of mirror neurons in the target’s brain, and this resonance process retrodictively outputs a representation of the target’s intention from a perception of her movement.

More recently a number of objections have been raised against the “resonance” ST advocated by some researchers who have built on Gallese and Goldman’s hypothesis. Some critics, although admitting the presence of mirror neurons in both non-human and human primates, have drastically reappraised their role in mindreading. For example, Saxe (2009) has argued that there is no evidence that mirror neurons represent the internal states of the target rather than some relatively abstract properties of observed actions (see also Jacob & Jeannerod 2005; Jacob 2008). On the other hand, Goldman himself has moderated his original position. Unlike Gallese, Keysers & Rizzolatti (2004), who propose mirror systems as the unifying basis of all social cognition, Goldman (2006) now considers mirror-neuron activity, or motor resonance in general, as merely a possible part of low-level mindreading. Nonetheless, it is fair to say that resonance phenomena are at the forefront of the field of social neuroscience (see Slors & Macdonald 2008: 156).

3. Social Cognition without Mindreading

By the early 21st century, the primacy that both TT and ST assign to mindreading in social cognition had been challenged. One line of attack has come from philosophers working in the phenomenological tradition, such as Shaun Gallagher, Matthew Ratcliffe, and Dan Zahavi (see Gallagher & Zahavi 2008). Others working more from the analytic tradition, such as José Luis Bermúdez (2005, 2006b), Dan Hutto (2008), and Heidi Maibom (2003, 2007), have made similar points. Let us focus on Bermúdez’s contribution, because he offers a very clear account of the kind of cognitive mechanisms that might subserve forms of social understanding and coordination without mindreading (for a brief overview of this literature, see Slors & Macdonald 2008; for an exhaustive examination, see Herschbach 2010).

Bermúdez (2005) argues that the role of high-level mindreading in social cognition needs to be drastically re-evaluated. We must rethink the traditional nexus between intelligent behavior and propositional attitudes, realizing that much social understanding and social coordination are subserved by mechanisms that do not capitalize on the machinery of intentional psychology. For example, a mechanism of emotional sensitivity such as “social referencing” is a form of low-level mindreading that subserves social understanding and social coordination without involving the attribution of propositional attitudes (see Bermúdez 2006a: 55).

To this point Bermúdez is on the same wavelength as simulationists and social neuroscientists in drawing our attention to forms of low-level mindreading that have been largely neglected by philosophers. However, Bermúdez goes a step beyond them and explores cases of social interactions that point in a different direction, that is, situations that involve mechanisms that can no longer be described as mindreading mechanisms. He offers two examples.

(1) In game theory there are social interactions that are modeled without assuming that the agents involved are engaged in explaining or predicting each other’s behavior. In social situations that have the structure of the iterated prisoner’s dilemma, the so-called “tit-for-tat” heuristic simply says: “start out cooperating and then mirror your partner’s move for each successive move” (Axelrod 1984). Applying this heuristic requires only understanding the moves available to each player (cooperation or defection) and remembering what happened in the last round, as the sketch following example (2) below illustrates. So we have here a case of social interaction that is conducted on the basis of a heuristic strategy that looks backward to the results of previous interactions rather than to their psychological etiology. We do not need to infer the other players’ reasons; we only have to coordinate our behavior with theirs.

(2) There is another important class of social interactions that involve our predicting and/or explaining the actions of other participants, but in which the relevant predictions and explanations seem to proceed without our having to attribute propositional attitudes. These social interactions rest on what social psychologists call “scripts” (“frames” in artificial intelligence), that is, complex information structures that allow predictions to be made on the basis of the specification of the purpose of some social practice (for example, eating a meal at a restaurant), the various individual roles, and the appropriate sequence of moves.
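
Example (1) admits of a very short implementation. The following is a minimal sketch; encoding the moves as 'C' (cooperate) and 'D' (defect) follows the usual game-theoretic convention:

```python
# Tit-for-tat for the iterated prisoner's dilemma: cooperate on the first
# round, then mirror the partner's previous move. No beliefs or desires are
# attributed to the partner; only the last round needs to be remembered.

def tit_for_tat(partner_history):
    """partner_history: list of the partner's past moves, 'C' or 'D'."""
    if not partner_history:
        return "C"                  # start out cooperating
    return partner_history[-1]      # mirror the partner's last move

print(tit_for_tat([]))              # -> 'C'
print(tit_for_tat(["C", "D"]))      # -> 'D'
```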

According to Bermúdez, then, much social interaction is enabled by a suite of relatively simple mechanisms that exploit purely behavioral regularities. It is important to note that these mechanisms subserve central social cognition (in Fodor’s sense). Nevertheless, they implement relatively simple processes of template matching and pattern recognition, that is, processes that are paradigmatic cases of perceptual processing. For example, when a player A applies the tit-for-tat rule, A must determine what the other player B did in the preceding round. This can be implemented by a template-matching process in which A verifies that B’s behavioral pattern matches A’s prototype of cooperation or of defection. Detecting the social roles implicated in a script-based interaction is likewise a case of template matching: one verifies whether the perceived behavior matches one of the templates associated with the script (or the prototype represented in the “frame”).
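
A toy example may help to fix ideas about template matching. Here an observed behavior, coded as a feature vector, is classified by its distance from stored prototypes; the features and prototype values are invented for illustration:

    # Toy illustration of template matching: classify an observed
    # behavior as cooperation or defection by comparing its features
    # to stored prototypes. Features and values are invented.

    PROTOTYPES = {
        "cooperation": (1.0, 0.0),   # (resources shared, resources taken)
        "defection":   (0.0, 1.0),
    }

    def classify(observed):
        """Return the label of the prototype closest to the observation."""
        def distance(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return min(PROTOTYPES, key=lambda label: distance(PROTOTYPES[label], observed))

    print(classify((0.9, 0.2)))  # "cooperation"

Nothing in this process represents the other agent’s mental states; it is pattern recognition of the sort familiar from perceptual processing.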

Bermúdez (2005: 223) notes that the idea that much of what we intuitively identify as central processing is actually implemented by mechanisms of template matching and pattern recognition has been repeatedly put forward by advocates of connectionist computationalism, especially by Paul M. Churchland. But unlike the latter, Bermúdez does not carry the reappraisal of the role of propositional attitudes in social cognition to the point of their elimination; he argues that social cognition does not involve high-level mindreading when the social world is “transparent” or “ready-to-hand,” as he puts it, borrowing Heidegger’s zuhanden. However, when we find ourselves in social situations that are “opaque,” that is, situations in which all the standard mechanisms of social understanding and interpersonal negotiation break down, it seems that we cannot help but appeal to the type of metarepresentational thinking characteristic of intentional psychology (2005: 205–6).

4. References and Further Reading

a. Suggested Further Reading

  • Apperly, I. (2010). Mindreaders: The Cognitive Basis of “Theory of Mind.” Hove, East Sussex, Psychology Press.
  • Carruthers, P. and Smith, P. K. (eds.) (1996). Theories of Theories of Mind. Cambridge, Cambridge University Press.
  • Churchland, P. M. (1994). “Folk Psychology (2).” In S. Guttenplan (ed.), A Companion to the Philosophy of Mind, Oxford, Blackwell, pp. 308–316.
  • Cundall, M. (2008). “Autism.” In The Internet Encyclopedia of Philosophy.
  • Davies, M. and Stone, T. (eds.) (1995a). Folk Psychology: The Theory of Mind Debate. Oxford, Blackwell.
  • Davies, M. and Stone, T. (eds.) (1995b). Mental Simulation: Evaluations and Applications. Oxford, Blackwell.
  • Decety, J. and Cacioppo, J. T. (2011). The Oxford Handbook of Social Neuroscience. Oxford, Oxford University Press.
  • Doherty, M. J. (2009). Theory of Mind. How Children Understand Others’ Thoughts and Feelings. Hove, East Sussex, Psychology Press.
  • Dokic, J. and Proust, J. (eds.) (2002). Simulation and Knowledge of Action. Amsterdam, John Benjamins.
  • Gerrans, P. (2009). “Imitation and Theory of Mind.” In G. Berntson and J. T. Cacioppo (eds.), Handbook of Neuroscience for the Behavioral Sciences. Chicago, University of Chicago Press, vol. 2, pp. 905–922.
  • Gordon, R. M. (2009). “Folk Psychology as Mental Simulation.” In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Fall 2009 Edition).
  • Hutto, D., Herschbach, M. and Southgate, V. (eds.) (2011). Special Issue “Social Cognition: Mindreading and Alternatives.” Review of Philosophy and Psychology 2(3).
  • Kind, A. (2005). “Introspection.” In The Internet Encyclopedia of Philosophy.
  • Meini, C. (2007). “Naïve psychology and simulations.” In M. Marraffa, M. De Caro and F. Ferretti (eds.), Cartographies of the Mind. Dordrecht, Kluwer, pp. 283–294.
  • Nichols, S. (2002). “Folk Psychology.” In Encyclopedia of Cognitive Science. London, Nature Publishing Group, pp. 134–140.
  • Ravenscroft, I. (2010). “Folk Psychology as a Theory.”  In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Fall 2010 Edition).
  • Rizzolatti, G., Sinigaglia, C. and Anderson, F. (2007). Mirrors in the Brain. How Our Minds Share Actions, Emotions, and Experience. Oxford, Oxford University Press.
  • Saxe, R. (2009). “The happiness of the fish: Evidence for a common theory of one’s own and others’ actions.” In K. D. Markman, W. M. P. Klein and J. A. Suhr (eds.), The Handbook of Imagination and Mental Simulation. New York, Psychology Press, pp. 257–266.
  • Shanton, K. and Goldman, A. (2010). “Simulation theory.” Wiley Interdisciplinary Reviews: Cognitive Science 1(4): 527–538.
  • Stich, S. and Rey, G. (1998). “Folk psychology.” In E. Craig (ed.), Routledge Encyclopedia of Philosophy. London, Routledge.
  • Von Eckardt, B. (1994). “Folk Psychology (1).” In S. Guttenplan (ed.), A Companion to the Philosophy of Mind. Oxford, Blackwell, pp. 300–307.
  • Weiskopf, D. A. (2011). “The Theory-Theory of Concepts.” In The Internet Encyclopedia of Philosophy.

b. References

  • Apperly, I.A., Samson, D., Carroll, N., Hussain, S. and Humphreys, G. (2006). “Intact first- and second-order false belief reasoning in a patient with severely impaired grammar.” Social Neuroscience 1(3-4): 334–348.
  • Axelrod, R. (1984). The Evolution of Cooperation. New York, Basic Books.
  • Baillargeon, R., Scott, R.M. and He, Z. (2010). “False-belief understanding in infants.”  Trends in Cognitive Sciences 14(3): 110–118.
  • Bem, D. J. (1972). “Self-Perception Theory.” In L. Berkowitz (ed.), Advances in Experimental Social Psychology. New York, Academic Press, vol. 6, pp. 1–62.
  • Bermúdez, J. L. (2005). Philosophy of Psychology: A Contemporary Introduction. London, Routledge.
  • Bermúdez, J. L. (2006a). “Commonsense psychology and the interface problem: Reply to Botterill.” SWIF Philosophy of Mind Review 5(3): 54–57.
  • Bermúdez, J. L. (2006b), “Arguing for eliminativism.” In B. L. Keeley (ed.), Paul Churchland. Cambridge, Cambridge University Press, pp. 32–65.
  • Bickle, J. (2003). Philosophy and Neuroscience: A Ruthlessly Reductive Account. Dordrecht, Kluwer.
  • Botterill, G. and Carruthers, P. (1999). The Philosophy of Psychology. Cambridge, Cambridge University Press.
  • Carey, S. and Spelke, E. (1996). “Science and core knowledge.” Philosophy of Science 63: 515–533.
  • Carruthers, P. (1996). “Simulation and self-knowledge.” In P. Carruthers and P. K. Smith (eds.), Theories of Theories of Mind. Cambridge, Cambridge University Press, pp. 22–38.
  • Carruthers, P. (2006). The Architecture of the Mind. Oxford, Oxford University Press.
  • Carruthers, P. (2009). “How we know our own minds: The relationship between mindreading and metacognition.” Behavioral and Brain Sciences 32: 121–138.
  • Carruthers, P. (2011). The Opacity of Mind: The Cognitive Science of Self-Knowledge. Oxford, Oxford University Press.
  • Craig, A. D. (2002). “How do you feel? Interoception: The sense of the physiological condition of the body.” Nature Reviews Neuroscience 3: 655–666.
  • Currie, G. and Ravenscroft, I. (2002). Recreative Minds: Imagination in Philosophy and Psychology. Oxford, Oxford University Press.
  • de Villiers, J. G. and de Villiers P. A. (2003). “Language for thought: Coming to understand false beliefs.” In D. Gentner and S. Goldin-Meadow (eds.), Language in Mind. Cambridge, MIT Press, pp. 335–384.
  • Engelbert, M. and Carruthers, P. (2010). “Introspection.” Wiley Interdisciplinary Reviews: Cognitive Science 1: 245–253.
  • Fogassi, L. and Ferrari P. F. (2010). “Mirror systems.” Wiley Interdisciplinary Reviews: Cognitive Science 2(1): 22–38.
  • Frith, C. (1992). Cognitive Neuropsychology of Schizophrenia. Hove, Erlbaum.
  • Frith, U. and Happé, F. (1999). “Theory of mind and self-consciousness: What is it like to be autistic?” Mind & Language 14(1): 1–22.
  • Gallagher, S. and Zahavi, D. (2008). The Phenomenological Mind. London, Routledge.
  • Gallese, V. and Goldman, A. (1998). “Mirror neurons and the simulation theory of mind-reading.” Trends in Cognitive Sciences 2(12): 493–501.
  • Gallese, V., Keysers, C. and Rizzolatti, G. (2004). “A unifying view of the basis of social cognition.” Trends in Cognitive Sciences 8: 396–403.
  • Gennaro, R. J. (2005). “Consciousness.” In The Internet Encyclopedia of Philosophy.
  • Gerrans, P. and Stone, V. E. (2008). “Generous or parsimonious cognitive architecture? Cognitive neuroscience and Theory of Mind.” British Journal for the Philosophy of Science 59: 121–141.
  • Goldman, A. I. (1989). “Interpretation psychologized.” Mind and Language 4: 161–185; reprinted in M. Davies and T. Stone (eds.), Folk Psychology. Oxford, Blackwell, 1995, pp. 74–99.
  • Goldman, A. I. (1993). “The psychology of folk psychology.” Behavioral and Brain Sciences 16: 15–28.
  • Goldman, A. I. (1995). “In defense of the simulation theory.” In M. Davies and T. Stone (eds.), Folk Psychology. Oxford, Blackwell, pp. 191–206.
  • Goldman, A. I. (2006). Simulating Minds. Oxford, Oxford University Press.
  • Goldman, A. I. and Sripada, C. (2005). “Simulationist models of face-based emotion recognition.” Cognition 94: 193–213.
  • Gopnik, A. (1993). “How we read our own minds: The illusion of first-person knowledge of intentionality.” Behavioral and Brain Sciences 16: 1–14.
  • Gopnik, A. and Astington, J. W. (1988). “Children’s understanding of representational change and its relation to the understanding of false belief and the appearance-reality distinction.” Child Development 59: 26–37.
  • Gopnik, A. and Meltzoff, A. (1997). Words, Thoughts, and Theories. Cambridge, MA, MIT Press.
  • Gopnik, A. and Schulz, L. (2004). “Mechanisms of theory-formation in young children.” Trends in Cognitive Sciences 8(8): 371–377.
  • Gopnik, A. and Schulz, L. (eds.) (2007). Causal Learning: Psychology, Philosophy, and Computation. New York, Oxford University Press.
  • Gordon, R. M. (1986). “Folk psychology as simulation.” Mind and Language 1: 158–171; reprinted in M. Davies and T. Stone (eds.), Folk Psychology. Oxford, Blackwell, 1995, pp. 60–73.
  • Gordon, R. M. (1995). “Simulation without introspection or inference from me to you.” In M. Davies and T. Stone (eds.), Mental Simulation: Evaluations and Applications. Oxford, Blackwell, pp. 53–67.
  • Gordon, R. M. (1996). “Radical simulationism.” In P. Carruthers and P. Smith (eds.), Theories of Theories of Mind. Cambridge, Cambridge University Press, pp. 11–21.
  • Gordon, R. M. (2005). “Simulation and systematic errors in prediction.” Trends in Cognitive Sciences 9: 361–362.
  • Gordon, R. M. (2007). “Ascent routines for propositional attitudes.” Synthese 159: 151–165.
  • Grice, H. P. (1989). Studies in the Way of Words. Cambridge, MA, Harvard University Press.
  • Harris, P. L. (1989). Children and Emotion: The Development of Psychological Understanding. Oxford, Blackwell.
  • Harris, P. L. (2000). The Work of the Imagination. Oxford: Blackwell.
  • Heider, F. (1958). The Psychology of Interpersonal Relations, New York, Wiley.
  • Heider, F. and Simmel, M. (1944). “An experimental study of apparent behavior.” American Journal of Psychology 57: 243–259.
  • Herschbach, M. (2010). Beyond Folk Psychology? Toward an Enriched Account of Social Understanding. PhD dissertation, University of California, San Diego.
  • Hurlburt, R., Happé, F. and Frith, U. (1994). “Sampling the form of inner experience in three adults with Asperger syndrome.” Psychological Medicine 24: 385–395.
  • Hurlburt, R. T. and Schwitzgebel, E. (2007). Describing Inner Experience? Proponent Meets Skeptic. Cambridge, MA, MIT Press.
  • Hutto, D. D. (2008). Folk Psychological Narratives: The Sociocultural Basis of Understanding Reasons. Cambridge, MA, MIT Press.
  • Jacob, P. (2008). “What do mirror neurons contribute to human social cognition?” Mind and Language 23: 190–223.
  • Jacob, P. and Jeannerod, M. (2005). “The motor theory of social cognition: A critique.” Trends in Cognitive Sciences 9: 21–25.
  • Johansson, P., Hall, L., Sikström, S., Tärning, B. and Lind, A. (2006). “How something can be said about telling more than we can know: On choice blindness and introspection.” Consciousness and Cognition 15: 673–692.
  • Leslie, A.M. (1994). “ToMM, ToBy, and agency: Core architecture and domain specificity.” In L. Hirschfeld and S. Gelman (eds.), Mapping the Mind: Domain Specificity in Cognition and Culture. Cambridge, Cambridge University Press, pp. 119–148.
  • Leslie, A.M. (1998). “Mind, child’s theory of.” In E. Craig (ed.), Routledge Encyclopedia of Philosophy. London, Routledge.
  • Leslie, A. M. (2000). “‘Theory of mind’ as a mechanism of selective attention.” In M. Gazzaniga (ed.), The New Cognitive Neurosciences. Cambridge, MA, MIT Press, 2nd Edition, pp. 1235–1247.
  • Leslie, A. M. (2005). “Developmental parallels in understanding minds and bodies.” Trends in Cognitive Sciences 9(10): 459–462.
  • Leslie, A. M., Friedman, O. and German, T. P. (2004). “Core mechanisms in ‘theory of mind’.” Trends in Cognitive Sciences 8(12): 528–533.
  • Leslie, A. M. and Polizzi, P. (1998). “Inhibitory processing in the false belief task: two conjectures.” Developmental Science 1: 247–254.
  • Leslie, A.M. and Thaiss, L. (1992). “Domain specificity in conceptual development: Neuropsychological evidence from autism.” Cognition 43: 225–251.
  • Lewis, D. (1972). “Psychophysical and theoretical identifications.” Australasian Journal of Philosophy 50: 249–258.
  • Lohmann, H. and Tomasello, M. (2003). “The role of language in the development of false belief understanding: A training study.” Child Development 74: 1130–1144.
  • Maibom, H. L. (2003). “The mindreader and the scientist.” Mind & Language 18(3): 296–315.
  • Maibom, H. L. (2007). “Social systems.” Philosophical Psychology 20(5): 557–578.
  • Malle, B. F. and Ickes, W. (2000). “Fritz Heider: Philosopher and psychologist.” In G. A. Kimble and M. Wertheimer (eds.), Portraits of Pioneers in Psychology. Washington (DC), American Psychological Association, vol. IV, pp. 195–214.
  • Morton, A. (1980). Frames of Mind. Oxford, Oxford University Press.
  • Mukamel, R., Ekstrom, A.D., Kaplan, J., Iacoboni, M. and Fried, I. (2010). “Single-Neuron Responses in Humans during Execution and Observation of Actions.” Current Biology 20: 750–756.
  • Nagel, E. (1961). The Structure of Science. New York, Harcourt, Brace, and World.
  • Nichols, S. and Stich, S. (2003). Mindreading. Oxford, Oxford University Press.
  • Nichols, S., Stich, S., Leslie, A. and Klein, D. (1996). “Varieties of Off-Line Simulation.” In P. Carruthers and P. Smith (eds.), Theories of Theories of Mind. Cambridge, Cambridge University Press, pp. 39–74.
  • Nisbett, R. E. and Bellows, N. (1977). “Verbal reports about causal influences on social judgments: Private access versus public theories.” Journal of Personality and Social Psychology, 35: 613–624.
  • Nisbett, R. and Wilson, T. (1977). “Telling more than we can know: Verbal reports on mental processes.” Psychological Review 84: 231–259.
  • Onishi, K. H. and Baillargeon, R. (2005). “Do 15-month-old infants understand false beliefs?” Science 308: 255–258.
  • Perner, J. (1991). Understanding the Representational Mind. Cambridge, MA, MIT Press.
  • Perner, J. and Aichhorn, M. (2008). “Theory of Mind, language, and the temporo-parietal junction mystery.” Trends in Cognitive Sciences 12(4): 123–126.
  • Perner, J., Baker, S. and Hutton, D. (1994). “Prelief: The conceptual origins of belief and pretence.” In C. Lewis and P. Mitchell (eds.), Children’s Early Understanding of Mind. Hillsdale, NJ, Erlbaum, pp. 261–286.
  • Perner, J. and Kühberger, A. (2005). “Mental simulation: Royal road to other minds?” In B. F. Malle and S. D. Hodges (eds.), Other Minds. New York, Guilford Press, pp. 166–181.
  • Perner, J. and Leekam, S. (2008). “The curious incident of the photo that was accused of being false: Issues of domain specificity in development, autism, and brain imaging.” The Quarterly Journal of Experimental Psychology 61(1): 76–89.
  • Perner, J., Zauner, P. and Sprung, M. (2005). “What does ‘that’ have to do with point of view? Conflicting desires and ‘want’ in German.” In J. W. Astington and J. A. Baird (eds.), Why Language Matters for Theory of Mind. Oxford, Oxford University Press, pp. 220–244.
  • Ramsey, W. (2011). “Eliminative Materialism.” In E.N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Spring 2011 Edition).
  • Rizzolatti, G., Fogassi, L. and Gallese V. (2002). “Motor and cognitive functions of the ventral premotor cortex.” Current Opinion in Neurobiology 12:149–154.
  • Robbins, P. (2006). “The ins and outs of introspection.” Philosophy Compass 1(6): 617–630.
  • Ruffman, T. and Perner, J. (2005). “Do infants really understand false belief?” Trends in Cognitive Sciences 9(10): 462–463.
  • Samuels, R. (1998). “Evolutionary psychology and the massive modularity hypothesis.” The British Journal for the Philosophy of Science 49: 575–602.
  • Samuels, R. (2000). “Massively modular minds: Evolutionary psychology and cognitive architecture.” In P. Carruthers and A. Chamberlain (eds.). Evolution and the Human Mind. Cambridge, Cambridge University Press, pp. 13–46.
  • Samuels, R. (2006). “Is the mind massively modular?” In R. J. Stainton (ed.), Contemporary Debates in Cognitive Science. Oxford, Blackwell, pp. 37–56.
  • Saxe, R. (2005). “Against simulation: The argument from error.” Trends in Cognitive Sciences 9: 174–179.
  • Saxe, R. (2009). “The neural evidence for simulation is weaker than I think you think it is.” Philosophical Studies 144: 447–456.
  • Saxe, R. and Kanwisher, N. (2003). “People thinking about thinking people: The role of the temporo-parietal junction in ‘theory of mind’.” NeuroImage 19: 1835–1842.
  • Schwitzgebel, E. (2010). “Introspection.” In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Fall 2010 Edition).
  • Sellars, W. (1956). “Empiricism and the philosophy of mind.” In Science, Perception and Reality. London and New York, Routledge & Kegan Paul, 1963, pp. 127–196.
  • Simpson, T., Carruthers, P., Laurence, S. and Stich, S. (2005). “Introduction: Nativism past and present.” In P. Carruthers, S. Laurence and S. Stich (eds.), The Innate Mind: Structure and Contents. Oxford, Oxford University Press, pp. 3–19.
  • Slors, M. and Macdonald, C. (2008). “Rethinking folk-psychology: Alternatives to theories of mind.” Philosophical Explorations 11(3): 153–161.
  • Spelke, E.S. and Kinzler, K.D. (2007). “Core knowledge.” Developmental Science 10: 89–96.
  • Stich, S. (1983). From Folk Psychology to Cognitive Science: The Case Against Belief. Cambridge, MA, MIT Press.
  • Stich, S. and Nichols, S. (1992). “Folk Psychology: Simulation or Tacit Theory?” Mind & Language 7(1): 35–71; reprinted in M. Davies and T. Stone (eds.), Folk Psychology. Oxford, Blackwell, 1995, pp. 123–158.
  • Stich, S. and Nichols, S. (1995). “Second Thoughts on Simulation.” In M. Davies and T. Stone (eds.), Mental Simulation: Evaluations and Applications. Oxford, Blackwell, pp. 87–108.
  • Stich, S. and Nichols, S. (2003). “Folk Psychology.” In S. Stich and T. A. Warfield (eds.), The Blackwell Guide to Philosophy of Mind. Oxford, Blackwell, pp. 235–255.
  • Stich, S. and Ravenscroft, I. (1994). “What is folk psychology?” Cognition 50: 447–468.
  • Stueber, K. R. (2006). Rediscovering Empathy: Agency, Folk Psychology, and the Human Sciences. Cambridge, MA, MIT Press.
  • Surian, L., Caldi, S. and Sperber, D. (2007). “Attribution of beliefs by 13-month-old infants.” Psychological Science 18(7): 580–586.
  • Varley, R., Siegal, M. and Want, S.C. (2001). “Severe impairment in grammar does not preclude theory of mind.” Neurocase 7: 489–493.
  • Wellman, H. M. (1990). The Child’s Theory of Mind, Cambridge, MA, MIT Press.
  • Wellman, H. M., Cross, D. and Watson, J. (2001). “Meta-analysis of theory-of-mind development: The truth about false belief.” Child Development 72: 655–684.
  • Wilson, D. (2005). “New directions for research on pragmatics and modularity.” Lingua 115: 1129–1146.
  • Wimmer, H., Hogrefe, G. and Perner, J. (1988). “Children’s understanding of informational access as a source of knowledge.” Child Development 59: 386–396.
  • Wimmer, H. and Perner, J. (1983). “Beliefs about beliefs: Representation and constraining function of wrong beliefs in young children’s understanding of deception.” Cognition 13: 103–128.
  • Zaitchik, D. (1990). “When representations conflict with reality: The preschooler’s problem with false beliefs and ‘false’ photographs.” Cognition 35: 41–68.

 

Author Information

Massimo Marraffa
Email: marraffa@uniroma3.it
University Roma Tre
Italy

Edmund Husserl: Intentionality and Intentional Content

Edmund Husserl (1859—1938) was an influential thinker of the first half of the twentieth century. His philosophy was heavily influenced by the works of Franz Brentano and Bernard Bolzano, and was also influenced in various ways by interaction with contemporaries such as Alexius Meinong, Kasimir Twardowski, and Gottlob Frege. In his own right, Husserl is considered the founder of twentieth century Phenomenology with influence extending to thinkers such as Martin Heidegger, Jean-Paul Sartre, Maurice Merleau-Ponty, and to contemporary continental philosophy generally. Husserl’s philosophy is also being discussed in connection with contemporary research in the cognitive sciences, logic, the philosophy of language, and the philosophy of mind, as well as in discussions of collective intentionality. At the center of Husserl’s philosophical investigations is the notion of the intentionality of consciousness and the related notion of intentional content (what Husserl first called ‘act-matter’ and then the intentional ‘noema’). To say that thought is “intentional” is to say that it is of the nature of thought to be directed toward or about objects. To speak of the “intentional content” of a thought is to speak of the mode or way in which a thought is about an object. Different thoughts present objects in different ways (from different perspectives or under different descriptions) and one way of doing justice to this fact is to speak of these thoughts as having different intentional contents. For Husserl, intentionality includes a wide range of phenomena, from perceptions, judgments, and memories to the experience of other conscious subjects as subjects (inter-subjective experience) and aesthetic experience, just to name a few. Given the pervasive role he takes intentionality to play in all thought and experience, Husserl believes that a systematic theory of intentionality has a role to play in clarifying and founding most other areas of philosophical concern, such as the theory of consciousness, the philosophy of language, the philosophy of logic, epistemology, and the philosophies of action and value. This article presents the key elements of Husserl’s understanding of intentionality and intentional content, specifically as these are developed in his works Logical Investigations and Ideas Pertaining to a Pure Phenomenology and to a Phenomenological Philosophy.

Table of Contents

  1. Intentionality: Background and General Considerations
    1. Intentional Content
  2. Logical Investigations
    1. Intentionality in Logical Investigations
      1. Act-Character
      2. Act-Matter
    2. Intentionality, Meaning and Expression in Logical Investigations
      1. Meaning and Expression
      2. Essentially Occasional Expressions: Indexicals
  3. Ideas Pertaining to a Pure Phenomenology and to a Phenomenological Philosophy: The Perceptual Noema
    1. Noesis and Noema: Terminology and Ontology
    2. Structural Features of the Noema
    3. Systems of Noemata and Explication
    4. Additional Considerations
  4. References and Further Reading
    1. Works by Husserl
    2. Secondary Sources

1. Intentionality: Background and General Considerations

Franz Brentano (1838—1917) is generally credited with having inspired renewed interest in the idea of intentionality, especially in his lectures and in his 1874 book Psychology from an Empirical Standpoint. In this work Brentano is, among other things, concerned to identify the proper sphere or subject matter of psychology. Influenced in various ways by Aristotle’s psychology, by the medieval notion of the intentio of a thought, and by modern philosophical views such as those of Descartes and the empiricists, he identifies intentionality as the mark or distinctive characteristic of the mental. For Brentano this means that every mental phenomenon involves the “intentional inexistence” of an object toward which the mental phenomenon is directed. While every such mental phenomenon has an object, different mental phenomena relate to their objects in different ways depending on whether they are mental acts of presenting something, of judging about something, or of evaluating something as good or bad. Identifying intentionality as the mark of the mental in this way opens up the possibility of studying the mind in terms of its relatedness to objects, the different modes or forms that this relatedness takes (perceiving, imagining, hallucinating, and so forth), and in terms of the relationships that these different modes of intentionality bear to one another (the relationships between presentations, judgments, and evaluations; for example, that every judgment fundamentally depends on a presentation the object of which it is a judgment about). Husserl studied with Brentano from 1884 to 1886 and, along with others such as Alexius Meinong, Kasimir Twardowski, and Carl Stumpf, took away from this experience an abiding interest in the analysis of the intentionality of mind as a key to the clarification of other issues in philosophy.

It is important to note the distinction between intentionality in the sense under discussion here, on the one hand, and the idea of an intention in the sense of an intelligent agent’s goal or purpose in taking a specific action, on the other. The intentionality under consideration here includes an agent’s intentions to do things, but is also much broader, applying to any sort of object-directed thought or experience whatsoever. Thus, while it would be normal to say that “Jack intended to score a point when he kicked the ball toward the goal”, in the sense of ‘intention’ pertinent to Husserl it is equally correct to say that “Jack intended the bird as a blue jay”. The latter is a way of saying that Jack directed his mind toward the bird by thinking of it or perceiving it as a blue jay.

Husserl himself analyzes intentionality in terms of three central ideas: intentional act, intentional object, and intentional content. It is arguably in Husserl’s Logical Investigations that these ideas receive their first systematic treatment as distinct but correlative elements in the structure of thought and experience. This section clarifies these three notions based on Husserl’s main commitments, though not always using his exact terminology.

The intentional act or psychological mode of a thought is the particular kind of mental event that it is, whether this be perceiving, believing, evaluating, remembering, or something else. The intentional act can be distinguished from its object, which is the topic, thing, or state of affairs that the act is about. So the intentional state of seeing a white dog can be analyzed in terms of its intentional act, visually perceiving, and in terms of its intentional object, a white dog. Intentional act and intentional object are distinct since it is possible for the same kind of intentional act to be directed at different objects (perceiving a tree vs. perceiving a pond vs. perceiving a house) and for different intentional acts to be directed at the same object (merely thinking about the Eiffel Tower vs. perceiving the Eiffel Tower vs. remembering the Eiffel Tower). At the same time, the two notions are correlative. For any intentional mental event, it would make no sense to speak of it as involving an act without an intentional object any more than it would to say that the event involved an intentional object but no act or way of attending to that object (no intentional act). The notion of intentionality as a correlation between subject and object is a prominent theme in Husserl’s Phenomenology.

a. Intentional Content

The third element of the structure of intentionality identified by Husserl is the intentional content. It is a matter of some controversy to what extent and in what way intentional content is truly distinct from the intentional object in Husserl’s writings. The basic idea, however, can be stated without too much difficulty.

The intentional content of an intentional event is the way in which the subject thinks about or presents to herself the intentional object. The idea here is that a subject does not just think about an intentional object simpliciter; rather the subject always thinks of the object or experiences it from a certain perspective and as being a certain way or as being a certain kind of thing. Thus one does not just perceive the moon; one perceives it “as bright”, “as half full” or “as particularly close to the horizon”. For that matter, one perceives it “as the moon” rather than as some other heavenly body. Intentional content can be thought of along the lines of a description or set of information that the subject takes to characterize or be applicable to the intentional objects of her thought. Thus, in thinking that there is a red apple in the kitchen, the subject entertains a certain presentation of her kitchen and of the apple that she takes to be in it, and it is in virtue of this that she succeeds in directing her thought toward these things rather than something else or nothing at all. It is important to note, however, that for Husserl intentional content is not essentially linguistic. While intentional content always involves presenting an object in one way rather than another, Husserl maintained that the most basic kinds of intentionality, including perceptual intentionality, are not essentially linguistic. Indeed, for Husserl, meaningful use of language is itself to be analyzed in terms of more fundamental underlying intentional states (this can be seen, for example, throughout LI, I). For this reason characterizations of intentional content in terms of “descriptive content” have their limits in the context of Husserl’s thought.

The distinction between intentional object and intentional content can be clarified by considering puzzles from the philosophy of language, such as the puzzle of informative identity statements. It is quite trivial to be told that Mark Twain is Mark Twain. However, for some people it can be informative and cognitively significant to learn that Mark Twain is Samuel Clemens. The notion of intentional content can be used to explain this. When a subject thinks about the identity statement asserting that Mark Twain is Mark Twain, the subject thinks about Mark Twain in the same way (using the same intentional content; perhaps “the author of Huckleberry Finn”) in association with the name on both the left and right sides of the identity. By contrast, when a subject thinks about the identity statement asserting that Mark Twain is Samuel Clemens, what he learns is that different intentional contents (those associated with the names ‘Mark Twain’ and ‘Samuel Clemens’ respectively) are true of the same intentional object. Cases such as this both motivate the distinction between intentional content and intentional object and can be explained in terms of it.
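
The structure of this explanation can be made explicit in a toy model. Everything below, from the class name to the sample contents, is an illustrative reconstruction rather than anything found in Husserl or Frege:

    # Toy model of the intentional content / intentional object
    # distinction. All names and sample contents are illustrative.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Content:
        description: str   # the way the object is presented

    # Two contents, one object: different ways of thinking of one man.
    twain_content   = Content("the author of Huckleberry Finn")
    clemens_content = Content("the man christened Samuel Clemens in 1835")

    REFERENT = {twain_content: "Samuel Clemens", clemens_content: "Samuel Clemens"}

    def informative(c1, c2):
        """An identity statement is informative when it links distinct
        contents that present the same object."""
        return c1 != c2 and REFERENT[c1] == REFERENT[c2]

    print(informative(twain_content, twain_content))    # False: trivial
    print(informative(twain_content, clemens_content))  # True: cognitively significant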

The notion of intentional content as distinct from intentional object is also important in relation to the issue of thought about and reference to non-existent objects. Examples of this include perceptual illusions, thought about fictional objects such as Hamlet or Lilliput, thought about impossible objects such as round-squares, and thought about scientific kinds that turn out not to exist such as phlogiston. What is common to each of these cases is that it seems possible to have meaningful experiences, thoughts and beliefs about these things even though the corresponding objects do not exist, at least not in any ordinary sense of ‘exist’. Identifying intentional content as a distinct and meaningful element of the structure of intentionality makes it possible for Husserl to explain such cases of meaningful thought about the non-existent in a way similar to that of Gottlob Frege and different from the strategy of his fellow student of Brentano, Alexius Meinong. Approaching issues of intentionality from the perspective of logic and the philosophy of language, Frege handled such cases by drawing a distinction between the sense or meaning and the referent (object denoted) of a term, and then saying that non-referring terms such as ‘Ulysses’ have senses, but no referents (Frege 1948). Meinong, on the other hand, was driven by his commitment to the thesis of intentionality to posit a special category of objects, the non-existing objects or objects that have Nichtsein, as the intentional objects of such thoughts (Meinong 1960). For Husserl, such cases involve an intentional act and intentional content where the intentional content does present an intentional object, but there is no real object at all corresponding to the intentional appearance. Given this, one way of reading the distinction between intentional content and intentional object is as a generalization to all mental acts of Frege’s primarily linguistic distinction between the senses and the referents of terms and sentences (for a defense of this interpretation see Føllesdal 1982, while for discussion and resistance to the view, see Drummond 1998). Husserl’s exact understanding of the ontological situation regarding intentional objects is quite involved and undergoes some changes between Logical Investigations and his later phenomenology, beginning with Ideas Pertaining to a Pure Phenomenology and to a Phenomenological Philosophy. However, throughout his work Husserl is able to make use of the distinction between intentional content and intentional object to handle cases of meaningful thought about the non-existent without having to posit, in Meinongian fashion, special categories of non-existent objects.

The basic structure of Husserl’s account of intentionality thus involves three elements: intentional act, intentional content and intentional object. For Husserl, the systematic analysis of these elements of intentionality lies at the heart of the theory of consciousness, as well as, in varying ways, of logic, language and epistemology.

2. Logical Investigations

Logical Investigations (hereafter ‘Investigations’), which came out in two volumes in the years 1900 and 1901, represents Husserl’s first definitive treatment of intentionality and is the source of the main ideas that would drive much of his later philosophical thinking. The primary project of the Investigations is to criticize a view in the philosophy of logic called “psychologism”, according to which the laws of logic are in some sense natural laws or rules governing the human mind and can thus be studied empirically by psychology. Husserl, notably in agreement with Frege, believed that this view had the undesirable consequences of treating the laws of logic as contingent rather than necessarily true and as being empirically discoverable rather than as known and validated a priori. In the first part of the Investigations, the “Prolegomena to Pure Logic”, Husserl systematically criticizes the psychologistic view and proposes to replace it with his own conception of “pure logic” as the a priori framework for organizing, understanding and validating the results of the formal, natural and social sciences (‘Wissenschaftslehre’ is Husserl’s term for the “theory of scientific theory in general” that pure logic was to found). For Husserl, pure logic is an a priori system of necessary truths governing entailment and explanatory relationships among propositions that does not in any way depend on the existence of human minds for its truth or validity. However, Husserl maintains that the task of developing a human understanding of pure logic requires investigations into the nature of meaning and language, and into the way in which conscious intentional thought is able to comprehend meanings and come to know logical (and other) truths. Thus the bulk of a work that is intended to lay the foundations for a theory of logic as a priori, necessary, and completely independent of the composition or activities of the mind is devoted precisely to systematic investigations into the way in which language, meaning, thought, and knowledge are intentionally structured by the mind. While this tension is more apparent than real, it was a major source of criticism directed against the first edition of Logical Investigations, one which Husserl was concerned to clarify and defend himself against in his subsequent writings and in the second edition of the Investigations in 1913. Pertinent here is what Husserl had to say about language and expression (LI, I) and about intentionality itself (LI, V & VI).

a. Intentionality in Logical Investigations

In Logical Investigations Husserl developed a view according to which conscious acts are primarily intentional, and a mental act is intentional just in case it has an act-quality and an act-matter. Introducing this key distinction, Husserl writes:

The two assertions ‘2 x 2 = 4’ and ‘Ibsen is the principal founder of modern dramatic realism’, are both, qua assertions, of one kind; each is qualified as an assertion, and their common feature is their judgment-quality. The one, however, judges one content and the other another content. To distinguish such ‘contents’ from other notions of ‘content’ we shall speak here of the matter (material) of judgment. We shall draw similar distinctions between quality and matter in the case of all acts (LI, V § 20, p. 586).

An additional notion in the Investigations, which grows in importance in Husserl’s later work and will be discussed here, is the act-character. Husserl views act-quality, act-matter and act-character as mutually dependent constituents of a concrete particular thought. Just as there cannot be color without saturation, brightness and hue, so for Husserl there cannot be an intentional act without quality, matter and character. The quality of an act (called ‘intentional act’ above) is the kind of act that it is, whether perceiving, imagining, judging, wishing, and so forth. The matter of an act is what has been called above its intentional content: the mode or way in which an object is thought about, for example a house intended from one perspective rather than another, or Napoleon thought of first as “the victor at Jena”, then as “the vanquished at Waterloo”. The character of an act can be thought of as a contribution of the act-quality that is reflected in the act-matter. Act-character has to do with whether the content of the act, the act-matter, is posited as existing or as merely thought about, and with whether the act-matter is taken as given with evidence (fulfillment) or without evidence (emptily intended). The next two sub-sections deal with act-character and act-matter respectively.

i. Act-Character

In the Investigations and in his later work, Husserl sometimes writes of an additional dimension in the analysis of intentionality, which he first calls the “act-character” and then in later writings the “doxic and ontic modalities” (For the former, see for example LI, VI § 7; for the latter, see Ideas, Chapter 4 particularly §§ 103—10). In the Investigations, act-character includes such things as whether the intentional act is merely one of reflecting on a possibility (a “non-positing act”) or one of judging or asserting that something is the case (a “positing act”), as well as the degree of evidence that is available to support the intention of the act as fulfilled or unfulfilled (as genuinely presenting some object in just the way that the act-matter suggests, or not). It seems clear that the character of an act is ultimately traceable to the act-quality, since it has to do with the way in which an act-matter is thought about rather than with what that act-matter itself presents. However, it is a contribution of the act-quality that casts a shadow or a halo around the matter, giving the content of the act a distinctive character. This becomes clearer through consideration of particular cases.

Consider first positing and non-positing acts. When a subject wonders whether or not the train will be on time, the content or act-matter of her intention is that of the train being on time. However, in this case the subject is not positing that the train will be on time, but merely reflecting on this in a non-committal (“non-positing”) way as a possibility. The same difference is present in the case of merely wondering whether Bob is the murderer on the one hand (non-positing act), and forming the firm judgment that he is on the other (positing act) (on positing and non-positing acts, see LI, V §§ 38—42).

The character of an intentional act also has to do with whether it is an “empty” merely signitive intention or whether it is a “non-empty” or fulfilled intention. Here what is at issue is the extent to which a subject has evidence of some sort for accepting the content of their intention. For example, a subject could contemplate, imagine or even believe, already at eleven in the morning, that “the sunset today will be beautiful with few clouds and lots of orange and red colors”. At this point the intention is an empty one because it merely contemplates a possible state of affairs for which there is no intuitive (experiential) evidence. When the same subject witnesses the sunset later in the day, her intention will either be fulfilled (if the sunset matches what she thought it would be like) or unfulfilled (if the sunset does not match her earlier intention). For Husserl, the difference here too does not have to do with the content or act-matter itself, but rather with the evidential character of the intention (LI VI, §§ 1—12).

Importantly, the distinctions between positing and non-positing acts on the one hand and between empty and fulfilled intentions on the other are separate. It would be possible for a subject to posit the existence of something for which she had no evidence or fulfillment (perhaps the belief that her favorite candidate will win next year’s election), just as it would be possible for a subject to not posit or affirm something for which she did have fulfillment or evidence (such as refraining from believing that water causes sticks immersed in it to bend, in spite of immediate perceptual information supporting this).
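
Since the two distinctions are independent, an act’s character can be modeled, very roughly, as two separate dimensions. The following sketch is an illustrative formalization, not Husserl’s own notation:

    # Illustrative model of act-character as two independent
    # dimensions: positing vs. non-positing, fulfilled vs. empty.

    from dataclasses import dataclass
    from itertools import product

    @dataclass
    class IntentionalAct:
        matter: str      # the act-matter: how the object is presented
        positing: bool   # posited as existing vs. merely entertained
        fulfilled: bool  # backed by intuitive evidence vs. emptily intended

    # All four combinations are coherent, which is the point of the
    # election and bent-stick examples above.
    for positing, fulfilled in product([True, False], repeat=2):
        print(IntentionalAct("the train is on time", positing, fulfilled))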

ii. Act-Matter

As noted above, the matter of an intentional act is its content: the way in which it presents the intentional object as being. The act-matter is:

that element in an act which first gives it reference to an object, and reference so wholly definite that it not merely fixes the object meant in a general way, but also the precise way in which it is meant. (LI, V § 20, p. 589, italics Husserl’s)

So the act-matter both determines to what object, if any, a thought refers, and determines how the thought presents that object as being. For Husserl, the matter of an intentional act does not consist of only linguistic descriptive content. The notion of act-matter is simply that of the significant, object-directed mode of an act, and an act-matter can be perceptual, imaginative, or memorial; linguistic or non-linguistic; particular and indexical, or general, context-neutral and universal. This makes intentionality and intentional content (act-matter) the fundamental targets of analysis, with the theory of language and expression to be analyzed in terms of these notions rather than the other way around. Husserl is thus committed to the notion that intentionality is primary and language secondary, and so also to the view that meaningful non-linguistic intentional thought and experience are both possible and common (LI, I §§ 9—11, 19, & 20).

Husserl’s understanding of the metaphysics of act-matter is also important. Motivated by his anti-psychologism, he wants to treat meanings as objective and independent of the minds of particular subjects. Because of this, Husserl views meanings in the Investigations as “ideal species”, a kind of abstract entity akin to a universal. However, having done this, Husserl also needs to explain how it is that these abstract meanings can play a role in the intentional thought of actual subjects. Husserl’s solution is to say that meanings are ideal species or kinds of act-matter that are then instantiated in the actual act-matter of particular intentional subjects when they think the relevant thoughts. Thus, just as there is an ideal species or universal for shape, which gets instantiated in particular instances of shaped objects in the world, so there is an ideal species or universal of the act-matter “2+2=4”, which gets instantiated in the act-matter of a particular subject when he thinks this thought. Whereas Fregean accounts deal with the fact that one individual can have the same thought at different times and different individuals can think about the same thing at any time by positing a single abstract sense that is the numerically identical content of all of their thoughts, Husserl views particular act-matters or contents as instances of ideal act-matter species. On Husserl’s view, then, two subjects are able to think about the same thing in the same way when both of them instantiate exactly similar instances of a single kind of content or act-matter. For example, if John and Sarah are both thinking about how they would like to see the Twins win the 2008 World Series in baseball, they are having the same thought and thinking about the same objects in virtue of instantiating exactly similar act-matter instances of the single act-matter species “the Twins win the 2008 World Series in baseball” (LI, I §§ 30—4, V §§ 21 & 45).
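
The type/instance relation familiar from programming offers a rough analogy (and no more than that) for this view: one class, many exactly similar instances, just as one act-matter species is instantiated in the episodic thoughts of many subjects. The sketch below is an illustrative analogy, not Husserl’s own formulation:

    # Rough programming analogy for the ideal-species view: a single
    # act-matter species, instantiated separately in two subjects.

    class ActMatterSpecies:
        """An ideal, subject-independent content type."""
        def __init__(self, content):
            self.content = content

        def instantiate(self, subject):
            return ActMatterInstance(self, subject)

    class ActMatterInstance:
        """A particular subject's concrete instantiation of a species."""
        def __init__(self, species, subject):
            self.species, self.subject = species, subject

    species = ActMatterSpecies("the Twins win the 2008 World Series")
    john_token  = species.instantiate("John")
    sarah_token = species.instantiate("Sarah")

    # Distinct instances, one species: on this model John and Sarah
    # share a thought by instantiating the same content type.
    print(john_token is sarah_token)                  # False
    print(john_token.species is sarah_token.species)  # True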

b. Intentionality, Meaning and Expression in Logical Investigations

Largely motivated by his concern with developing a pure logic, Husserl devotes the entire first Logical Investigation, “Expression and Meaning”, to an analysis of issues of language, linguistic meaning and linguistic reference. Husserl’s discussion here is systematic and wide-ranging, covering many issues that are also of concern to Frege in his analysis of language and that have continued to spur discussion in the philosophy of language up to the present. These include the distinction between linguistic types and tokens, the distinction between words and sentences and the meanings that these express, the distinction between sentence meaning and speaker meaning, the meaning and reference of proper names, and the function of indexicals and demonstratives. As noted above, Husserl takes the intentionality of thought to be fundamental and the meaning-expressing and reference-fixing capabilities of language to be parasitic on more basic features of intentionality. Here the main features of Husserl’s intentionality-based view of language are discussed.

i. Meaning and Expression

Husserl is interested in analyzing the meaning and reference of language as part of his project of developing a pure logic. This leads him to focus primarily on declarative sentences from ordinary language, rather than on other kinds of potentially meaningful signs (such as the way in which smoke normally indicates or is a sign of fire) and gestures (such as the way in which a grimace might indicate or convey that someone feels pain or is uncomfortable). Husserl thus uses ‘expression’ to refer to declarative sentences in natural language and to parts thereof, such as names, general nouns, indexicals, and so forth (LI, I §§ 1—5).

Husserl maintains that the meaning of an expression cannot be identical to the expression for two reasons. First, expressions in different languages, such as ‘the cat is friendly’ and ‘il gatto è simpatico’ are linguistically different, but have the same meaning. Additionally, the same linguistic expression, such as ‘I am going to the bank’ can have different meanings on different occasions (due in this case to the ambiguity of the word ‘bank’). Thus sameness of word or linguistic expression is neither necessary nor sufficient for sameness of meaning (LI, I §§ 11 & 12).

Husserl also maintains that the meaning of a linguistic expression cannot be identical with its referent or referents. In support of this Husserl appeals to phenomena such as informative identity statements and meaningful linguistic expressions that have no referent, among others. An example of the first sort of case would be Frege’s famous ‘Hesperus is Phosphorus’, where ‘Hesperus’ means “the evening star” and ‘Phosphorus’ means “the morning star”. Both ‘Hesperus’ and ‘Phosphorus’ refer to the planet Venus and so if the meaning of a term just is the object that it refers to, then anyone who knows that Hesperus is Hesperus should also know that Hesperus is Phosphorus, yet clearly this is not the case. Husserl’s own explanation for this would be that a subject who found ‘Hesperus is Phosphorus’ informative would do so because he associated different act-matters or intentional contents with each of these names. Thus Husserl, like Frege, distinguishes the meaning of a term or expression both from that term itself and from the object or objects to which the term refers. Husserl identifies these distinctive linguistic meanings as kinds of intentional act-matter (LI, I §§ 13 & 14).

In the Investigations Husserl describes the normal use of an expression, such as ‘the weather is cool today’, in the following way. A subject who utters this expression to a companion is in an intentional state, which includes an act-matter or intentional content that presents the weather as being cool today. This act-matter instantiates an ideal species or act-matter type “the weather is cool today” and in virtue of doing so directs the utterer’s attention to the actual state of affairs regarding the weather. It is in virtue of these facts about the utterer’s intentional states that the words express, for him, the meaning that they do (which is not, of course, to rule out the possibility of miscommunication; for Husserl the description here is just the standard case). The subject performing the utterance does, in principle, three things for his interlocutor. First, the subject’s utterance “expresses” the ideal meaning “the weather is cool today”. Second, assuming the interlocutor grasps that this is what is being expressed, her attention will itself be directed to the referent of this ideal sense, namely the state of affairs involving the weather today (her act-matter will then also instantiate the relevant ideal act-matter species). Third, the subject will, in making his utterance, “intimate” to his interlocutor that he has certain beliefs or is undergoing certain mental states or experiences. This last point is very important for Husserl. He maintains that in normal cases what a subject intimates in uttering an expression (that he believes that the weather is cool today or that he fears that his country will intervene) is not part of the meaning of that expression, even though it is something that the interlocutor will be able to understand on the basis of the subject’s utterance. It is only in cases where a subject is making an assertion about his experiences, attitudes or mental states (such as ‘I doubt that things will improve this year’) that expressed meaning and intimated meaning coincide (on intimation, see LI, I §§ 7 & 8; the majority of the points summarized here are in the first chapter of LI, I, which is §§ 1—16).

ii. Essentially Occasional Expressions: Indexicals

Husserl recognized clearly the need for a distinction between what he called “objective” expressions on the one hand, and those that are “essentially occasional” on the other. An example of an objective expression would be a statement concerning logic, mathematics or the sciences whose meaning is fixed regardless of the context in which it is used (for example ‘The Pythagorean Theorem is a theorem of geometry’ or ‘7+5=12’). An example of an essentially occasional expression would be a sentence such as ‘I am hungry’, which seems in some sense to change its meaning on different occasions of utterance, depending on who is speaking. According to Husserl, essentially occasional expressions include both indexicals (‘I’, ‘you’, ‘here’, ‘now’, and so forth) and demonstratives (‘this’, ‘that’, and so forth). Such expressions have two facets of meaning. The first is what Husserl calls a constant “semantic function” associated with particular indexical expressions. For example, “It is the universal semantic function of the word ‘I’ to designate whoever is speaking…” (LI, I §26, p. 315). Husserl recognizes, however, that the sentences expressing these semantic functions cannot simply be substituted for indexicals without affecting the meaning of sentences containing them. A subject who believes “whoever is now speaking is hungry” effectively has an existentially quantified belief to the effect that the person, whoever he or she is, who is now speaking is hungry. In order to capture what such a subject would mean when he says ‘I am hungry’ it is necessary to somehow make it clear that the individual quantified over is indeed the person now speaking, but there seems to be no way to do this other than to re-insert the indexical ‘I’ itself in the sentence. This makes it necessary to identify a second facet or component of indexical content.

To deal with this, Husserl proposes a distinction between the semantic function or “indicating meaning” of indexicals, which remains constant from use to use, and the “indicated” meaning of indexicals, which is fundamentally cued to certain features of the speaker and context of utterance. Thus the “indicating meaning” of ‘I’ is always “whoever is now speaking”, but the indicated meaning of its use on a given occasion is keyed to the “self-awareness” or “self-presentation” of the speaker on that occasion. In general, the indicating meaning of an indexical will specify some general relationship between the utterance of a sentence and some feature of the speaker’s conscious awareness or perceptually given environment, while the indicated meaning will be determined by what the speaker is actually aware of in the context in which the sentence is uttered. In the case of many indexicals, such as ‘you’ and ‘here’, the indicated meaning may be supplied in part by demonstrative pointing to features of the immediate perceptual environment. Thus, Husserl writes, “The meaning of ‘here’ is in part universal and conceptual [semantic function/indicating meaning], inasmuch as it always names a place as such, but to this universal element the direct place-presentation [indicated meaning] attaches, varying from case to case” (LI I § 26, pp. 317—18). Husserl thus has a relatively clear understanding of some of the key issues surrounding indexical thought and reference that have been recently discussed in the work of philosophers of language such as John Perry (1977, 1979), as well as an account of how indexical thought and reference work. The question of whether or not this account is adequate to resolve all of the issues raised by contemporary discussions of indexicals and demonstratives, however, is one that goes beyond the scope of this article (for discussion of this issue in Husserl’s philosophy see Smith and McIntyre 1982, pp. 194—226).
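
One natural way to formalize the two facets is to treat the indicating meaning as a constant function from contexts of utterance to indicated meanings. The sketch below is a rational reconstruction under that assumption, with invented names and contexts; it is not Husserl’s own apparatus:

    # Reconstruction of the indicating/indicated meaning distinction:
    # each indexical's indicating meaning is modeled as a constant
    # function from contexts of utterance to indicated meanings.

    INDICATING_MEANINGS = {
        "I":    lambda ctx: ctx["speaker"],   # "whoever is now speaking"
        "here": lambda ctx: ctx["place"],     # the place of utterance
        "now":  lambda ctx: ctx["time"],      # the time of utterance
    }

    def indicated_meaning(indexical, context):
        """Apply the constant indicating meaning to a particular
        context to recover the context-bound indicated meaning."""
        return INDICATING_MEANINGS[indexical](context)

    ctx1 = {"speaker": "Anna", "place": "Freiburg", "time": "morning"}
    ctx2 = {"speaker": "Karl", "place": "Göttingen", "time": "evening"}

    print(indicated_meaning("I", ctx1))  # Anna
    print(indicated_meaning("I", ctx2))  # Karl: one indicating meaning,
                                         # two indicated meanings

On this reconstruction the constant function plays the role of the “universal semantic function”, while its value in a particular context plays the role of the indicated meaning that “varies from case to case”.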

3. Ideas Pertaining to a Pure Phenomenology and to a Phenomenological Philosophy: The Perceptual Noema

In the year 1913 Husserl published both a revised edition of Logical Investigations and the Ideas Pertaining to a Pure Phenomenology and to a Phenomenological Philosophy (hereafter, Ideas). Between the first publication of the Investigations and the works of 1913 the main transition in Husserl’s thought is a change in emphasis from the primary project of laying the foundations of a pure a priori logic to the primary project of developing a systematic phenomenology of consciousness with the theory of intentionality at its core. In the Ideas, Husserl proposes the systematic description and analysis of first person consciousness, focusing on the intentionality of this consciousness, as the fundamental first step in both the theory of consciousness itself and, by extension, in all other areas of philosophy as well. With hints of the idea already present in the first edition of Logical Investigations, by 1913 Husserl has come to see first person consciousness as epistemologically and so logically prior to other forms of knowledge and inquiry. Whereas Descartes took his own conscious awareness to be epistemically basic and then immediately tried to infer, on the basis of his knowledge of this awareness, the existence of God, the existence of an external world, and much else, Husserl takes first-person conscious awareness as epistemically basic and then proposes the systematic study of this consciousness itself as a fundamental philosophical task. In order to lay the foundations for this project Husserl proposes a methodology known as the phenomenological reduction.

The phenomenological reduction involves performing what Husserl calls the epoché, which is carried out by “bracketing”, setting in abeyance, or “neutralizing” the existential thesis of the “natural attitude”. The idea behind this is that most people, most of the time, do not focus their attention on the structure of their experience itself but rather look past this experience and focus their attention and interests on objects and events in the world, which they take to be unproblematically real or existent. This assumption about the unproblematic existence of the objects of experience is the “existential thesis” of the natural attitude. The purpose of the epoché is not to doubt or reject this thesis, but simply to set it aside or put it out of play so that the subject engaging in phenomenological investigation can reorient the focus of her attention to her experiences qua experiences, just as they are experienced. This amounts to a reorientation of the subject’s intentional focus from the natural to the phenomenological attitude. A subject who has performed the epoché and adopted the phenomenological attitude is in a position to describe objectively the features of her experience as she experiences them, that is, the phenomena. Questions about the real existence of particular objects of experience, and even of the world itself, are thus set aside in order to make way for the systematic study of first person conscious experience (Ideas, §§ 27–32; Natanson 1973, chapters 2 & 3).

Distinct from the phenomenological reduction, but important for Husserl’s phenomenological project as a whole, is what is sometimes called the “eidetic reduction”. The eidetic reduction involves not just describing the idiosyncratic features of how things appear to one, as might occur in introspective psychology, but focusing on the essential characteristics of the appearances and their structural relationships and correlations with one another. Husserl calls insights into the essential features of kinds of things “eidetic intuitions”. Such eidetic intuitions, or intuitions into essence, are the result of a process Husserl calls ‘eidetic’ or ‘free’ variation in imagination. It involves focusing on a kind of object, such as a triangle, and systematically varying the features of that object, reflecting at each step on whether the object remains, in spite of its altered feature(s), an instance of the kind under consideration. Each time the object survives an imaginative alteration, the altered feature is revealed as inessential, while each alteration that results in the object intuitively ceasing to instantiate the kind (such as the addition of a fourth side to a triangle) reveals the corresponding feature as necessary to that kind. Husserl maintained that this procedure can incrementally reveal elements of the essence of a kind of thing, the ideal case being one in which intuition of the full essence of the kind occurs. The eidetic reduction complements the phenomenological reduction insofar as it is directed specifically at the task of analyzing the essential features of conscious experience and intentionality. The considerations leading to the initial positing of the distinction between intentional act, intentional object, and intentional content would, according to Husserl, be examples of this method at work, and of some of its results in the domain of the mental. Whereas the purpose of the phenomenological reduction is to disclose and thematize first person consciousness so that it can be described and analyzed, the purpose of the eidetic reduction is to focus phenomenological investigation more precisely on the essential or invariant features of conscious intentional experience (Ideas, §§ 34 & 69–71; Natanson 1973, chapter 4).
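
Although Husserl conceives of free variation as an imaginative, intuitive procedure rather than a mechanical one, its logical skeleton can be displayed as a simple loop: vary one feature at a time and check whether the varied object still instantiates the kind. The following sketch is a toy illustration under invented assumptions (a feature list for “triangle” and a stand-in membership test); it is not a rendering of Husserl’s method, only of its surface structure.

```python
# Illustrative sketch only: the bare logical skeleton of eidetic (free)
# variation. The feature set and membership test are invented; Husserl's
# procedure is carried out in imagination, not computation.

# A candidate object, described by its features.
triangle = {"sides": 3, "closed": True, "color": "red", "size": "large"}

def is_triangle(obj) -> bool:
    # Stand-in for the intuitive judgment "is this still a triangle?"
    return obj.get("sides") == 3 and obj.get("closed", False)

# Hypothetical variations of each feature, tried one at a time.
variations = {
    "sides": [4, 5],
    "closed": [False],
    "color": ["blue", "green"],
    "size": ["small"],
}

essential, inessential = [], []
for feature, alternatives in variations.items():
    # If the object survives every variation of this feature, the
    # feature is revealed as inessential; if some variation makes it
    # cease to instantiate the kind, the feature is (provisionally)
    # revealed as necessary to the kind.
    survives = all(is_triangle({**triangle, feature: alt}) for alt in alternatives)
    (inessential if survives else essential).append(feature)

print("essential:", essential)      # ['sides', 'closed']
print("inessential:", inessential)  # ['color', 'size']
```

The loop makes vivid why Husserl regards the procedure as incremental: each variation settles the status of only one feature, and the full essence of the kind is approached only as the space of variations is exhausted.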

There is much debate about the exact significance, especially metaphysical and epistemological, of Husserl’s shift in focus and his introduction of the methodology of the phenomenological reduction in the Ideas. What is important here is that the notions of intentionality and intentional content remain central to Husserl’s project, so many of the descriptions and results of the Investigations remain relevant to the Ideas. However, Husserl both modifies and expands his views about intentionality, as well as the kinds of analyses of it that he pursues. Whereas in the Investigations Husse