This article surveys philosophical issues related to the nature and scope of animal mentality, as well as to our commonsense understanding and scientific knowledge of animal minds. Two general sets of problems have played a prominent role in defining the field and will take center stage in the discussion below: (i) the problems of animal thought and reason, and (ii) the problems of animal consciousness.
The article begins by examining three historically influential views on animal thought and reason. The first is David Hume's analogical argument for the existence of thought and reason in animals. The second is René Descartes' two arguments against animal thought and reason. And the third is Donald Davidson's three arguments against ascribing thought and reason to animals.
Next, the article examines contemporary philosophical views on the nature and limits of animal reason by Jonathan Bennett, José Bermúdez, and John Searle, as well as four prominent arguments for the existence of animal thought and reason: (i) the argument from the intentional systems theory by Daniel Dennett, (ii) the argument from common-sense functionalism by Jerry Fodor, Peter Carruthers, and Stephen Stich, (iii) the argument from biological naturalism by John Searle, and (iv) the argument from science by Colin Allen and Marc Bekoff, and José Bermúdez.
The article then turns to the important debate over animal consciousness. Three theories of consciousness—the inner-sense theory, the higher-order thought theory, and the first-order theory—are examined in relation to what they have to say about the possibility and existence of animal consciousness.
The article ends with a brief description of other important issues within the field, such as the nature and existence of animal emotions and propositional knowledge, the status of Lloyd Morgan’s canon and other methodological principles of simplicity used in the science of animal minds, the nature and status of anthropomorphism employed by scientists and lay folk, and the history of the philosophy of animal minds. The field has had a long and distinguished history and has of late seen a revival.
Given what we know or can safely assume to be true of their behaviors and brains, can animals have thought and reason? The answer depends in large measure on what one takes thought and reason to be, as well as on which animals one is considering. Philosophers have held various views about the nature and possession conditions of thought and reason and, as a result, have offered various arguments for and against thought and reason in animals. Below are the most influential of such arguments.
David Hume (1711-1776) famously proclaimed that “no truth appears to me more evident, than that beasts are endow’d with thought and reason as well as men” (1739/1978, p. 176). The type of thought that Hume had in mind here was belief, which he defined as a “lively idea” or “image” caused by (or associated with) a prior sensory experience (1739/1978, p. 94). Reason Hume defined as a mere disposition or instinct to form associations among such ideas on the basis of past experience. In the section of A Treatise of Human Nature entitled, “Of the Reason of Animals,” Hume argued by analogy that since animals behave in ways that closely resemble the behaviors of human beings that we know to be caused by associations among ideas, animals also behave as a result of forming similar associations among ideas in their minds. Given Hume’s definitions of “thought” and “reason,” he took this analogical argument to give “incontestable” proof that animals have thought and reason.
A well-known problem with Hume’s argument is that “belief” does not appear to be definable in terms of vivid ideas presented to consciousness. Beliefs have propositional content, whereas ideas, as Hume understood them, do not (or need not). To have a belief or thought about some object (for example, the color red) always involves representing some fact or proposition about it (for example, that red is the color of blood), but one can entertain an image of something (for example, the color red) without representing any fact or proposition about it. Also, beliefs aim at the truth: they represent states of affairs as being the case, whereas ideas, even vivid ideas, do not. Upon looking down a railway track, for instance, one could close one’s eyes and entertain a vivid idea of the tracks as they appeared a moment ago (that is, as converging in the distance) without thereby believing that the tracks actually converge. And, it is further argued, insofar as “belief” fails to be definable in terms of vivid ideas presented to consciousness, “reason” fails to be definable in terms of a disposition to form associations among such ideas; for whatever else reason might be, so the argument goes, it is surely a relation among beliefs. Finally, and independently of Hume’s definitions of “belief” and “reason,” there is a serious question about how incontestable his analogical proof is, since similar types of behaviors can often be caused by very different types of processes. Toy robotic dogs, computers, and even radios behave in ways that are similar to the ways that human beings behave when we have vivid ideas presented to our consciousness, but few would take this fact alone as incontestable proof that these objects act as a result of vivid ideas presented to their consciousness (Searle 1994).
Equally as famous as Hume’s declaration that animals have thought and reason is René Descartes’ (1596-1650) declaration that they do not. “[A]fter the error of those who deny God,” Descartes wrote, “there is none that leads weak minds further from the straight path of virtue than that of imagining that the souls of beasts are of the same nature as our own” (1637/1988, p. 46). Descartes gave two independent arguments for his denial of animal thought and reason, which have come to be called his language-test argument and his action-test argument, respectively (Radner & Radner 1989).
Not surprisingly, Descartes meant something different from Hume by “thought.” In the context of denying it of animals, Descartes appears to take the term to stand for occurrent thought—that is, thoughts that one entertains, brings to mind, or is suddenly struck by (Malcolm 1973). Normal adult human beings, of course, express their occurrent thoughts through their declarative speech; and declarative speech and occurrent thoughts share some important features. Both, for example, have propositional content, both are stimulus independent (that is, thoughts can occur to one, and declarative speech can be produced, quite independently of what is going on in one’s immediate perceptual environment), and both are action independent (that is, thoughts can occur to one, and declarative speech can be produced, that are quite irrelevant to one’s current actions or needs). In light of these commonalities, it is understandable why Descartes took declarative speech to be “the only certain sign of thought hidden in a body” (1649/1970, pp. 244-245).
In addition to taking speech to be thought’s only certain sign, Descartes argued that the absence of speech in animals could only be explained in terms of animals lacking thought. Descartes was well aware that animals produce calls, cries, songs, and various gestures that function to express their “passions,” but, he argued, they never produce anything like declarative speech in which they “use words, or put together other signs, as we do in order to declare our thoughts to others” (1637/1988, p. 45). This fact, Descartes reasoned, could not be explained in terms of animals lacking the necessary speech organs, since, he argued, speech organs are not required, as evidenced by the fact that humans born “deaf” or “dumb” typically invent signs to engage in declarative speech, and some animals (for example, magpies and parrots) who have the requisite speech organs never produce declarative speech; nor could it be explained as a result of speech requiring a great deal of intelligence, since even the most “stupid” and “insane” human beings are capable of it; and neither could it be explained, as it is in the case of human infants who are incapable of speech but nevertheless possess thought, in terms of animals failing to develop far enough ontogenetically, since “animals never grow up enough for any certain sign of thought to be detected in them” (1649/1970, p. 251). Rather, Descartes concluded, the best explanation for the absence of speech in animals is the absence of what speech expresses—thought. There are various places in his writings where Descartes appears to go on from this conclusion to maintain that since all modes of thinking and consciousness depend upon the existence of thought, animals are devoid of all forms of thinking and consciousness and are nothing but mindless machines or automata. It should be noted, however, that not every commentator has accepted this interpretation (see Cottingham 1978).
Various responses have been given to Descartes’ language-test argument. Malcolm (1973), for example, argued that dispositional thinking is not dependent upon occurrent thought, as Descartes seemed to suppose, and is clearly possessed by many animals. The fact that Fido cannot entertain the thought, the cat is in the tree, Malcolm argued, is not a reason to doubt that he thinks that the cat is in the tree. Others (Hauser et al. 2002), following Noam Chomsky, have argued that the best explanation for the absence of speech in animals is not the absence of occurrent thought but the absence of the capacity for recursion (that is, the ability to produce and understand a potentially infinite number of expressions from a finite array of expressions). And others (Pepperberg 1999; Savage-Rumbaugh et al. 1998; Tetzlaff & Rey 2009) have argued that, contrary to Descartes and Chomsky, some animals, such as grey parrots, chimpanzees, and honeybees, possess the capacity to put together various signs in order to express their thoughts. Finally, it has been argued that there are behaviors other than declarative speech, such as insight learning, that can reasonably be taken as evidence of occurrent thought in animals (see Köhler 1925; Heinrich 2000).
Whereas Descartes’ principal aim in his language-test argument was to prove that animals lack thought, his principal aim in his action-test argument is to prove that animals lack reason. By “reason,” Descartes meant “a universal instrument which can be used in all kinds of situations” (1637/1988, p. 44). For Descartes, to act through reason is to act on general principles that can be applied to an open-ended number of different circumstances. Descartes acknowledged that animals sometimes act in accordance with such general rules of reason (for example, as when the kingfisher is said to act in accordance with Snell’s Law when it dives into a pond to catch a fish (see Boden 1984)), but he argued that this does not show that they act for these reasons, since animals show no evidence of transferring this knowledge of the general principles under which their behaviors fall to an open-ended number of novel circumstances.
Some researchers and philosophers have accepted Descartes’ definition of “reason” but have argued that some animals do show the capacity to transfer their general knowledge to a wide (or wide enough) range of novel situations. For example, honey bees that were trained to fly down a corridor that had the same (or different) color as the entry room into which they had initially flown automatically transferred this knowledge to the novel stimulus dimension of smell: those that were trained to choose the corridor with the same color flew down the corridor with the same smell as the entry room; and those that were trained to choose the corridor with a different color flew down the corridor with a different smell from the entry room. It is difficult to resist interpreting the bees’ performance here, as the researchers do, in terms of their grasping and then transferring the general rule, “pick the same/different feature” (Giurfa et al. 2001). Other researchers and philosophers, however, have objected to Descartes’ definition of “reason.” They argue that reason is not, as Descartes conceived it, a universal instrument but is more like a Swiss army knife in which there is a collection of various specialized capacities dedicated to solving problems in particular domains (Hauser 2000; Carruthers 2006). On this view of intelligence, sometimes called the massive modularity thesis, subjects have various distinct mechanisms, or modules, in their brains for solving problems in different domains (for example, a module for solving navigation problems, a module for solving problems in the physical environment, a module for solving social problems within a group, and so on). It is not to be expected on this theory of intelligence that an animal capable of solving problems in one domain, such as exclusion problems for food, should be capable of solving similar problems in a variety of other domains, such as exclusion problems for predators, mates, and offspring.
Therefore, on the massive modularity thesis, the fact that “many animals show more skill than we do in some of their actions, yet the same animals show none at all in many others” is not evidence, as Descartes saw it (1637/1988, p. 45), that animals lack intelligence and reason but that their intelligence and reason are domain specific.
No 20th century philosopher is better known for his denial of animal thought and reason than Donald Davidson (1917-2003). In a series of articles (1984, 1985, 1997), Davidson put forward three distinct but related arguments against animal thought and reason: the intensionality test, the argument from holism, and his main argument. Although Davidson’s arguments are not much discussed these days (for exceptions, see Beisecker 2001; Glock 2000; Fellows 2000), they were quite influential in shaping the direction of the contemporary debate in philosophy on animal thought and reason and continue to pose a challenging skeptical position on this topic, which makes them deserving of close examination.
The intensionality test rests on the assumption that the contents of beliefs (and thought in general) are finer grained than the states of affairs they are about. The belief that Benjamin Franklin was the inventor of bifocals, for example, is not the same as the belief that the first postmaster general of the US was the inventor of bifocals, even though both beliefs are about the same state of affairs. This fine-grained nature of belief content is reflected in the sentences we use to ascribe them. Thus, the sentence, “Sam believes that Benjamin Franklin was the inventor of bifocals,” may be true while the sentence, “Sam believes that the first postmaster general of the US was the inventor of bifocals,” may be false. Belief ascriptions that have this semantic feature—that is, their truth value may be affected by the substitution of co-referring expressions within their “that”-clauses—are called intensional (or semantically opaque). The reason that is typically given for why belief ascriptions are intensional is that their purpose is to describe the way the subject thinks or conceives of some object or state of affairs. Belief ascriptions with this purpose are called de dicto ascriptions, as opposed to de re ascriptions (see below).
Our de dicto belief ascriptions to animals are unjustified, Davidson argued, since for any plausible de dicto belief ascription that we make there are countless others and no principled way of deciding which is the correct way of describing how the animal thinks. Take, for instance, the claim that Fido believes that the cat is in the tree. It seems that one could just as well have said that Fido believes that the small furry object is in the tree, or that the small furry object is in the tallest object in the yard, and so on. And yet there does not appear to be any objective fact of the matter that would determine the correct translation into our language of the way Fido thinks about the cat and the tree. Davidson concludes that “unless there is behaviour that can be interpreted as speech, the evidence will not be adequate to justify the fine distinctions we are used to making in attribution of thought” (1984, p. 164).
Some philosophers (Searle 1994; McGinn 1982) have interpreted Davidson’s argument here as aiming to prove that animals cannot have thought on the basis of a verificationist principle which holds that if we cannot determinately verify what a creature thinks, then it cannot think. Such philosophers reject this principle on the grounds that absence of proof of what is thought is not thereby proof of the absence of thought. But Davidson himself states that he is not appealing to such a principle in his argument (1985, p. 476), and neither does he say that he takes the intensionality test to prove that animals cannot have thought. Rather, he takes the argument to undermine our intuitive confidence in our ascriptions of de dicto beliefs to animals.
However, even on this interpretation of the intensionality test, objections have been raised. Some philosophers (Armstrong 1973; Allen & Bekoff 1997; Bermúdez 2003a, 2003b) have argued that, contrary to Davidson’s claim, there is a principled way of deciding among the alternative de dicto belief ascriptions to animals—by scientifically studying their discriminatory behaviors under various conditions and by stipulating the meanings of the terms used in our de dicto ascriptions so that they do not attribute more than what is necessary to capture the way the animal thinks. Although at present we may not be completely entitled to any one of the many de dicto belief ascriptions to animals, according to this view, there is no reason to think that we could not come to be so entitled through future empirical research on animal behavior and by the stipulation of the meanings of the terms used in our belief ascriptions. Also, it is important to mention that Bermúdez (2003a; 2003b) has developed a fairly well worked out theory of how to make de dicto ascriptions to animals that takes the practice of making such attributions to be a form of success semantics—”the idea that true beliefs are functions from desires to action that cause thinkers to behave in the ways that will satisfy their desires” (2003a, p. 65). (See Fodor 2003 for a criticism of Bermúdez’s success semantic approach.)
In addition, David Armstrong (1973) has objected that the intensionality test merely undermines our justification of de dicto belief ascriptions to animals, not de re belief ascriptions, since the latter do not aim to describe how the animal thinks but simply to identify the state of affairs the animal’s thought is about. Furthermore, Armstrong argues that it is in fact de re belief ascriptions, not de dicto belief ascriptions, that we ordinarily use to describe animal beliefs. When we say that Fido believes that the cat is up the tree, for example, our intention is simply to pick out the state of affairs that Fido’s belief is about, while remaining neutral with respect to how Fido thinks about it. Roughly, what we are saying, according to Armstrong, is that Fido believes a proposition of the form Rab, where “R” is Fido’s relational concept that picks out the same two-place relation as our term “up,” “a” is Fido’s concept that refers to the same class of animals as our word “cat,” and “b” is Fido’s concept that refers to the same class of objects as our word “tree.”
One thing that Armstrong’s objection assumes is that we are at present justified in saying what objects, properties, or states of affairs in the world an animal’s belief is about. Davidson’s second argument, the argument from holism, aims to challenge this assumption. Davidson endorses a holistic principle regarding how the referents or extension of beliefs are determined. According to this principle, “[b]efore some object in, or aspect of, the world can become part of the subject matter of a belief (true or false) there must be endless true beliefs about the subject matter” (1984, p. 168). Applying this principle to the case of animals, Davidson argues that in order for us to be entitled to fix the extension of an animal’s belief, we must suppose that the animal has an endless stock of other beliefs. So, according to Davidson, to be entitled to say that Fido has a belief about a cat, we must assume that Fido has a large stock of other beliefs about cats and related things, such as that cats are three-dimensional objects that persist through various changes, that they are animals, that animals are living organisms, that cats can move freely about their environment, and so on. There is no fixed list of beliefs about cats and related items that Fido needs to possess in order to have a belief about cats, Davidson maintains, but unless Fido has a very large stock of such general beliefs, we will not be entitled to say that he has a belief about a cat as opposed to something else, such as undetached cat parts, or the surface of a cat, or a cat appearance, or a stage in the history of a cat. But in the absence of speech, Davidson claims, “there could [not] be adequate grounds for attributing the general beliefs needed for making sense of any thought” (Davidson 1985, p. 475). The upshot is that we are not, and never will be, justified even in our de re ascriptions of beliefs to animals.
One chief weakness with Davidson’s argument here is that it rests upon a radical form of holism that would appear to deny that any two human beings could have beliefs about the same things, since no two human beings ever share all (or very nearly all) the same general background beliefs on some subject. This has been taken by some philosophers as a reductio of the theory (Fodor and Lepore 1992).
Davidson’s main argument against animal thought consists of the following two steps:
First, I argue that in order to have a belief, it is necessary to have the concept of belief.
Second, I argue that in order to have the concept of belief one must have language.
(1985, p. 478)
Davidson concludes from these steps that since animals do not understand or speak a language, they cannot have beliefs. Davidson goes on to defend the centrality of belief, which holds that no creature can have thought or reason of any form without possessing beliefs, and concludes that animals are incapable of any form of thought or reason.
Davidson supports the first step of his main argument by pointing out what he sees as a logical connection between the possession of belief and the capacity for being surprised, and between the capacity for being surprised and possessing the concept belief. The idea, roughly, is that for any (empirical) proposition p, if one believes that p, then one should be surprised to discover that p is not the case, but to be surprised that p is not the case involves believing that one’s former belief that p was false, which, in turn, requires one to have the concept belief (as well as the concept falsity). (See Moser (1983) for a rendition of Davidson’s argument that avoids Davidson’s appeal to surprise.)
Davidson’s defense of the second step of his main argument is sketchier and more speculative. The general idea, however, appears to be as follows. If one has the concept belief and is thereby able to comprehend that one has beliefs, then one must also be able to comprehend that one’s beliefs are sometimes true and sometimes false, since beliefs are, by their nature, states capable of being true or false. However, to comprehend that one’s beliefs are true or false is to comprehend that they succeed or fail to depict the objective facts. But the only way for a creature to grasp the idea of a world of objective facts, Davidson speculates, is through its ability to triangulate—that is, through its ability to compare its own beliefs with those of others. Therefore, Davidson argues, since triangulation necessarily involves the capacity of ascribing beliefs to others and this capacity, according to the intensionality test and the argument from holism (see sections 1c.i and 1c.ii above), requires language, possessing the concept belief requires the possession of language.
A number of commentators on Davidson’s main argument have raised objections to his defense of its first step—that having beliefs requires having the concept belief. Carruthers (2008), Tye (1997) and Searle (1996), for example, all argue that having beliefs does not require having the concept belief. These philosophers agree that beliefs, by their nature, are states that are revisable in light of supporting or countervailing evidence presented to the senses but maintain that this process of belief revision does not require the creature to be aware of the process or to have the concept belief. Carruthers (2008) offers the most specific defense of this claim by developing an account of surprise that does not involve higher-order beliefs, as Davidson maintains. According to Carruthers’ account, being surprised simply involves a mechanism that is sensitive to conflicts between the contents of one’s beliefs—that is, conflicts with what one believes, not conflicts with the fact that one believes such contents. On this model, being surprised that there is no coin in one’s pocket involves having a mechanism in one’s head that takes as its input the content that there is a coin in one’s pocket (not the fact that one believes this content) and the content that there is no coin in one’s pocket (again, not the fact that one believes this content) and produces as its output a suite of reactions, such as releasing chemicals into the bloodstream that heighten alertness, widening the eyes, and orienting towards and attending to the perceived state of affairs one took as evidence that there is no coin in one’s pocket. It is one’s awareness of these changes, Carruthers argues, not one’s awareness that one’s former belief was false, as Davidson maintains, that constitutes being surprised.
Compared with the commentary on the first step of his main argument, there is little critical commentary in print on Davidson’s defense of the second step of his main argument. However, Lurz (1998) has raised the following objection. He argues that the intensionality test and the argument from holism at most show that belief attributions to nonlinguistic animals are unjustified but not that they are impossible. The fact that we routinely attribute beliefs to nonlinguistic animals shows that such attributions are quite possible. But, Lurz argues, if we can attribute beliefs to nonlinguistic animals on the basis of their nonlinguistic behavior, then there is no reason to think (at least, none provided by the intensionality test and the argument from holism) that a nonlinguistic animal could not in principle attribute beliefs to other nonlinguistic animals on the same basis. Of course, if the intensionality test and argument from holism are sound, such belief attributions would be unjustified, but this alone is irrelevant to whether it is possible for nonlinguistic animals to attribute beliefs to others and thereby engage in triangulation; for triangulation requires the capacity for belief attribution, not the capacity for justified belief attribution. Therefore, Lurz argues, if triangulation is possible without language, then Davidson has failed to prove that having the concept belief requires language. Furthermore, if some animals actually are capable of attributing beliefs to others, as some researchers (Premack & Woodruff 1978; Menzel 1974; Tschudin 2001) have suggested that chimpanzees and dolphins may be (though such claims are considered highly controversial at present), then even if triangulation is a requirement for having beliefs, as Davidson maintains, it may turn out that some animals (for example, chimpanzees and dolphins) actually have beliefs, contrary to what Davidson’s main argument concludes.
Although the vast majority of contemporary philosophers do not go as far as Descartes and Davidson in denying reason to animals completely, a number of them have argued for important limits on animal rationality. The arguments here are numerous and complex; so only an outline of the more influential ones is provided.
In Rationality (1964/1989), Jonathan Bennett argued that since it is impossible for animals without language to express universal beliefs (for example, All As are Bs) and past-tensed beliefs (for example, A was F) separately, they cannot possess either type of belief, on the grounds that what cannot be manifested separately in behavior cannot exist as distinct and separate states in the mind. A consequence of this argument is that animals cannot think or reason about matters beyond their own particular and immediate circumstances. In Linguistic Behaviour (1976), Bennett went further and argued that animals cannot draw logical inferences from their beliefs, on the grounds that if they did, they would do so for every belief that they possessed, which is absurd. According to this argument, Fido may believe that the cat is in the tree, as well as believe that there is an animal in the tree, but he cannot come to have the latter belief as a result of inferring it from the former.
More recently, José Bermúdez (2003a) has argued that the ability to think about thoughts (what Bermúdez calls “intentional ascent”) requires the ability to think about words in one’s natural language (what Bermúdez calls “semantic ascent”), and that since animals cannot do the latter, they cannot do the former. Bermúdez’s argument that intentional ascent requires semantic ascent is, roughly, that thinking about thought involves the ability “‘to hold a thought in mind’ in such a way that can only be done if the thought is linguistically vehicled” via a natural language sentence that one understands (p. ix). The idea is that the only way for a creature to grasp and think about a thought (that is, an abstract proposition) is by its saying, writing, or bringing to mind a concrete sentence that expresses the thought in question. Bermúdez goes on to argue that the ability to think about thoughts (propositions) is involved in a wide variety of types of reasoning, from thinking about and reasoning with truth-functional, temporal, modal, and quantified propositions, to thinking and reasoning about one’s own and others’ propositional attitudes (for example, beliefs and desires). Bermúdez concludes that since animals do not think about words or sentences in a natural language, their thinking and reasoning are restricted to observable states of affairs in their environment. However, see Lurz (2007) for critical comment on Bermúdez’s argument here.
Finally, John Searle (1994) has argued that since animals lack certain linguistic abilities, they cannot think or reason about institutional facts (for example, facts about money or marriages), facts about the distant past (for example, facts about matters before their birth), logically complex facts (for example, subjunctive facts or facts that involve mixed quantifiers), or facts that can only be represented via some symbolic system (for example, facts pertaining to the days of the week). In addition, and more interestingly, Searle (2001) has argued that since animals cannot perform certain speech acts such as asserting, they cannot have desire-independent reasons for action. According to this argument, animals act only for the sake of satisfying some non-rationally assessable desire (for example, the satisfaction of hunger) and never out of a sense of commitment. Consequently, if acts of courage, fidelity, loyalty, and parental commitment involve desire-independent reasons for action, as they arguably do, then on Searle’s argument here, no animal is or can be courageous, faithful, loyal, or a committed parent.
There are four types of arguments in contemporary philosophy for animal thought and reason. The first is the argument from the intentional systems theory championed by Daniel Dennett (1987, 1995, 1997). The second is the argument from common-sense functionalism championed by (among others) Jerry Fodor (1987), Stephen Stich (1979) and Peter Carruthers (2004). The third is the argument from biological naturalism, championed by John Searle (1994). And the fourth is the argument from science championed by (among others) Allen and Bekoff (1997) and Bermúdez (2003a).
The intentional systems theory consists of two general ideas. The first is that our concepts of intentional states, such as our concepts belief, desire, and perceiving, are theoretical concepts whose identity and existence are determined by a common-sense psychological theory, or folk psychology. Folk psychology is a set of general principles stating that subjects, on the assumption that they are rational, tend to believe what they perceive, tend to draw obvious logical inferences from their beliefs, and tend to act so as to satisfy their desires given what they believe. In many cases, we apply our folk psychology to animals to predict and make sense of their behaviors. When we do, we view animals as intentional systems and take up what Dennett (1987) calls the intentional stance toward them. The second important idea of the intentional systems theory is its instrumentalist interpretation of folk psychology. On the instrumentalist interpretation, what it is for a creature to have intentional states is for its behaviors to be well predicted and explained by the principles of folk psychology. Nothing more is required. There need not be anything inside the creature’s brain or body, for instance, that corresponds to or has structural or functional features similar to the intentional state concepts employed in our folk psychology. Our intentional state concepts, on the instrumentalist reading, do not aim to refer to real, concrete internal states of subjects but to abstract entities that are merely useful constructs for predicting and explaining various behaviors (much like centers of gravity used in mechanics). Therefore, according to the intentional systems theory argument, the fact that much of animal behavior is usefully predicted and explained from the intentional stance makes animals genuine thinkers and reasoners.
There are two general types of objections raised against the intentional systems theory argument. First, some have argued (Searle 1983) that our intentional state concepts are not theoretical concepts, since intentional states are experienced and, hence, our concepts of them are independent of our having any theory about them. Second, some (Braddon-Mitchell & Jackson 2007) have objected to the intentional systems theory’s commitment to instrumentalism, arguing that on such an interpretation of folk psychology, even lowly thermostats, laptop computers, and Blockheaded robots have beliefs and desires, since it is useful to predict and explain behaviors of such objects from the intentional stance.
Similar to the intentional systems theory, common-sense functionalism holds that our intentional state concepts are theoretical concepts that belong to and are determined by our folk psychology. Unlike the intentional systems theory, however, common-sense functionalism takes a realist interpretation of folk psychology. (In addition, many common-sense functionalists reject the rationality assumption that the intentional systems theory places on folk psychology (Fodor 1987, 1991).) On the realist interpretation, for a subject to have intentional states is for the subject to have in his brain a variety of discrete internal states that play the causal roles and have the internal structures that our intentional state concepts describe. According to this view, if Fido believes that the cat is up the tree, then he has in his brain an individual state, s, that plays the causal role that beliefs play according to our folk psychology, and s has an internal structure similar to the “that”-clause used to specify its content—that is, s has the structure Rxy where “R” represents the two-place relation up, “x” represents the cat, and “y” represents the tree. Since the internal state s is seen as having an internal structure similar to the sentence “the cat is up the tree,” common-sense functionalism is often taken to support the view that thinking involves an internal language or language of thought (Fodor 1975). It is then argued that since animal behavior is successfully predicted and explained by our folk psychology, there are defeasible grounds for supposing that animals actually have such internal states in their heads (Fodor 1987; Stich 1979; Carruthers 2004).
Two problems are typically raised regarding the argument from common-sense functionalism. Some (Stalnaker 1999) have objected that if, as common-sense functionalism claims, our ascriptions of intentional states to animals commit us to thinking that the animals have in their heads states that have the same representational structure as the “that”-clauses we use to specify their contents, then intentional ascriptions to animals (and to ourselves) would be a far more speculative practice than it actually is. The objection here does not deny that animals actually have such representational structures in their heads; it simply denies that that is what we are saying or thinking when we ascribe intentional states to them. Others (Camp 2009) accept the common-sense functionalist account of intentional state concepts but have argued, on the basis of Evans’s (1982) generality constraint principle, that few animals have the sorts of structured representational states in their heads that folk psychology describes them as having. If Fido’s thoughts are structured in the way that common-sense functionalism claims, the objection runs, then if Fido is able to think that he is chasing a cat, then he must also be capable of thinking that a cat is chasing him, but, it is argued, this may be a thought that is completely unthinkable by Fido. However, see Carruthers (2009) and Tetzlaff and Rey (2009) for important objections to this type of argument.
Biological naturalism, championed by John Searle (1983, 1992), is the theory that our concepts of intentional states are concepts of experienced subjective states. The concept belief, for example, is the concept of an experienced, conscious state that has truth conditions and mind-to-world direction of fit; whereas our concept desire is the concept of an experienced, conscious state that has satisfaction conditions and world-to-mind direction of fit. Intentional states, according to this theory, are irreducibly subjective states that are caused by low-level biochemical states of the brain in virtue of their causal structures, not in virtue of their functional or causal roles, or, if they have such, their representational structures. According to biological naturalism, if Fido believes that the cat is in the tree, then he has in his brain a low-level biochemical state, s, that, in virtue of its unique causal structure, causes Fido to have a subjective experience that has a mind-to-world direction of fit and is true if and only if the cat is in the tree.
Searle argues that there are two main reasons why we find it irresistible to suppose that animals have intentional states, as biological naturalism conceives them. First, many animals have perceptual organs (for example, eyes, ears, mouths, and skin) that we see as similar to our own and which, we assume, operate according to similar physiological principles. Since we know in our own case that the stimulation of our perceptual organs leads to certain physiological processes which cause us to have certain perceptual experiences, we reason, from the principle of similar cause-similar effect, that the stimulation of perceptual organs in animals leads to similar physiological processes which cause them to have similar perceptual experiences. The behavior of animals, Searle repeatedly stresses, is by itself irrelevant to why we think animals have perceptual experiences; it is only relevant if we take the behavior to be caused by the stimulation of perceptual organs and underlying physiological processes relevantly similar to our own. This argument, of course, would only account for why we think that animals have perceptual experiences, not why we think that they have beliefs, desires, and other intentional states that are only distantly related to the stimulation of sensory organs. So Searle adds that the second reason we find it irresistible that animals have intentional states is that we cannot make sense of their behaviors otherwise. To make sense of why Fido is still barking up the tree when the cat is long out of sight, for example, we must suppose that Fido continues to want to catch the cat and continues to think that the cat is up the tree.
There are two main problems with Searle’s argument for animal thought and reason. First, according to biological naturalism, animals have intentional states solely in virtue of their having brain states that are relevantly similar in causal structure to those in human beings which cause us to have intentional states. But this raises the question: how are we to determine whether the brain states of animals are relevantly similar to our own? They will not be exactly similar, since animal brains and human brains are different. Suppose, for example, scientists discover that a certain type of electro-chemical process (XYZ) in human brains is necessary and sufficient for intentional states in us, and that an electro-chemical process (PDQ) similar to XYZ occurs in animal brains. Is PDQ similar enough to XYZ to produce intentional states in animals? Well, suppose PDQ produces behaviors in animals that are similar to those that XYZ produces in humans. Would that show that PDQ is enough like XYZ to produce intentional states in animals? No, says Searle, for unless those behaviors are produced by relevantly similar physiological processes they are simply irrelevant to whether the animal has intentional states. But that is precisely what we are trying to determine here, of course. It would appear that the only way to determine whether PDQ is similar enough to XYZ, on biological naturalism, is if we humans could temporarily exchange our brains for those of animals and see whether PDQ produces intentional states in us. This, of course, is impossible. And so it would appear that the question of whether animals have intentional states is, on biological naturalism, unknowable in principle.
Finally, Searle’s explanation for why we find it irresistible to ascribe perceptual experiences to animals seems questionable in some cases. If Searle’s explanation were correct, then most ordinary individuals should not find it at all compelling, for example, to ascribe auditory experiences (that is, hearing) to birds, or tactile experiences (that is, feelings of pressures, pain, or temperature) to fish or armadillos, since most ordinary individuals do not see anything on birds’ heads that looks like ears or on the outer surface of fish or armadillos that looks like skin.
Why should we believe that colds are caused by viruses and not by drastic changes in weather, as many folk have believed (and still do)? A reasonable answer is that our best scientific theory of the causes of colds is in terms of viruses, commonsense notwithstanding. Sometimes, of course, science and commonsense agree, and when they do, commonsense can be said to be vindicated by science. In either case, it is science that ultimately determines what should (and should not) be believed. This type of argument, sometimes called the argument from science, has been used to justify the claim that animals have thought, reason, consciousness, and other folk-psychological states of mind (see Allen & Bekoff 1997; Bermúdez 2003a). In the past thirty years or so, due in large measure to the demise of radical behaviorism and the birth of cognitivism in psychology, as well as to the influential writings of ethologist Donald Griffin (1976, 1984, 2001), scientists from various fields have found it increasingly useful to propose, test, and ultimately accept hypotheses about the causes of animal behavior in explicitly folk-psychological terms. It is quite common these days to see scientific articles on whether, for example, animals have conscious experiences such as pain, seeing, and (even) joy (Griffin & Speck 2004; Panksepp & Burgdorf 2003), on whether scrub jays have desires and beliefs and can recollect their pasts (Clayton et al. 2006), on whether primates understand that other animals know, see, and hear (Hare et al. 2000; Hare et al. 2001; Santos et al. 2006), on whether primates make judgments about their own states of knowledge and ignorance (Hampton et al. 2004; Smith et al. 2003), and so on. According to the argument, since scientists are finding it useful to test and accept hypotheses about animal behavior in folk-psychological terms, we are justified in believing that animals have such states of mind.
Not everyone has found the argument from science here convincing, however. The chief concern is whether explanations of animal behavior in folk-psychological terms are, as the argument assumes, scientifically respectable (see Kennedy 1992). There are two features of scientific explanations of animal behavior that appear to count against their being so. First, scientific explanations of animal behavior are causal explanations in terms of concrete internal states of the animal, but on some models of folk-psychology, such as Dennett’s intentional systems theory (see 1.e.i. above), folk-psychological explanations are not causal explanations, nor do they imply anything about the internal states of the animal. Second, scientific explanations of animal behavior are objective in that there is typically a general agreement among researchers in the field on what would count in favor of or against the explanation; however, it has been argued that since the only generally agreed upon indicators of consciousness are verbal reports of the subject, explanations of animal behavior in terms of consciousness are unscientific (see Clayton et al. 2006, p. 206).
One standard type of reply to these objections has been to adopt a common-sense functionalist model of folk-psychology (see 1e.ii above) which interprets folk-psychological explanations as imputing causally efficacious internal states while denying that these explanations imply anything about the consciousness of the internal states. (This seems to be the approach that Clayton et al. (2006) take in their explanation of the behaviors of scrub jays in terms of “episodic-like” memories, which are episodic memories minus consciousness.) This, of course, raises the vexing issue of whether our folk-psychological concepts, such as belief, desire, intention, seeing, and so forth, imply consciousness (see Carruthers 2005; Lurz 2002a; Searle 1992; Stich 1979). Others have responded to the above objections by developing non-subjective measures for consciousness that could be applied to animals (and humans) incapable of verbal reports (Dretske 2006). And still others have proposed objective measures of consciousness in animals by appealing to the communicative signals of animals as non-verbal reports of the presence of conscious experiences (Griffin 1976, 1984, 2001).
It is generally accepted that most (if not all) types of mental states can be either conscious or unconscious, and that unconscious mental states can have effects on behavior that are not altogether dissimilar from those of their conscious counterparts. It is quite common, for example, for one to have a belief (for example, that one’s keys are in one’s jacket pocket) and a desire (for example, to locate one’s keys) that are responsible for some behavior (for example, reaching into one’s jacket pocket as one approaches one’s apartment) even though at the time of the behavior (and beforehand) one’s mind is preoccupied with matters completely unrelated to one’s belief or desire. Similarly, scientists have shown through various masking experiments and the like that our behaviors are often influenced by stimuli that are perceived below the level of consciousness (Marcel 1983). Also some philosophers have argued that even pains and other bodily sensations can be unconscious, such as when one continues to limp from a pain in one’s leg though at the time one is preoccupied with other matters and is not attending to the pain (Tye 1995).
Given this distinction between conscious and unconscious mental states, the question arises whether the mental states of animals are or can be conscious. It should be noted that this question not only has theoretical import but moral and practical import, as well. For arguably the fact that conscious pains and experiences feel a certain way to their subjects makes them morally relevant conditions, and it is, therefore, of moral and practical concern to determine whether the mental states of animals are conscious (Carruthers 1992). Of course, as with the question of animal thought and reason, the answer to this question depends in large part on what one takes consciousness to be. There are two general philosophical approaches to consciousness—typically referred to as first-order and higher-order theories—that have played a prominent role in the debate over the status of animal consciousness. These two approaches and their relevance to the question of conscious states in animals are described below.
Higher-order theories of consciousness start with the common assumption that conscious mental states are states of which one is higher-order aware, and unconscious mental states are states of which one is not higher-order aware. The theories diverge, however, over what is involved in being higher-order aware of one’s mental states.
Inner-sense theories take a subject’s higher-order awareness to be a type of perceptual awareness, akin to seeing, that is directed inwardly toward the mind as opposed to outwardly toward the world (Lycan 1996; Armstrong 1997). Since higher-order awareness is a species of perceptual awareness, on this view, it is not usually taken to require the capacity for higher-order thought or the possession of mental-state concepts. A subject need not be able to think that he is in pain or have the concepts I or pain, for example, in order for him to be higher-order aware of his pain. On the inner-sense theory, then, the mental states of animals will be conscious just in case they are higher-order aware of them by means of an inner perception.
Some inner-sense theorists have argued that since higher-order awareness does not require higher-order thought or the possession of mental-state concepts, it is quite consistent with what we know about animal behavior and brains that many animals may have such an awareness of their own mental states. Furthermore, there are recent studies in comparative psychology (Smith et al. 2003; Hampton et al. 2004) that suggest that monkeys, apes and dolphins actually have the capacity to be higher-order aware of their own states of certainty, memory, and knowledge. However, the results of these studies have not gone unchallenged (see Carruthers 2008).
The chief problem with inner-sense theories, however, is not so much their account of animal consciousness but their account of higher-order awareness. Some (Rosenthal 1986; Shoemaker 1996) have argued against a perceptual account of higher-order awareness on the grounds that (i) there is no dedicated perceptual organ in the brain for such a perception as there is for external perception; (ii) there is no distinct phenomenology associated with higher-order awareness as there is for all other types of perceptual modalities; and (iii) it is impossible to reposition oneself in relation to one’s mental states so as to get a better perception of them as one can do in the case of perception of external objects. Others (Lurz 2003) have objected that the inner-sense theory cannot explain how concept-involving mental states, such as beliefs and desires, can be conscious, since to be aware of such states would require being aware of their conceptual contents, which cannot be done by way of a perceptual awareness that is not itself concept-involving.
Problems such as these have led a number of higher-order theorists (Rosenthal 1986; Carruthers 2000) to embrace some version or other of the higher-order thought theory. According to this theory, a mental state is conscious just in case one has (or is disposed to have) the higher-order thought that one is in such a mental state. Animals will have conscious mental states, on this theory, if and only if they are capable of higher-order thoughts about themselves as having mental states. The question of animal consciousness, then, becomes the question of whether animals are capable of such higher-order thought.
A number of philosophers have argued that animals are incapable of such thought. Some have argued that since higher-order thoughts require the possession of the first-person I-concept, it is unlikely that animals are capable of having them. The selves of animals, the argument runs, are selves that experience numerous mental states at any one moment in time and that persist through various changes to their mental states. Thus, if an animal possesses the I-concept, it must be capable of understanding itself as such an entity—that is, it must be capable of thinking not only, I am currently in pain, for example, but I am currently in pain, am seeing, am hearing, am smelling, as well as be capable of thinking I was in such-and-such mental states but am not now. However, such thoughts appear to involve the mental equivalent of pronominal reference and past-tensed thoughts, both of which, it is argued, are impossible without language (see Quine 1995; Bermúdez 2003a; Bennett 1964, 1966, 1988).
Various objections have been raised against this argument on behalf of the higher-order theory and animal consciousness. Gennaro (2004, 2009) argues that the I-concept involved in higher-order thoughts need be no more sophisticated than the concept this particular body or the concept experiencer of mental states, and that the results of various self-recognition studies with apes, dolphins, and elephants, as well as the results of a number of episodic memory tests with scrub jays, suggest that many animals possess such minimal I-concepts (Parker et al. 1994; Clayton et al. 2003). Lurz (1999) goes further and argues that insofar as higher-order thoughts confer consciousness on mental states, they need not involve any I-concept at all. The idea here is that just as one can be aware that it is raining, where the “it” here is not used to express one’s concept of a thing or a subject—for there is no thing or subject that is raining—an animal can be aware that it hurts or thinks that p, where the “it” here does not express a concept of a thing or a subject that is thought to possess pain or to think that p. Animals, on this view, are thought to conceive of their mental states as we conceive of rain and snow—that is, as subject-less features placed at a time (see Strawson (1959) and Proust (2009) for similar arguments).
The most common argument against animals possessing higher-order thought, however, is that such thoughts require linguistic capabilities and mental-state concepts that animals do not possess. Dennett (1991), for example, argues that the ability to say what mental state one is in is the very basis of one’s having the higher-order thought that one is in such a mental state, and not the other way round. To think otherwise, Dennett argues, is to commit oneself to an objectionable Cartesian theater view of the mind. According to Dennett’s argument, since animals are incapable of saying what they are feeling or thinking, they are incapable of thinking that they are feeling or thinking. In reply, Carruthers (1996) has argued that there is a way of understanding higher-order thoughts that is not tied to linguistic expression of any kind or committed to a Cartesian theater view of the mind.
In a somewhat similar vein of thought to Dennett’s, Davidson (1984, 1985) and Bermúdez (2003a) argue, although on different grounds, that since animals are incapable of speaking and interpreting a natural language, they cannot possess mental-state concepts for propositional attitudes and, therefore, cannot have higher-order thoughts about their own or others’ propositional attitudes (see sections 1c and 1d.iii above). This alone, of course, is not sufficient to prove that animals are incapable of higher-order thoughts about non-propositional mental states, such as bodily sensations and perceptual experiences. However, some have gone further and argued that animals are incapable of possessing any type of mental-state concept and, therefore, any type of higher-order thought. The argument for this view generally consists of the following two main premises: (1) if animals possess mental-state concepts, then they must have the capacity to apply these concepts to themselves as well as to other animals; and (2) animals have been shown to perform poorly in some important experiments designed to test whether they can apply mental-state concepts to other animals.
Premise (1) of this argument is sometimes supported (Seager 2004) by an appeal to Evans’s generality constraint (see section 1e.ii above); roughly, the argument runs, if an animal can think, for example, I am in pain, and can think of another animal that, for example, he walks, then the animal in question must be capable of thinking of another animal, he is in pain, as well as be capable of thinking of himself, I walk. Others, however, have supported premise (1) on evolutionary grounds, arguing that animals would not have evolved the capacity to think with mental-state concepts unless their doing so was of some selective advantage, and the only selective advantage of thinking with mental-state concepts is its use in anticipating and manipulating other animals’ behaviors (Humphrey 1976). Premise (2) of this argument has been supported mainly by the results of a series of experiments conducted by Povinelli and colleagues (see Povinelli & Vonk 2004) which appear to show that chimpanzees are incapable of discriminating between seeing and not seeing in other subjects.
Various objections have been raised against such defenses of premises (1) and (2). Gennaro (2009), for example, has argued against the defense of premise (1) based on Evans’s generality constraint. Others have argued that, contrary to the evolutionary defense given for premise (1), the principal selective advantage of thinking with mental-state concepts is its use in recognizing and correcting errors in one’s own thinking, and that the results of various meta-cognition studies have shown that various animals are capable of reflecting upon and improving their patterns of thinking (Smith et al. 2003). (However, see Carruthers (2008) for a critique of such higher-order interpretations of these studies.) And with respect to premise (2), others have argued that, contrary to Povinelli’s interpretation, chimpanzees fail such discrimination tasks not because they are unable to attribute mental states to others but because the experimental tasks are unnatural and confusing for the animals, and that when the experimental tasks are more suitable and natural, such as those used in competitive paradigms (Hare et al. 2000; Hare et al. 2001; Santos et al. 2006), the animals show signs of mental-state attribution. However, see Penn and Povinelli (2007) for challenges to the supposed successes of mental-state attributions by animals in these new experimental protocols and for suggestions on how to improve experimental methods used in testing mental-state attributions in animals.
According to first-order theories, conscious mental states are those that make one conscious of things or facts in the external environment (Evans 1982; Tye 1995; Dretske 1995). Mental states are not conscious because one is higher-order aware of them but because the states themselves make one aware of the external world. Unconscious mental states, therefore, are mental states that fail to make one conscious of things or facts in the environment—although they may have various effects on one’s behavior. Furthermore, mental states that make subjects conscious of things or facts in the environment do so, according to first-order theories, in virtue of their affecting, or being poised to affect, subjects’ belief-forming systems. So, for example, one’s current perception of the computer screen is conscious, on such theories, because it causes, or is poised to cause, one to believe that there is a computer screen before one; whereas those perceptual states that are involved in subliminal perception, for instance, are not conscious because they do not cause, nor are poised to cause, subjects to form beliefs about the environment.
First-order theorists argue (Tye 1997; Dretske 1995) that many varieties of animals, from fish to bees to chimpanzees, form beliefs about their environment based upon their perceptual states and bodily sensations and, therefore, enjoy conscious perceptual states and bodily sensations. Additional virtues of first-order theories, it is argued, are that they offer a more parsimonious account of consciousness than higher-order theories, since they do not require higher-order awareness for consciousness, and that they provide a more plausible account of animal consciousness than higher-order theories, since they ascribe consciousness to animals that we intuitively believe to possess conscious perceptual states (for example, bats and mice) but do not intuitively believe to possess higher-order awareness.
It has been argued (Lurz 2004, 2006), however, that first-order theories are at their best when explaining the consciousness of perceptual states and bodily sensations but have difficulty explaining the consciousness of beliefs and desires. Most first-order theorists have responded to this problem by endorsing a higher-order thought theory of consciousness for such mental states (Tye 1997; Dretske 2000, p. 188). On such a hybrid view, beliefs and desires are conscious in virtue of one’s having higher-order thoughts about them, while perceptual states and bodily sensations are conscious in virtue of their being poised to make an impact on one’s belief-forming system. This hybrid view faces two important problems, however. First, on such a view, few, if any, animals would be capable of conscious beliefs and desires, since it seems implausible, for various reasons, to suppose that many animals are capable of higher-order thoughts about their own beliefs and desires. And yet it has been argued (Lurz 2002b) that there are intuitively compelling grounds for thinking that many animals are capable of conscious beliefs and desires, since their behaviors are quite often predictable and explainable in terms of the concepts belief and desire of our folk psychology, which is a set of laws about the causal properties and interactions of conscious beliefs and desires (or, at the very least, a set of laws about the causal properties and interactions of beliefs and desires that are apt to be conscious (Stich 1978)). However, see Carruthers (2005) for a reply to this argument.
The second problem for the hybrid view is that on its most plausible rendition it would ascribe consciousness to the same limited class of animals as higher-order thought theory and, thereby, provide no more of an intuitively plausible account of animal consciousness than its main competitor. For it seems intuitively plausible to suppose that a perceptual state or bodily sensation will be conscious only if it affects, or is poised to affect, a subject’s conscious belief-forming system. If it were discovered, for example, that the perceptual states involved in subliminal perception (or blindsight) caused subjects to form unconscious beliefs about the environment, no one but the most committed first-order theorist would conclude from this alone that these perceptual states were, after all, conscious. But if perceptual states and bodily sensations are conscious only insofar as they affect (or are poised to affect) a subject’s conscious belief-forming system, and conscious beliefs, on the hybrid view, require higher-order thought, then to possess conscious perceptions and bodily sensations, an animal would have to be, as higher-order thought theories maintain, capable of higher-order thought. What appears to be needed here in order to save first-order theories from this problem is a first-order account of conscious beliefs and desires. See Lurz (2006) for a sketch of such an account.
There are many other important issues in the philosophy of animal minds in addition to those directly related to the nature and scope of animal thought, reason, and consciousness. Due to considerations of length, however, only a brief list of such issues with reference to a few relevant and important sources is provided.
The nature and extent of animal emotions has been, and continues to be, an important issue in the philosophy of animal minds (see Nussbaum 2001; Roberts 1996, 2009; Griffiths 1997), as has the nature and extent of propositional knowledge in animals (see Kornblith 2002). Philosophers have also been particularly interested in the philosophical foundations and the methodological principles, such as Lloyd Morgan’s canon, employed in the various sciences that study animal cognition and consciousness (see Bekoff et al. 2002; Allen and Bekoff 1997; Fitzpatrick 2007, 2009; Sober 1998, 2001a, 2001b, 2005). Philosophers have also been interested in the nature and justification of the practice of anthropomorphism by scientists and lay folk (Mitchell et al. 1997; Bekoff & Jamieson 1996; Daston & Mitman 2005). And finally, there is a rich history of philosophical thought on animal minds dating back to the earliest stages of philosophy, and there has been, and continues to be, philosophical interest in issues related to the history of the philosophy of animal minds (see Sorabji 1993; Wilson 1995; DeGrazia 1994).
Last updated: April 14, 2009 | Originally published: June 19, 2008
Article printed from Internet Encyclopedia of Philosophy: http://www.iep.utm.edu/ani-mind/
Copyright © The Internet Encyclopedia of Philosophy. All rights reserved.