Functionalism

Functionalism is a theory about the nature of mental states. According to functionalism, mental states are identified by what they do rather than by what they are made of. The idea can be introduced by thinking about artifacts like mousetraps and keys, though the original motivation for functionalism comes from the helpful comparison of minds with computers; that comparison, however, is only an analogy. The main arguments for functionalism depend on showing that it is superior to its primary competitors: identity theory and behaviorism. Contrasted with behaviorism, functionalism retains the traditional idea that mental states are internal states of thinking creatures. Contrasted with identity theory, functionalism introduces the idea that mental states are multiply realized.

Objections to functionalism generally charge that it classifies too many things as having mental states, or at least attributes to creatures more kinds of mental states than psychologists usually accept. The effectiveness of the arguments for and against functionalism depends in part on the particular variety in question, and on whether it is a stronger or weaker version of the theory. This article explains the core ideas behind functionalism and surveys the primary arguments for and against it.

In one version or another, functionalism remains the most widely accepted theory of the nature of mental states among contemporary theorists. Nevertheless, in view of the difficulties of working out the details of functionalist theories, some philosophers have been inclined to offer supervenience theories of mental states as alternatives to functionalism.

Table of Contents

  1. Functionalism Introduced
  2. The Core Idea
  3. Being as Doing
  4. The Case for Functionalism
  5. Searle’s Chinese Room
  6. Zombies
  7. Stronger and Weaker Forms of Functionalism
  8. Conclusion
  9. References and Further Reading
    1. References
    2. Suggested Reading

1. Functionalism Introduced

Functionalism is a theory about the nature of mental states. According to functionalists, mental states are identified by what they do rather than by what they are made of. Functionalism is the most familiar or “received” view in contemporary philosophy of mind and cognitive science.

2. The Core Idea

Consider, for example, mouse traps. Mouse traps are devices for catching or killing mice. Mouse traps can be made of almost any material, and indefinitely many designs could be employed. The most familiar sort involves a wooden platform and a metal strike bar that is driven by a coiled metal spring and can be released by a trigger. But there are mouse traps designed with adhesives, boxes, poisons, and so on. All that matters to something’s being a mouse trap, at the end of the day, is that it is capable of catching or killing mice.

Contrast mouse traps with diamonds. Diamonds are valued for their hardness, their optical properties, and their rarity in nature. But not every hard, transparent, white, rare crystal is a diamond—the most infamous alternative being cubic zirconia. Diamonds are carbon crystals with specific molecular lattice structures. Being a diamond is a matter of being a certain kind of physical stuff. (That cubic zirconia is not quite as clear or hard as diamonds explains something about why it is not equally valued. But even if it were equally hard and equally clear, a CZ crystal would not thereby be a diamond.)

These examples can be used to explain the core idea of functionalism. Functionalism is the theory that mental states are more like mouse traps than they are like diamonds. That is, what makes something a mental state is a matter of what it does rather than what it is made of. This distinguishes functionalism from traditional mind-body dualism, such as that of René Descartes, according to which minds are made of a special kind of substance, the res cogitans (the thinking substance). It also distinguishes functionalism from contemporary monisms such as J. J. C. Smart’s mind-brain identity theory. The identity theory says that mental states are particular kinds of biological states—namely, states of brains—and so presumably have to be made of certain kinds of stuff, namely, brain stuff. Mental states, according to the identity theory, are more like diamonds than like mouse traps. Functionalism is also distinguished from B. F. Skinner’s behaviorism because it accepts the reality of internal mental states, rather than simply attributing psychological states to the whole organism. According to behaviorism, which mental states a creature has depends just on how it behaves (or is disposed to behave) in response to stimuli. In contrast, functionalists typically believe that internal psychological states can be distinguished with a “finer grain” than behavior—that is, distinct internal or psychological states could result in the same behaviors. So functionalists think that it is what the internal states do that makes them mental states, not just what is done by the creature of which they are parts.

As it has thus far been explained, functionalism is a theory about the nature of mental states. As such, it is an ontological or metaphysical theory, and this is how it will be discussed below. But it is also worth noting that functionalism comes in other varieties. Functionalism could be a philosophical theory about psychological explanations (that psychological states are explained as functional states) or about psychological theories (that psychological theories take the form of functional theories). Functionalism can also be employed as a theory of mental content, both as an account of the intentionality of mental states in general (what makes some states intentional is that they function in certain ways) and as an account of particular semantic content (what makes some state have the content “tree” is that it plays a certain role vis-à-vis trees). Finally, functionalism may be viewed as a methodological account of psychology, the theory that psychology should be pursued by studying how psychological systems operate. (For detailed discussion of these variations, see Polger, 2004, ch. 3.)

Often philosophers and cognitive scientists have subscribed to more than one of these versions of functionalism together. Sometimes it is thought that some require others, or at least that some entail others when combined with certain background assumptions. For example, if one believes, following Franz Brentano, that “intentionality is the mark of the mental,” then any theory of intentionality can be converted into a theory of the ontological nature of psychological states. If so, intentional functionalism may entail metaphysical functionalism.

All this being said, metaphysical functionalism is the central doctrine and probably the most widely endorsed. So in what follows the metaphysical variety will be the focus.

3. Being as Doing

Before looking at the arguments for and against functionalism, it is necessary to clarify the idea that, for mental states, being is doing.

Plausibly, a physical stuff kind such as diamond has a physical or structural essence: being a diamond is a matter of having a certain composition or constitution, quite independently of what diamonds do or can be used to do. It happens that diamonds can cut glass, but so can many other things that are not diamonds. And if no diamond ever did or could cut glass (perhaps Descartes’ evil demon ensures that all glass is impenetrable), diamonds would not thereby cease to be diamonds.

But it is also plausible that not all kinds are constituted in this way. Some things may be essentially constituted by their relations to other things, and by what they can do. The most obvious examples are artifacts like mousetraps and keys. Being a key is not a matter of being a physical thing with a certain composition; it is a matter of being a thing that can be used to perform a certain action, namely, opening a lock. The kind lock is likewise not a physical stuff kind, but a kind that exists only in relation to (among other things) keys. There may be metal keys, wood keys, plastic keys, digital keys, or key-words. What makes something a key is not its material composition or lack thereof, but rather what it does, or could do, or is supposed to do. (Making sense of the claim that there is something that some kinds of things are supposed to do is one of the important challenges for functionalists.)

The activities that a key performs, could perform, or is supposed to perform may be called its functions. So one can say that keys are essentially things that have certain functions, i.e., they are functional entities. (Or: the kind key is a functional kind.)
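
Readers who think in code may find a programming analogy helpful here: a functional kind is like an interface, which can have arbitrarily many implementations. The sketch below (in Python, with names invented purely for illustration) treats key as a role that anything with the right capacities can occupy:

```python
# A minimal sketch of a functional kind as an interface (hypothetical names).
# What makes something a Key is what it does (opening locks), not its material.
from typing import Protocol


class Lock:
    def __init__(self, code: str) -> None:
        self.code = code


class Key(Protocol):
    """The functional role: anything that can open a lock."""
    def open(self, lock: Lock) -> bool: ...


class MetalKey:
    """One physical realization of the key role."""
    def __init__(self, cut: str) -> None:
        self.cut = cut

    def open(self, lock: Lock) -> bool:
        return self.cut == lock.code


class DigitalKey:
    """A realization in entirely different 'stuff'."""
    def __init__(self, token: str) -> None:
        self.token = token

    def open(self, lock: Lock) -> bool:
        return self.token == lock.code


def unlocks(key: Key, lock: Lock) -> bool:
    # Only the role matters here; the realizer's composition is invisible.
    return key.open(lock)


front_door = Lock("1234")
print(unlocks(MetalKey("1234"), front_door))    # True
print(unlocks(DigitalKey("1234"), front_door))  # True
```

Nothing in the unlocks function cares whether the key is metal, plastic, or digital; in the same way, nothing about a functional kind fixes what its instances are made of.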

The functionalist idea is, in some forms, quite ancient. One can find in Aristotle the idea that things have their functions or purposes—their telos—essentially. In contemporary theories applied to the mind, the functions in question are usually taken to be those that mediate between stimulus (and psychological) inputs and behavioral (and psychological) outputs. Hilary Putnam’s contribution was to model these functions using the contemporary idea of computing machines and programs, where the program of the machine fixes how it mediates between its inputs and standing states, on one hand, and its outputs and other standing states, on the other. Modern computers demonstrate that quite complex processes can be implemented in finite devices working by basic mechanical principles. If minds are functional devices of this sort, then one can begin to understand how physical human bodies can produce the tremendous variety of actions and reactions that are associated with our full, rich mental lives. The best theory, Putnam hypothesized, is that mental states are functional states—that the kind mind is a functional kind.
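
Putnam’s own models were probabilistic automata, but a simple deterministic machine table conveys the idea. In the toy sketch below (the states and stimuli are invented for illustration, not Putnam’s own example), the table is the “program” that fixes how the system mediates between inputs and standing states, on the one hand, and outputs and successor states, on the other:

```python
# A toy machine table in the spirit of machine functionalism. (A sketch only:
# the states and stimuli are hypothetical, and real proposals used
# probabilistic rather than deterministic automata.)

# (current state, input) -> (output, next state)
MACHINE_TABLE = {
    ("calm", "tissue damage"): ("wince", "pain"),
    ("calm", "aspirin"):       ("do nothing", "calm"),
    ("pain", "tissue damage"): ("cry out", "pain"),
    ("pain", "aspirin"):       ("relax", "calm"),
}


def step(state: str, stimulus: str) -> tuple[str, str]:
    """The 'program': given a standing state and an input, fix the output
    and the successor state."""
    return MACHINE_TABLE[(state, stimulus)]


state = "calm"
for stimulus in ["tissue damage", "tissue damage", "aspirin"]:
    output, state = step(state, stimulus)
    print(f"{stimulus} -> {output} (now in state: {state})")
```

On this picture, to be in pain is just to be in whatever state occupies the “pain” row of the right sort of table; any hardware that realizes the same table realizes the same states.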

The initial inspiration for functionalism comes from the useful analogy of minds with computing machines, as noted above. Putnam was certainly not the first to notice that this comparison could be theoretically fruitful. But in his “functionalist papers” of the 1960s, he methodically explored the utility of the comparison and oversaw its transition from mere analogy to comprehensive theory, culminating with his classic defense of the functional state theory in his 1967 paper, “The Nature of Mental States.” There Putnam advanced the case for functionalism as a serious theoretical hypothesis, and his argument goes beyond the mere claim that it is fruitful to think of minds as being in many ways similar to machines. The argument aims to establish the conclusion that the best theory is the one that holds that minds “just are” machines of a certain sort.

4. The Case for Functionalism

Many arguments for functionalism depend on the actuality or possibility of systems that have mental states but that are either physically or behaviorally distinct from human beings. These arguments are mainly negative arguments that aim to show that the alternatives to functionalism are unacceptable. For example, behaviorists famously held that psychological states are not internal states at all, whether physical or psychical. But, the argument goes, it is easy to imagine two creatures that are behaviorally indistinguishable and that differ in their mental states. This line of reasoning is one of a family of “perfect actor” or “doppelgänger” arguments, which are common fare in philosophy of mind:

P1. If behaviorism is true, it is not possible for there to be a perfect actor or doppelgänger who behaves just like me but has different mental states or none at all.

P2. But it is possible for there to be a perfect actor or doppelgänger who behaves just like me but has different mental states or none at all.

C. Therefore, behaviorism is not true. (by modus tollens)
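
The inference pattern itself is elementary and can even be checked mechanically. Here is the schema in Lean, a sketch in which P stands in for “behaviorism is true” and Q for “no perfect actor or doppelgänger is possible”:

```lean
-- Modus tollens: from (P → Q) and ¬Q, conclude ¬P.
theorem modus_tollens {P Q : Prop} (h₁ : P → Q) (h₂ : ¬Q) : ¬P :=
  fun hP => h₂ (h₁ hP)
```

All of the work in such arguments is therefore done by the premises, especially P2.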

In a well-known version of this argument, one imagines that there could be “Super-Spartans” who never exhibit pain behavior (such as flinching or saying “ouch”) or even any disposition to produce pain behavior (Putnam 1963).

The most famous arguments for functionalism are responses not to behaviorism but to the mind-brain identity theory. According to the identity theory, “sensations are brain processes” (Smart 1959). If mental state kinds are (identical to) kinds of brain states, then there is a one-to-one relation between mental state kinds and brain state kinds. Everything that has sensation S must have brain state B, and everything that has brain state B must have sensation S. Not only that, but this one-to-one correlation must not be accidental. It must be a law of nature, at least, and perhaps must hold with an even stronger sort of necessity. Put this way, the mind-brain identity theory seems to make a very strong claim, indeed. As Hilary Putnam notes,

the physical-chemical state in question must be a possible state of a mammalian brain, a reptilian brain, a mollusc’s brain (octopuses are mollusca, and certainly feel pain), etc. At the same time, it must not be a possible (physically possible) state of the brain of any physically possible creature that cannot feel pain. Even if such a state can be found, it must be nomologically certain that it will also be a state of the brain of any extraterrestrial life that may be found that will be capable of feeling pain before we can even entertain the supposition that it may be pain. (Putnam 1967: 436)

The obvious implication is that the mind-brain identity theory is false. Other mammals, reptiles, and mollusks can experience pain, but they do not have brains like ours. It seems to follow that there is not a one-to-one relation between sensations and brain processes, but rather a one-to-many relation. Mental states, then, are not uniquely realized (as the identity theory requires); they are instead multiply realized.

And even if (by chance) it turns out that mammals, reptiles, and mollusks all have relevantly similar brains (so that in fact there is a one-to-one correlation), certainly one can recognize the possibility that there might be terrestrial or extraterrestrial creatures who experience pains but do not have brains like those of human beings. So it is surely not necessary that there is a one-to-one relation between mental state kinds and brain state kinds, but that is exactly what the identity theory would require. This is bad news for the identity theory, but it is good news for functionalism. For functionalism says that what makes something a mental state is what it does, and it is fully compatible with the diverse brains of mammals, reptiles, and mollusks that they all have mental states because their different brains do the same things, that is, they function in the same ways. Functionalism is supported because it is a theory of mind that is compatible with the likely degree of multiple realization of mental states.

Another pair of arguments for functionalism can be called the Optimistic and Pessimistic Arguments. The Optimistic Argument leans on the possibility of building artificial minds: even if no one ever discovers a creature that has mental states but differs from humans in its brain states, surely one could build such a thing. That is, the possibility of artificial intelligence seems to require the truth of something like functionalism. Functionalism views the mind very much as an engineer does: minds are mechanisms, and there is usually more than one way to build a mechanism. The Optimistic Argument, then, is a variation on the multiple realization argument discussed above; but unlike that argument, it does not depend on empirical facts about what creatures the world actually contains.

The Pessimistic Argument claims that the alternatives to functionalism would leave people unable to know about and explain the mental states of one another, or of other creatures. After all, if two creatures function in the same ways, achieve the same results, have isomorphic internal states, and so forth, then what could justify the claim that one has mental states and the other does not? The identity theory says that the justification has to do with what kinds of stuff the creatures are made of—only the one with the right kind of brain counts as having mental states. But this flies in the face of our ordinary practices of understanding, attributing, and explaining mental states. If someone says, “I am in pain,” or “I believe that it is sunny outside,” one does not have to cut the speaker open and find out whether they have a human brain in order to know that they have a pain or a belief. One knows that not only because the speaker produces those noises (as the behaviorist might say), but because the speaker has internal states that function in certain ways. One can test this, as psychologists often do, by running experiments in a laboratory or, as ordinary people do, by asking questions and observing replies. That is, we can find out how the systems function. And if functionalism is correct, that is all we need to know in order to have knowledge of other minds. But if the identity theory is correct, then those methods are at best heuristics, and the observer may yet be wrong: one cannot know for certain that the speaker has pains or beliefs unless one knows what kind of brain the speaker has. Without knowing about brains, we can only infer that others have beliefs on the basis of the behavioral symptoms they exhibit, and we already know (see above, regarding behaviorism and Super-Spartans) that those can lead us astray. But that is absurd, the argument goes: if one really believed it, then (given that in general one does not know what kinds of brains other people have) nobody would be justified in believing anything about the beliefs of other people and creatures.

The trouble with the Optimistic Argument is that it is question-begging. It assumes that one can create artificial thinking things without duplicating the kinds of brain states that human beings have, and that is just what the identity theory denies. The trouble with the Pessimistic Argument is that it seems to exploit a very high standard for knowledge of other minds, namely infallibility or certainty. The objection gets its grip only if the need to infer facts about other minds really would undermine the possibility of knowledge of those minds. But we regularly acquire knowledge by inference or induction, and there is no special reason to think that inferences about minds are more problematic than other inferences.

The multiple realization argument is much more nuanced. Its interpretation is a matter of some dispute. Although there has been increasing resistance to the argument lately, it remains the most influential reason for favoring functionalism over the alternatives. And even if the multiple realization argument is unsound, that result would only undermine one argument for functionalism and not the thesis itself.

The next two sections will consider two objections to functionalism that aim to show that the theory is untenable. Both objections assume that mental states are, as the functionalist insists, multiply realizable. The objections try to show that because of its commitment to multiple realization, functionalism must accept certain unpalatable consequences. The conclusion of each argument is that functionalism is false.

5. Searle’s Chinese Room

John Searle’s “Chinese Room” argument is aimed at computational versions of functionalism, particularly those that specify the relevant functions in terms of inputs and outputs without fixing the internal organization of the processes. Searle stipulates that “Strong AI” is the thesis that an appropriately programmed computer literally has mental states, and that its program thereby constitutes an explanation of its mental states and (following the functionalist inspiration) of human mental states (1980). Searle then describes a scenario in which the system that carries out the program consists of some books and pieces of paper, a pencil, and Searle himself, all inside a room. People on the outside pass questions written in Chinese into the room. And Searle, by following the directions (the program) in the books, is able to produce answers to those questions. But Searle insists that he does not understand Chinese and has no beliefs about the questions and answers. After all, one may suppose with him, he does not even recognize that they are questions and answers written in Chinese, or in any language at all for that matter. And he thinks it would be absurd to say that the room itself understands Chinese or has beliefs about the questions and answers. So, he concludes, the version of functionalism represented by Strong AI must be false. Having the right functions, at least when they are specified only by inputs and outputs, is not sufficient for having mental states.

Searle’s Chinese Room is a version of the “twin” or “doppelgänger” style objections to functionalism, in which some system is specified to be functionally isomorphic to a mental system, e.g., one that understands stories written in Chinese. Since functionalism holds that being is doing, two systems that do the same things (that is, that are functionally the same) should also be the same with respect to their mental states. But if Searle is correct, the system including the books and himself is functionally but not psychologically identical to a person who understands Chinese. And if so, this is incompatible with functionalism.

Searle considers a number of responses to his thought experiment, and offers his own replies. Probably the most serious response is that Searle begs the question when he asserts that the whole collection of stuff in the room including the books and himself, i.e., the whole system, does not understand. The “Systems Reply” holds that if functionalism is true then the whole system does understand Chinese, just as a Chinese speaker does even though it would be wrong to say that her brain or her tongue or some part of her understands Chinese by itself.

On the other hand, Searle’s example does dramatically illustrate a worry that has been expressed by others: even if there are many ways of being a thinking thing, it does not follow that anything goes. In the Chinese Room thought experiment, nothing is specified about the details of the instructions that Searle follows, the program. It is simply stipulated that the program produces the correct outputs appropriate to the inputs. But many philosophers think that it would undermine the claim that the room understands if, for example, the program turned out to be a giant look-up table, a prepared list of all possible questions with the corresponding appropriate answers (Block 1978). The giant look-up table seems like too “dumb” a way to implement the system to count as understanding. So it is not unreasonable to say that Searle has shown that input-output functionalism cannot be the whole story about mental states. Still, that is a much more modest conclusion than the one Searle aimed for.
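
The worry can be made concrete with a toy example. The two Python functions below (a hypothetical domain, not Block’s own example) have exactly the same input-output behavior, yet one computes its answers while the other merely retrieves them from a prepared list:

```python
# A sketch of the "giant look-up table" worry: identical input-output
# behavior, very different internal organization. (Toy domain, chosen
# only for illustration.)

def answer_by_computation(question: str) -> str:
    """Structured processing: actually parse the question and add."""
    left, right = question.split("+")
    return str(int(left) + int(right))


# Precompute every question-answer pair in the finite toy domain.
LOOKUP_TABLE = {f"{a}+{b}": str(a + b) for a in range(10) for b in range(10)}


def answer_by_table(question: str) -> str:
    """'Dumb' realization: no arithmetic happens anywhere inside."""
    return LOOKUP_TABLE[question]


# The two systems are indistinguishable by inputs and outputs alone:
assert all(answer_by_computation(q) == answer_by_table(q) for q in LOOKUP_TABLE)
```

If functions are specified only by inputs and outputs, these two systems count as functionally identical; the intuition that the look-up table does not understand arithmetic is just the intuition that Block’s objection trades on.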

6. Zombies

Searle’s Chinese Room objection focuses on contentful mental states like belief and understanding, what are generally called intentional states. Some philosophers grant that functionalism is a good theory of intentional states but conclude that it nevertheless fails because it cannot explain other sorts of mental states—in particular, they say, it cannot explain sensations and other conscious mental states.

Putting the point in terms of Searle’s Chinese Room: the whole system might, in some sense, understand Chinese or produce responses that are about the questions; but, in Thomas Nagel’s famous phrase, there is nothing that “it is like” to be the Chinese Room. The whole system does not enjoy what it is doing, it does not experience sensations or emotions, and it does not feel pains or pleasures. But Searle himself does have experiences and sensations—he is a conscious being. So, the reasoning goes, even if functionalism works for intentional states, it does not work for consciousness.

Early versions of this concern were discussed under the name “absent qualia.” But the current fashion is to cast the discussion in terms of twins or doppelgängers called zombies. (This terminology was introduced by Robert Kirk (1974), but has recently, for lack of a better expression, taken on a life of its own.) The general idea is that there might be two creatures that are physically or functionally identical but that differ in their mental states in a particularly dramatic way: one has normal conscious mental states, and the other has none at all. The second twin is the philosophical “zombie.”

The logical structure of the zombie argument is just the same as with the other twin and doppelgänger arguments, like the Super-Spartans discussed above:

P1*. If functionalism is true, it is not possible for me to have a zombie twin, i.e., a doppelgänger who functions just like me but has no mental states.

P2*. But it is possible for me to have a zombie twin.

C*. Therefore, functionalism is not true. (by modus tollens)

There are several differences between the premises of the zombie argument and those of the earlier argument against behaviorism. First, while most versions of functionalism entail P1*, it is not obvious that all must. Fred Dretske, for example, endorses a version of functionalism that rejects P1* (1995). But more crucially, the justification for P2* is far less clear than that for P2. P2 makes a very weak claim, because mere behavior—movement, rather than what some philosophers would call action—is relatively easy to generate. This much has been commonplace among those who theorize about the mind at least as far back as Descartes, who was familiar with the mechanical statues in European water gardens. P2* makes a potentially much stronger claim. It seems to suggest that the zombie could be not just behaviorally identical but functionally identical on any notion of function and to any degree of specificity one might want. But this is quite controversial. In the most controversial form, one might suppose that “functional” identity could be arbitrarily fine-grained so as to include complete physical identity. In this variation, the twins would be physically identical creatures, one of whom has conscious mental states and the other of whom lacks consciousness altogether.

The challenge for the functionalist, as Ned Block has argued, is to find a notion of function and a corresponding version of functionalism that solve “the problem of inputs and outputs” (Block 1978). Functionalism must be specified in terms of functions (inputs and outputs) that are sufficiently general to allow for multiple realization of mental states, but sufficiently specific to avoid attributing mental states to just about everything. This is tricky. A version of functionalism that is too specific will rule out certain genuinely psychological systems, and thereby prove to be overly “chauvinistic.” A version of functionalism that is too general will attribute mental states to all sorts of things that one doesn’t ordinarily take to have them, and thereby prove to be overly “liberal.” Is there any non-arbitrary cut-off between liberalism and chauvinism? Is there any way to navigate between this Scylla and Charybdis? This is the big unanswered question for functionalists.

7. Stronger and Weaker Forms of Functionalism

At this point two clarifications are in order. These clarifications reveal some ways in which functionalism comes in stronger or weaker versions.

The first clarification pertains to the varieties of functionalism. As noted in Section 2, there are many versions of functionalism. Here the focus has been on metaphysical versions. But the variations described earlier (metaphysical, intentional, semantic, explanatory, methodological, and theoretical) represent only one dimension along which functionalisms differ. Functionalist theories can also be distinguished according to which mental phenomena they are directed toward. The standard way of classifying mental states is as intentional (such as beliefs and desires) or conscious or qualitative (such as sensations and feelings). Of course some philosophers and psychologists believe that all mental states turn out to be of one sort. (Most commonly they hold that all kinds of mental states are intentional states of one sort or another.) But that need not be a factor here, for the classification is only for expository purposes. Specifically, one can hold that functionalism is a theory of intentional states, of conscious states, or of both. The strongest claim would be that functionalism applies to all mental states. William Lycan (1987) seems to hold this view. Weaker versions of functionalism apply to only one sort of mental state or the other. For example, Jaegwon Kim (2005) appears to hold that something like functionalism applies to intentional states but not to qualitative states.

The second clarification pertains to the scope or completeness of a functionalist theory. Functionalism claims that the nature of mental states is determined by what they do, by how they function. So a belief that it is sunny, for example, might be constituted in part by its relations to certain other beliefs (such as the belief that the sun is a star), desires (such as the desire to be on a beach), inputs (such as seeing the sun), and outputs (such as putting on sunglasses). Now consider the other beliefs and desires that, in the above example, partially constitute the nature of the belief that it is sunny. In the strongest versions of functionalism, those beliefs and desires are themselves functional states, defined by their relations to inputs, outputs, and other mental states that are in turn functionally constituted; and so on. In this case, every mental state is completely or purely constituted by its relations to other things, without remainder. Nothing can exist as a mental state on its own, but only in relation to the others. In contrast, weaker versions of functionalism could allow some mental states to be basic and non-functional. For example, rather than holding that functionalism applies to all mental states, one could hope to explain intentional states functionally while allowing conscious mental states to be basic. Then the belief that it is sunny might be constituted, in part, by its relations to certain sensations of warmth or yellowness, but those sensations would not themselves be functional states. Generally speaking, philosophers who do not specify otherwise assume that functionalism should be the strong or pure variety. Impure or weak versions of functionalism—what Georges Rey calls “anchored” versions—do not succeed in explaining the mental in terms of purely non-mental ingredients. So whatever other value they might have, they fall short as metaphysical theories of the nature of mental states. Some would deny that weak theories should count as versions of functionalism at all.
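
The circular interdefinition that pure functionalism requires can be pictured as a network. In the sketch below (all state names are invented for illustration), each state is specified entirely by its relations to inputs, outputs, and other states, so no state can be characterized on its own:

```python
# A sketch of "pure" functionalism as a relational network (all names
# hypothetical). Each state is exhausted by its place in the web of
# relations; only inputs and outputs tie the network to the world.
FUNCTIONAL_ROLES = {
    "belief_it_is_sunny": {
        "typical_causes":  ["input_seeing_the_sun", "belief_the_sun_is_a_star"],
        "typical_effects": ["output_putting_on_sunglasses",
                            "desire_to_be_on_a_beach"],
    },
    "desire_to_be_on_a_beach": {
        "typical_causes":  ["belief_it_is_sunny"],
        "typical_effects": ["output_driving_to_the_coast"],
    },
    "belief_the_sun_is_a_star": {
        "typical_causes":  ["input_reading_astronomy"],
        "typical_effects": ["belief_it_is_sunny"],
    },
}

# Note the circularity: belief_it_is_sunny is defined partly by its relation
# to desire_to_be_on_a_beach, which is in turn defined partly by its relation
# back to belief_it_is_sunny. In the strongest versions of functionalism,
# every mental state is like this: a role in the network, "without remainder."
```

A weak or “anchored” version would instead let some nodes, say the sensations, stand outside the network as unanalyzed primitives.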

8. Conclusion

There are many more variations among functionalist theories than can be discussed here, but the above clarifications are sufficient to give a flavor of the various nuances. It is safe to say that, in one version or another, functionalism remains the most widely accepted theory of the nature of mental states among contemporary theorists. Nevertheless, recently, perhaps in view of the difficulties of working out the details of functionalist theories, some philosophers have been inclined to offer supervenience theories of mental states as alternatives to functionalism. But as Jaegwon Kim correctly pointed out, supervenience simply allows us to pose the question about the nature of mental states; it is not an answer. The question is: why do mental states supervene on the physical states of the creatures that have them, or at least on the physical states of the world as a whole? Functionalism provides one possible answer: mental states supervene on physical states because mental states are functional states, i.e., they are realized by physical states. Much remains to be said about such a theory, and to many philosophers the arguments for it do not seem as decisive as they did when initially offered. But there is no denying that it is an intriguing and potentially powerful theory.

9. References and Further Reading

a. References

  • Block, N. (ed.) 1980a. Readings in Philosophy of Psychology, Volume One. Cambridge, MA: Harvard University Press.
  • Block, N. (ed.) 1980b. Readings in Philosophy of Psychology, Volume Two. Cambridge, MA: Harvard University Press.
  • Block, N. and J. Fodor. 1972. What Psychological States Are Not. Philosophical Review 81: 159-181.
  • Chalmers, D. 1995. Facing up to the problem of consciousness. Journal of Consciousness Studies, 2, 3: 200-219.
  • Cummins, R. 1975. Functional analysis. The Journal of Philosophy LXXII, 20: 741-765.
  • Dretske, F. 1995. Naturalizing the Mind. Cambridge, MA: The MIT Press.
  • Fodor, J. 1968. Psychological Explanation. New York: Random House.
  • Fodor, J. 1974. Special sciences, or the disunity of science as a working hypothesis. Synthese 28: 97-115. Reprinted in Block 1980a.
  • Kim, J. 2005. Physicalism, or Something Near Enough. Princeton: Princeton University Press.
  • Kirk, R. 1974. Zombies v. Materialists. Proceedings of the Aristotelian Society, Supplementary Volume 48: 135-152.
  • Lewis, D. 1970. How to Define Theoretical Terms. Journal of Philosophy 67: 427-446.
  • Lewis, D. 1972. Psychophysical and Theoretical Identifications. The Australasian Journal of Philosophy 50: 249-258.
  • Lewis, D. 1980. Mad Pain and Martian Pain. In Block (ed.) 1980a.
  • Lycan, W. 1981. Form, Function, and Feel. Journal of Philosophy 78: 24-50.
  • Lycan, W. 1987. Consciousness. Cambridge, MA: The MIT Press.
  • Millikan, R. 1989. In Defense of Proper Functions. Philosophy of Science 56: 288-302.
  • Polger, T. 2000. Zombies Explained. In Dennett’s Philosophy: A Comprehensive Assessment, D. Ross, A. Brook, and D. Thompson (Eds). Cambridge, MA: The MIT Press.
  • Putnam, H. 1960. Minds and Machines. In S. Hook (ed.), Dimensions of Mind (New York: New York University Press). Reprinted in Putnam (1975c).
  • Putnam, H. 1963. Brains and Behavior. In R. J. Butler (ed.), Analytical Philosophy, Second Series (Oxford: Basil Blackwell): 211-235. Reprinted in Putnam (1975c).
  • Putnam, H. 1975c. Mind, Language and Reality: Philosophical Papers, Volume 2. Cambridge: Cambridge University Press.
  • Richardson, R. 1979. Functionalism and Reductionism. Philosophy of Science 46: 533-558.
  • Richardson, R. 1982. How not to reduce a functional psychology. Philosophy of Science, 49, 1: 125-137.
  • Searle, J. 1980. Minds, Brains, and Programs. The Behavioral and Brain Sciences 3, 3: 417-424.
  • Shapiro, L. 2000. Multiple Realizations, The Journal of Philosophy, 97, 635-654.
  • Shapiro, L. 2004. The Mind Incarnate, Cambridge, MA: The MIT Press.
  • Shoemaker, S. 1975. Functionalism and Qualia. Philosophical Studies 27: 291-315. Reprinted in Block (1980a).
  • Shoemaker, S. 1984. Identity, Cause, and Mind. New York: Cambridge University Press.
  • Smart J. J. C. 1959. Sensations and Brain Processes. Philosophical Review, LXVIII: 141-156.
  • Sober, E. 1985. Panglossian Functionalism and the Philosophy of Mind. Synthese 64: 165-193.
  • Wright, L. 1973. Functions. Philosophical Review 82, 2: 139-168.

b. Suggested Reading

  • Block, N. 1978. Troubles with functionalism. C. W. Savage (ed.), Minnesota Studies in the Philosophy of Science, Vol. IX (Minneapolis, MN: University of Minnesota Press). Reprinted in Block (1980a).
  • Block, N. 1980c. Introduction: What is functionalism? In Block (1980a).
  • Kim, J. 1996. Philosophy of Mind. Boulder, CO: Westview.
  • Polger, T. 2004. Natural Minds. Cambridge, MA: The MIT Press.
  • Putnam, H. 1967. Psychological Predicates. In W. H. Capitan and D. D. Merrill (eds.), Art, Mind, and Religion. Pittsburgh: University of Pittsburgh Press. Reprinted in Block (1980a) and elsewhere as “The Nature of Mental States.”
  • Rey, G. 1997. Contemporary Philosophy of Mind. Boston: Blackwell Publishers.
  • Shoemaker, S. 1981. Some Varieties of Functionalism. Philosophical Topics 12, 1: 83-118. Reprinted in Shoemaker (1984).
  • Van Gulick, R. 1983. Functionalism as a Theory of Mind. Philosophy Research Archives: 185-204.

Author Information

Thomas W. Polger
Email: thomas.polger@uc.edu
University of Cincinnati
U. S. A.