Category Archives: Metaphysics & Epistemology


Reductionists are those who take one theory or phenomenon to be reducible to some other theory or phenomenon. For example, a reductionist regarding mathematics might take any given mathematical theory to be reducible to logic or set theory. Or, a reductionist about biological entities like cells might take such entities to be reducible to collections of physico-chemical entities like atoms and molecules. The type of reductionism that is currently of most interest in metaphysics and philosophy of mind involves the claim that all sciences are reducible to physics. This is usually taken to entail that all phenomena (including mental phenomena like consciousness) are identical to physical phenomena. The bulk of this article will discuss this latter understanding of reductionism.

In the twentieth century, most philosophers considered the question of the reduction of theories to be prior to the question of the reduction of entities or phenomena. Reduction was primarily understood to be a way to unify the sciences. The first section below will discuss the three traditional ways in which philosophers have understood what it means for one theory to be reducible to another. The discussion will begin historically with the motivations for and understanding of reduction to be found in the logical positivists, particularly Rudolf Carnap and Otto Neurath, and continue through more recent models of inter-theoretic reduction. The second section will examine versions of reductionism, as well as the most general and currently influential argument against reductionism, the argument from multiple realization. Although many philosophers view this argument as compelling, there are several responses available to the reductionist that will be considered. The final section will discuss two ways of reducing phenomena rather than theories. With the decline of logical positivism and the rise of scientific realism, philosophers’ interest in reduction has shifted from the unity of theories to the unity of entities. Although sometimes reduction of one class of entities to another is understood as involving the identification of the reduced entities with the reducing entities, there are times when one is justified in understanding reduction instead as the elimination of the reduced entities in favor of the reducing entities. Indeed, it is a central question in the philosophy of mind whether the correct way to view psychophysical reductions is as an identification of mental entities with physical entities, or as an elimination of mental phenomena altogether.

Table of Contents

  1. Three Models of Theoretical Reduction
    1. Reduction as Translation
    2. Reduction as Derivation
    3. Reduction as Explanation
  2. Reductionism: For and Against
    1. Versions of Reductionism
    2. The Argument from Multiple Realization
    3. Replies
  3. Reduction of Entities: Identification vs. Elimination
  4. References and Further Reading

1. Three Models of Theoretical Reduction

In what follows, the theory to be reduced will always be referred to as the target theory (T). The theory to which one is attempting to reduce the target theory will be known as the base theory (B).

There are three main ways in which reduction has been understood since the 1920s. These may generally be stated as follows:

  1. Theory T reduces to theory B when all of the truths of T (including the laws) have been translated into the language of B.
  2. Theory T reduces to theory B when all of the laws of T have been derived from those of B.
  3. Theory T reduces to theory B when all of the observations explained by T are also explained by B.

The general goal of a theoretical reduction is to promote the unity of science. All of these models provide some sense in which science may become more unified. Sciences may become unified by being expressed in the same language. This allows one to see that there is only one language that is required to express all truths in the theories. Sciences may also become unified when the laws of one theory are shown to be derivable from those of another theory. This allows one to see that there is only one basic set of principles that is required to account for the other truths in the theories. Finally, sciences may become unified when the observations explained by one theory are shown to be also explainable by another theory. This allows one to see that only one of the theories is really necessary to explain the class of phenomena earlier thought to require the resources of two theories.

The first section will examine three conceptually distinct models of reduction: the translation model, the derivation model, and the explanation model. These models need not compete with one another. As will be seen in the following sections, depending on how one understands translation, derivation, and explanation, these models may complement each other. Historically, the translation model is associated with the early logical positivists Carnap and Neurath, the derivation model with the later logical empiricists Carl Hempel and Ernest Nagel, and the explanation model with John Kemeny and Paul Oppenheim.

a. Reduction as Translation

Carnap describes the translation model of reduction in the following way:

An object (or concept) is said to be reducible to one or more objects if all statements about it can be transformed into statements about these other objects. (1928/1967, 6)

In order to see why one should be interested in achieving reductions in this sense, one must first clarify what it was that the positivists, in particular, Carnap and Neurath, wanted out of reductionism.

In The Logical Structure of the World, Carnap tries to reduce all language to phenomenalist language, i.e. that of immediate experience (1928/1967). Shortly after this, influenced largely by his discussions with Neurath, Carnap changed his position regarding the sort of language into which all meaningful sentences should be translated. In his short monograph, The Unity of Science, he generally speaks of reducing all statements to a physical language, but his official position is that it does not matter which language all statements are translated into as long as they are all translated into one common, universal language (1963, 51). Carnap thought that physical language, understood as the language of objects in space and time (rather than the language of physics per se), was one salient contender for this universal language (1934, 52).

So, Carnap’s main reductionist thesis is that:

… science is a unity, that all empirical statements can be expressed in a single language, all states of affairs are of one kind and are known by the same method. (1934, 32)

One may now ask two questions. First, why should one want to translate all statements into one common language? And second, what relationship does this translation have with the program of unifying the sciences?

For Carnap and Neurath, one common interest in unification stemmed from a frustration with the methods of philosophy and the social sciences. They argued that these disciplines too often rely on subjective methods of verification such as intuition and, in the social sciences, empathy and verstehen. The positivists found these methods problematic and in need of replacement with the methods used in the physical sciences. Methods that relied on data and statements referring to the subjective states of individual observers could not be verified intersubjectively and thus could not be used to make intersubjectively testable predictions. For example, since the methodological use of empathy was rampant in actual practice, the statements (and methods) used by social scientists needed to be reinterpreted within an intersubjectively understandable framework. In his “Sociology and Physicalism,” Neurath argues that this is possible:

If someone says that he requires this experience of “organic perceptions” in order to have empathy with another person, his statement is unobjectionable … That is to say, one may speak of “empathy” in the physicalistic language if one means no more by it than that one draws inferences about physical events in other persons on the basis of formulations concerning organic changes in one’s own body … When we analyze the concepts of “understanding” and “empathy” more closely, everything in them that is usable in a physicalistic way proves to be a statement about order, exactly as in all sciences. (1931/2/1959, 298)

The idea, which Carnap also defended, was that all sciences, insofar as their statements were meaningful, could be translated into a common language. Once this translation was carried out, scientists in all disciplines could make predictions that were verifiable intersubjectively. So, following this strand of reasoning, the task of unifying (i.e. reducing) the sciences was important so that all sciences could be assimilated to a language in which it was possible to make intersubjectively understandable explanations and predictions - one of the central goals of developing scientific theories in the first place.

So far, nothing has been said that would provide motivation for reduction of all statements to the language of one science. One might grant that it is important that all theories be formulated in a language amenable to intersubjective understanding; however, why must all of the sciences be formulated in the same language? Why could the physical sciences not be formulated in one intersubjectively understandable language and psychology in another and biology in yet another? Why is reduction in the sense of translation of all statements to the one common language something anyone should care about? For these early reductionists, the main motivation was practical.

The Vienna Circle, a group of philosophers and scientists of which Carnap and Neurath were core members, met and formulated their ideas at a time when Europe had just survived one war and was about to embark on another. At this time, particularly with anti-Semitism and fascism intensifying, scientists were being forced to disperse. Previously, Vienna had been a fertile center of scientific research, but political developments were making it necessary for many of the prominent scientists to scatter to other areas in Europe, the Soviet Union, and the United States. The concern was that this geographical separation would open a rift in scientific dialogue, and that scientists both within and across disciplines would have a hard time sharing their ideas. As Jordi Cat and Nancy Cartwright have recently argued, for Neurath, this interdisciplinary sharing of ideas was crucial for several reasons (Cartwright et al. 1995; Cartwright et al. 1996). This discussion will focus on three.

First, it is common to look at scientists as engaged in the task of developing a complete account of the world. What is needed is a theory (or group of theories) that will be able to account for all phenomena. In other words, for each event that has occurred, this account should be able to give a complete explanation of it. And for each event that is to occur, the theory should be able to predict that it will occur. Neurath noted that as science developed it became more and more fragmented; as a result of ever-increasing specialization, it was impossible for any one practitioner to be versed in what was going on in all of the separate subdisciplines. This allowed for the possibility that large gaps between theories might develop, leaving events that no research program was engaged in trying to explain, thus preventing the sciences from giving a complete picture. Relatedly, Neurath was also concerned that the inability of any one researcher to see the big picture of the sciences would make room for contradictions to appear between the explanations different disciplines gave of one set of phenomena (1983, 140). If the sciences were unified in such a way that allowed scientists to see the big picture (outside of their own subdisciplines), this would begin to remedy the issues of both (a) gaps and (b) contradictions between different theories.

Another one of Neurath’s motivations for unifying the sciences was to eliminate redundancy between disciplines. He argues:

… the special sciences themselves exhibit in various ways the need for such a unification. For example, the different psychological theories employ so many different terms and phrases that it becomes difficult to know whether they are dealing with the same subject or not… One of the most important aims of the Unity of Science movement is therefore concerned with the unification of scientific language. Distinct terms occur in different disciplines which nevertheless may have the same function and much fruitless controversy may arise in trying to find a distinction between them… A large collection of terms have been gathered by the various sciences during the centuries, and it is necessary to examine this collection from time to time, for terms should not be multiplied beyond necessity. (1983, 172-3)

Two related ideas are motivating Neurath’s desire to eradicate redundancy between theories. The first is clearly expressed in the last line above. Neurath would like to minimize the number of terms used in the theory, to encourage theoretical simplicity. One should not introduce more language into our theories than is necessary, and so it is important to decide whether one can do without some of the terms used by a particular theory. This will make science as a whole simpler (and as a result, more digestible). One obvious way of eradicating such linguistic redundancy would be by translating all theories into a common language and this is precisely what Neurath proposes.

A second point that Neurath raises is a desire to see the different sciences as all describing a common subject matter. To use one of his examples, one might ask if the terms ‘stimulus’ and ‘response’ in biology are just different words for the same phenomena discussed in the physical sciences using the terms ‘cause’ and ‘effect’. Here, Neurath seems to be relying on an implicit metaphysical conviction that all of the sciences describe one world and not disparate spheres of reality. So, while Neurath professed an aversion toward asking metaphysical questions (and using metaphysical terminology like ‘world’ and ‘reality’), there does seem to be an implicit unified metaphysics underlying his desire to see scientific language unified.

It is important to note one last aspect of Neurath’s interest in reduction of theories to a common language. This motivation is related to Neurath’s, and later Carnap’s, adoption of a coherentist picture of truth and justification. According to Neurath, statements are not justified in terms of their matching some external reality. This would require presupposing some kind of metaphysical picture of reality, which is something that Neurath would have rejected. Instead, statements are only justified insofar as they are confirmed by, or cohere with, other statements. To explain his view, Neurath appealed to his now famous metaphor of sailors having to rebuild their ship while at sea:

Our actual situation is as if we were on board a ship on an open sea and were required to change various parts of the ship during the voyage. We cannot find an absolute immutable basis for science; and our various discussions can only determine whether scientific statements are accepted by a more or less determinate number of scientists and other men. New ideas may be compared with those historically accepted by the sciences, but not with an unalterable standard of truth. (1983, 181)

Again, there is no world that one can compare statements to in order to confirm them and see that they are justified. The only basis for justification is other “historically accepted” statements. Once one understands this, it is easy to see why the translational unification of the sciences would be important. Communication of scientists across disciplines provides further confirmation of their theories. The better a theory coheres with other theories and the more theories with which it coheres, the more justified it will be. Thus, one should look for a common language so that such communication and connections can be established across disciplines.

It was previously stated that Carnap and Neurath wanted all theories to be translated into a language free of subjective terms, one that could be used to make testable predictions. It was also important that this common language could allow for communication across all disciplines. This would encourage (i) the filling in of gaps and elimination of contradictions between theories, (ii) the elimination of redundancy and enhancement of simplicity, and (iii) the possibility of a stronger justification for theories. Neurath spent the last years of his life beginning what was to be the unfinished project of the International Encyclopedia of Unified Science. This was to be a series of volumes in a common, physicalist language that could be used to encourage interaction between scientists. The first set of volumes would discuss issues in the general methodology of science, while the later volumes would include up-to-date discussions of research in all different areas of the sciences. Reading these would give researchers a better picture of science as a whole and promote the three virtues (i-iii) just mentioned.

It is important to emphasize Neurath and Carnap’s motivations for their reductionist project. This will allow one to consider the benefits of reductionism and what this perspective does not entail. Examining these motivations, one can see that there is very little that is required of a common language of unified science. Carnap says that, “[i]n order to be a language for the whole of Science, the physical language needs to be not only intersubjective but also universal” (1934, 67). So it must be the case that for a language to be the common language, it must not include any subjective terms, such as those referring to the intrinsic qualities of one’s own experiences. It must also be possible to translate all other statements from scientific theories into the common language. In addition, the universal language should also be nonredundant.

These minimal requirements on a universal language of science do not require that the language into which all theories are capable of being translated be the language of physics. Unlike the physicalist reductionism that is the orthodoxy of today, the thesis of reductionism advocated by Carnap and Neurath did not require that all sciences reduce to physics.

It is worth emphasizing this feature of the positivists’ conception of reductionism because it allows one to recognize the independence of the original movement of reductionism from a need to see physics as the science to which all others reduce. Sometimes reductionism is dismissed as a theory of the world that is overly conservative, not making room for a plurality of sciences and resultant methodologies. However, this is simply not the case. What the positivists were interested in was seeing scientists of different disciplines cooperate in a way that would expedite and expand research, and better confirm theories. An attempt to translate all sciences into a common language would help achieve the goal of the unification of science. Translation of all theories into the language of physics would be preferable to a translation of all theories to phenomenalistic language since the latter generally fails to be intersubjectively understandable. However, any intersubjective language that was sufficiently universal in scope would serve their purposes.

b. Reduction as Derivation

After Carnap and Neurath, reduction as translation of terms to a common language was still discussed, but reduction also came to be understood in the two other ways mentioned above – as the explanation of all observations in terms of one base theory, and as the derivation of all theories from one base theory. For example, Carl Hempel saw the reduction of a theory as involving two tasks. First, one reduces all of the terms of that theory, which involves translation into a base language. As Hempel notes, “the definitions in question could hardly be expected to be analytic… but … may be understood in a less stringent sense, which does not require that the definiens have the same meaning, or intension, as the definiendum, but only that it have the same extension or application” (1966, 103). Then, one reduces the laws of the theory into those of a base theory by derivation (1966, 104).

The best known model of reduction as derivation is found in Ernest Nagel’s The Structure of Science. According to Nagel, a reduction is effected when the laws of the target science are shown to be logical consequences of the theoretical assumptions of a base science (1961, 345-358). Once this is accomplished, one can see that there is only one basic set of principles that is required to account for truths in both theories. For Nagel, one goal of reduction is to move science closer to the ideal of “a comprehensive theory which will integrate all domains of natural science in terms of a common set of principles” (1961, 336).

Unlike Hempel, Nagel did not think that all reductions would first require a translation of terms. He distinguished between homogeneous reductions and heterogeneous reductions, and only in the latter case does the target science include terms that are not already included in the base science (1961, 342). However, he does concede that in the cases of interest to him, the target science will contain terms that do not occur in the theory of the base discipline, so the reduction will be heterogeneous. This does not necessarily mean that one must translate terms from the target science into the language of the base science. For example, one interested in reducing psychology to physics will notice that psychological theories contain terms like ‘belief’, ‘desire’, and ‘pain’, which do not occur in the base, physical theory. In these cases, assumptions must be added to the laws of the base science (physics) stating relations between these (psychological) terms and the terms already present in the base science. These assumptions, often called ‘bridge laws’, will then allow one to derive the laws and theorems of the target science from the theory of the base discipline. They need not be thought of as providing translations, as will be explained shortly (Nagel 1961, 351-354).

Abstractly, one may consider the derivations that constitute Nagelian reductions as taking the following form. Where ‘B1’ and ‘B2’ are terms in the language of the base science and ‘T1’ and ‘T2’ are terms in the language of the science that is the target of the reduction,

The occurrence of a B1 causes the occurrence of a B2 (a law in the base science).

If something is a B1, then it is a T1. (bridge law)

If something is a B2, then it is a T2. (bridge law)


Therefore, the occurrence of a T1 causes the occurrence of a T2 (a law in the target science) (Hempel 1966, 105).

The conclusion here is the law in the target science that one wanted to reduce to the laws of the base science. Some caveats are in order regarding this way of representing Nagelian reductions. First, there is nothing in the account requiring the laws to involve causal language. Causal terminology is used in the above example merely for ease of exposition. Moreover, the precise nature of the bridge laws required for a reduction is controversial. Philosophers have differed in what they regard as necessary for something to be the kind of bridge law to facilitate a legitimate reduction.

As a matter of logic, all that is required for a successful derivation are bridge laws that take the form of conditionals. However, the derivation would also be successful if the connectives in the bridge laws were not conditionals but biconditionals or identity statements. In his discussion of reduction, Nagel mentions that these bridge laws could have any one of the following statuses. They could be (1) logical or analytic connections between terms, (2) conventional assumptions created by fiat, or (3) empirical hypotheses (1961, 354). Only when the first case obtains is it usually plausible to say that the reduction is partially constituted by an act of translation. Nagel does not question whether conditional bridge laws are enough to effect legitimate reductions. In the post-positivistic, realist aftermath of Nagel’s book, most philosophers have held that in order for such derivations to help one achieve a more unified science (genuine reductions), the bridge laws may not merely have any of these statuses, but ought to have the strength of identities (see, e.g., Sklar 1967). So, to constitute a genuine reduction, a derivation ought to look something like the following:

The occurrence of a B1 causes the occurrence of a B2 (a law in the base science).

Something’s being a B1 = its being a T1. (bridge law)

Something’s being a B2 = its being a T2. (bridge law)

Therefore, the occurrence of a T1 causes the occurrence of a T2 (a law in the target science).

To illustrate the idea, consider a putative reduction of the theory of thermal conductivity to a theory of electrical conductivity. The Wiedemann-Franz Law is a simple physical law stating that the ratio of the thermal conductivity of a metal to its electrical conductivity is proportional to the temperature. Given this law, which one might try to take as a bridge law, it is possible to systematically derive facts about a metal’s electrical conductivity from facts about its thermal conductivity. From the law one can infer that a metal at a given temperature has a certain electrical conductivity if and only if it has such-and-such a thermal conductivity. If reductions could be carried out simply using conditionals or even biconditionals as bridge laws, then one would have thereby reduced the theory of electrical conductivity to the theory of thermal conductivity (and vice versa). However, some have argued that this would be the wrong result. This case serves as a counterexample to the view that derivations involving bridge laws with the status of biconditionals (or conditionals for that matter) constitute legitimate reductions. As Lawrence Sklar has put it:

Does this law establish the reduction of the theory of heat conduction to the theory of the conduction of electricity? No one has ever maintained that it does. What does explain both the electrical and thermal properties of matter, and the Wiedemann-Franz law as well, is the reduction of the macroscopic theory of matter to the theory of its atomic microscopic constitution. Although the correlation points to a reduction it does not constitute a reduction by itself. (1967, 119)
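For concreteness, the Wiedemann-Franz Law under discussion can be stated compactly. Where κ is a metal’s thermal conductivity, σ its electrical conductivity, and T its absolute temperature:

```latex
\frac{\kappa}{\sigma} = L\,T
```

Here L is the Lorenz number, approximately 2.44 × 10⁻⁸ W Ω K⁻². The law thus yields a systematic biconditional correlation between the two conductivities at a given temperature, which is precisely why it looks like a candidate bridge law, and why Sklar’s point that it correlates without reducing is instructive.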

Jaegwon Kim has also argued that unless the bridge laws have the status of identities they cannot serve as part of genuine reductions:

It is arguably analytic that reduction must simplify; after all, reductions must reduce… On this score bridge laws of the form [Something is a T1 if and only if it is a B1] apparently are wanting in various ways. Since [Something is a T1 if and only if it is a B1] is supposed to be a contingent law, the concepts [T1] and [B1] remain distinct; hence bridge laws yield no conceptual simplification. Further, since we have only a contingent biconditional “iff” connecting properties [T1] and [B1], [T1] and [B1] remain distinct properties and there is no ontological simplification…. If we want ontological simplification out of our reductions, we must somehow find a way of enhancing bridge laws… into identities. (Kim 1998, 96-7)

The view is that only in cases where there are bridge laws with the status of identities do the derivations of laws constitute reductions.

This is a common point made in the philosophy of mind literature and it is due mainly to the aims of these reductionists - they want to solve the mind-body problem, and thus what they are primarily interested in is not the unity of scientific theories (what drove Carnap and Neurath) but rather ontological simplification. This is why they think that bridge laws must have the status of identities. For others working in philosophy of science, there are reasons to think that theoretical reductions are valuable in themselves even if they do not lead to ontological reductions (i.e. identities). These philosophers of science endorse Nagel’s original position that reductions are legitimate even in the case where the bridge laws do not have the status of identities or even biconditionals. So long as one can establish derivations between two theories, one has unified them by establishing inter-connections. For this purpose, it is sufficient, as Nagel thought, that the bridge laws have the status of conditionals (see, for example, Ladyman, Ross, and Spurrett (2007, 49), who reject the suggestion that Nagelian reductions require identities).

Before moving on, it is important to note some refinements that have been made to Nagel’s model of reduction since its original conception. In the 1960s, under the influence of the work of Thomas Kuhn, philosophers of science began to focus more on constructing their theories through detailed study of cases from the history of science. Some suggested that Nagel’s model of reduction was implausible in at least a couple of ways. If one focused on actual examples of reductions from the history of science, like the reduction of physical optics to Maxwell’s electromagnetic theory, or even Nagel’s own example of the reduction of thermodynamics to statistical mechanics, Nagel’s model didn’t quite apply. The most compelling critique of Nagel’s account was made by Kenneth Schaffner (1967). Schaffner pointed to two specific problems with Nagel’s model of reduction.

The first problem with Nagel’s model was that it presupposed the simple deducibility of the target theory from the base theory and bridge laws, whereas in fact, in order for a derivation to be successful, the target theory often had to be modified somewhat. This might be because the target theory said some things that, in light of the base theory, one could now see were false. For example, Schaffner points out that once Maxwell’s theory was developed, one could see that a central law of physical optics, Fresnel’s law of intensity ratios, was not exactly correct (the ratios were off by a small but significant factor owing to the magnetic properties of the transmitting medium). In addition, in order to effect a reduction, target theories are often modified by incorporating certain facts about the range of phenomena to which the theory applies. In the case of physical optics, one must add to the theory the fact that it does not apply to all electromagnetic phenomena, but only those within a certain frequency range. So, in reducing physical optics to electromagnetic theory, what actually gets derived is not the target theory itself, but a slightly corrected version into which certain limiting assumptions are built.

The second worry Schaffner had about Nagel’s model of reduction was that it assumed what he called a ‘conceptual invariance’ between the target and base theories, while he noted that a certain amount of concept evolution always occurs in the process of reduction. Certain concepts in the target theory may turn out to be understood in a new manner or even rejected once one considers the base theory. Schaffner charts the evolution of the concept of ‘gene’ in Mendelian genetics as a result of the reduction of the theory to biochemistry (1967, 143). Thus, the theory that is actually derived from the base theory in the case of an actual reduction may not have all of the same laws or assumptions as the original theory and the derived theory may also contain a different set of concepts from the original target theory.

With these points in mind, Schaffner proposed a revised version of the derivation model of reduction intended to be more faithful to actual reductions in the history of science. According to Schaffner, reduction of a theory T to another theory B involves the formulation of a corrected, reconceptualized analog of the target theory: T*. Bridge laws are formulated linking all terms in T* with terms in the base theory B. Then T* is derived from B and these bridge laws. Indeed, even this model is perhaps not dynamic enough because, as Schaffner himself notes, in reductions the base theory itself is also often modified as it is being considered as a reduction base for T (or T*). So there are likely two new theories developed: T*, the corrected analog of the original target theory, and B*, a modified version of the base theory. Then it is B* that is used to derive T* with the aid of bridge laws.

Although this model is clearly intended as a correction to Nagel’s model, it has much in common with the original, and it is often what is meant when philosophers speak of “Nagelian” reduction. Schaffner’s account agrees with Nagel’s on this important point: inter-theoretic reduction is the derivation of one theory from another, with the aid of bridge laws tying any terms in the derived theory that do not appear in the base theory to terms in that theory. Section 2 will consider the main reason that many philosophers reject reductionism: they think that such bridge laws are impossible to find. But first, one other influential model of reduction will be considered.

c. Reduction as Explanation

There is one last model of reduction that was very influential in the past century. This explanatory model of reduction is historically associated with John Kemeny and Paul Oppenheim and is defended in their article, “On Reduction” (1956). The definition of ‘reduction’ that Kemeny and Oppenheim defend is as follows:

A theory T is reduced to another theory B relative to a set of observational data O if:

(1) The vocabulary of T contains terms not in the vocabulary of B.

(2) Any part of O explainable by means of T is explainable by B.

(3) B is at least as well systematized as T (paraphrased from their 1956, 13).

The set of observational data O is understood as relativized to that which requires explanation at the particular moment the reduction is attempted.

There are several elements of this definition that need explanation, particularly the notion of systematization that Kemeny and Oppenheim employ. The systematization of a theory is a measure of how well any complexity in the theory is compensated for by the theory’s additional strength in explaining and predicting observations. It is then clear why (3) is needed in an account of theory reduction. If it were not required, one could simply introduce the observations of T into the base theory, creating an ad hoc conjunction of the base theory and those observations, and thereby effecting an ad hoc reduction. Of course, this is not how reductions work. Instead, the base theory is expected to have the virtues typical of scientific theories, and to be at least as well systematized as the target.

This aspect of reduction is crucial to Kemeny and Oppenheim’s view regarding the motivations for reduction within science. They say that “the role of a theory is not to give us more facts but to organize facts into a practically manageable system” (1956, 11). The goal of a reduction is to streamline our overall scientific picture of the world and cast out theories whose observational domain can be just as systematically covered by a more encompassing theory. Thus, it is easy to see how the results of a Kemeny/Oppenheim reduction would serve the stated aims of their predecessors in the unified science movement, Carnap and Neurath. They state the main motivation for reduction in the following way:

Anything we want to say about actual observations can be said without theoretical terms, but their introduction allows a much more highly systematized treatment of our total knowledge. Nevertheless, since theoretical terms are in a sense a luxury, we want to know if we can get along without some of them. It is, then, of considerable interest to know that a set of theoretical terms is superfluous since we can replace the theories using these by others in which they do not occur, without sacrificing the degree of systematization achieved by science to this day. (1956, 12)

Reduction helps one eliminate those terms and theories that are explanatorily superfluous. So, a direct justification for pursuing reduction in science is to achieve a greater level of theoretical parsimony.

This eliminative aspect of the Kemeny and Oppenheim proposal need not be taken up by all philosophers who endorse the general idea of the explanatory model of reduction. Many later philosophers who built on this approach rejected the eliminative aspect of the model while retaining the idea that reductions essentially involve showing that all of the observations explained by a reduced theory can also be explained by the base theory. For example, in Paul Oppenheim and Hilary Putnam’s paper “Unity of Science as a Working Hypothesis,” this model of reduction is employed in the context of a larger metaphysical scheme that is not eliminative. The phrase ‘reduction of theories’ may seem to suggest that one is reducing the number of theories by getting rid of some, but this is not essential to the model, nor is it obviously desirable. This issue of elimination versus retention of reduced theories (or entities) will be explored more fully in the last section of the present entry.

One familiar with the metaphysics and philosophy of mind literature will notice that the Kemeny/Oppenheim model is not often discussed when philosophers are concerned with reductionism. One reason seems to be the model’s reliance on a distinction between theory and observation, a distinction that has been called into question in post-positivist philosophy of science owing to the purported theory-ladenness of all observation. It is worth asking, however, whether the spirit of the Kemeny/Oppenheim model really requires maintaining this discredited distinction. A version of the view could likely be formulated that replaces the notion of explaining observations with the broader notion of explaining phenomena.

John Bickle’s recent work, defending what he calls a ‘ruthless reductionism’, appeals to a notion of reduction that bears many similarities to the Kemeny/Oppenheim account without relying on a strict theory/observation dichotomy (Bickle 2006, 429). Considering the case of the reduction of psychology to neuroscience, Bickle describes reduction as involving the following simple practice: intervene neurally, and track behavioral effects (2006, 425). Bickle’s view is that in practice, reductions are accomplished when an experimenter finds a successful way of intervening at the chemical or cellular level, to cause a change in behavior that manifests what one would ordinarily recognize as cognitive behavior. He then argues:

When this strategy is successful, the cellular or molecular events in specific neurons into which the experimenters have intervened… directly explain the behavioral data. These explanations set aside intervening explanations, including the psychological, the cognitive/information processing, even the cognitive-neuroscientific… These successes amount to reductions of mind to molecular pathways in neurons… (2006, 426)

For Bickle, as for Kemeny and Oppenheim, reductions work when we find a theory (in this case a neural theory describing molecular or cellular mechanisms) that can explain the data of another theory (in this case, some aspect of psychology).

One might wonder about the relationship between the Kemeny/Oppenheim and derivation models of reduction. It is not obvious that the two accounts are in competition. Indeed, Schaffner (1967) argued that his own version of the derivation model allowed it to subsume the Kemeny/Oppenheim model in certain cases. Recall that according to Schaffner, Nagel’s derivation model must be augmented to accommodate the fact that in real cases of theory reduction, what actually gets derived is not the original version of the target theory T, but instead a corrected analog of the original theory, T*. Schaffner notes that in some cases to facilitate a derivation, the original theory will have to be corrected so much that the analog only very remotely resembles T. In these cases, what occurs is something very much like a Kemeny/Oppenheim reduction: an initial theory T is replaced by a distinct theory T* which is able to play an improved explanatory role.

2. Reductionism: For and Against

It is now time to examine the prospects for reductionism. Is it plausible to think that the various special sciences could be reduced to physics in any of these senses? The term ‘special sciences’ is usually taken to refer to the class of sciences that deal with one or another restricted class of entities, such as minds (psychology) or living things (biology). These sciences are distinguished from the one most general science (physics) that is supposed to deal with all entities whatsoever. In the metaphysics and philosophy of mind literature, reductionism is usually taken to be the view that all sciences are reducible to physics, or even that all entities are reducible to entities describable in the language of physics.

a. Versions of Reductionism

Reductionism is no longer understood as the view that makes use of the logical positivist’s sense of reduction as translation. One reason for this is probably that a comprehensive translation of all terms into the language of physics is standardly understood as a lost cause. Although one might allow that physical science contains many terms that are correlates of terms in the special sciences (to use Nagel’s example, ‘heat’ and ‘mean molecular motion’), it is rarely supposed that these correlates are synonymous. Nagel, as discussed above, already noted this point. Even in the case where one might find two terms that refer to the same phenomenon, the terms themselves may differ somewhat in meaning, the identity of their referents being established empirically.

When one claims that a special science is reducible to physics today, sometimes this is intended in the sense of the derivation model of reduction. The view of the reductionist is often that the laws of all of the special sciences are derivable from physics (with the help of bridge laws). This then requires the discovery of physical correlates of all terms that appear in the laws of the special sciences. ‘Cell’, ‘pain’, ‘money’: all of these must have their physical correlates, so that one may formulate bridge laws to facilitate the derivations (of biology, psychology, economics). Reductionists in metaphysics and philosophy of mind, following the points of Sklar and Kim discussed above, typically believe that these bridge laws must have the status of identities. Thus, the reduction of all special science theories to physics is thought to bring with it the reduction (qua identification) of all entities to entities describable in the language of physics.

There is also a large class of philosophers thought of as reductionists who do not think of their view as entailing theoretical reductions in any of the senses described above. Those identity-theorists like U.T. Place (1956) or J.J.C. Smart (1959) who believe that mental phenomena (in Place’s case: processes, in Smart’s: mental types) are identical to physical phenomena (processes or types) are often thought of as reductionists in virtue of accepting such identities, whatever they may think of reduction in the traditional sense of theory reduction. Though they do not speak of reduction in the sense of Nagel (indeed, their work predates Nagel’s seminal The Structure of Science), Place and Smart are explicit about denying the plausibility of reductions in the sense of translations; they deny that sentences involving psychological terms may in general be translated into sentences involving purely physical terms. As Smart puts it:

Let me first try to state more accurately the thesis that sensations are brain-processes. It is not the thesis that, for example “after-image” or “ache” means the same as “brain process of sort X”… It is that, in so far as “after-image” or “ache” is a report of a process, it is a report of a process that happens to be a brain process. It follows that the thesis does not claim that sensation statements can be translated into statements about brain processes. (1959, 144)

For Smart and Place, the truth of reductionism about the mind is something that one learns through observation. It is not something that one can arrive at simply by reflecting on the meanings of psychological terms. Although they reject reductionism in the translational sense and do not discuss theoretic reduction in the sense of either the Nagel or Kemeny/Oppenheim models, their account does involve an ontological reductionism: mental phenomena just are physical phenomena.

Not everyone, however, thinks that the mere obtaining of identities is sufficient for the success of reductionism. As Jaegwon Kim has argued, even if one had a complete set of identity claims linking terms in the special sciences with physical science terms, such that one could derive the special sciences from physical science, one would still not have truly reduced the special sciences to physical science (1998, 97-9). The problem is that reductions are supposed to be explanatory, and the completion of all of the derivations would not have shown why it is that the bridging identities obtain.

To see Kim’s worry, consider the reduction of thermodynamics to statistical mechanics described by Nagel. Assume that in order to derive thermodynamics from statistical mechanics, physicists utilized the following bridge law:

Heat = mean molecular motion

This then allowed them to derive the heat laws of thermodynamics from the laws of statistical mechanics governing the motion of molecules. Kim’s worry is that even if this Nagel reduction succeeds, one will still not understand how thermodynamics is grounded in statistical mechanics because the identity statement is not explained. As he puts it:

I don’t think it’s good philosophy to say, as some materialists used to say, “But why can’t we just say that they are one and the same? Give me good reasons why we shouldn’t say that!” I think that we must try to provide positive reasons for saying that things that appear to be distinct are in fact one and the same. (1998, 98)

What needs to happen, according to Kim (and many others in the literature, including Frank Jackson (1998) and David Chalmers (1996)), is that these identities must themselves be grounded in what is known as a functional reduction.

Functional reductions work in two stages. In the first stage, one takes the special science phenomenon that is supposed to be reduced and “primes” it for reduction. One does this by construing it relationally. For example, if one is trying to reduce a chemical phenomenon like boiling, one might construe it as the property a substance has when there are bubbles on its surface and a resulting vapor. In the second stage of a functional reduction, one seeks the property figuring in the base science that could ground the obtaining of this relational description. Once this is accomplished, one is able to identify the phenomenon in the special science with the phenomenon in the base science. Continuing with the same example, it might be found in physics (and obviously this is to oversimplify) that when the atoms in a substance reach a certain average momentum, and the pressure in the substance is less than the atmospheric pressure in the substance’s environment, they are able to escape the surface. One can then see how this would produce bubbles on the substance’s surface and a resulting vapor. Once this explanation has been given, one can identify x’s boiling as identical with x’s being such that x’s atoms have reached a certain momentum, and x’s internal pressure is less than the pressure of x’s external environment. And it will be clear why this identity obtains. This is because the latter is just the physical phenomenon that is required for x to boil, given how boiling was construed in the first stage of the reduction.

In sum, functional reductions allow one to see why it is the case that identities obtain. They can therefore be used to supplement an identity theory of the kind endorsed by Smart and Place, or to supplement a Nagelian reduction by explaining bridge laws with the status of identities. Many discussions of reductionism assume that the view requires functional reductions of this kind; Bickle has noted that this assumption is most common in discussions of reductionism by anti-reductionists.

The following section will discuss the main argument that has been thought to refute reductionism of this kind, as well as any kind based on the notion of reduction centrally involving identity statements: the argument from multiple realization. The focus will be on this particular argument because it provides the most general critique of reductionism, applying to many different sciences. That is, unlike other arguments against reductionism, the argument from multiple realization is thought to show that no special science (or special science phenomenon) can be reduced to physical science (or to a physical phenomenon). There are also many less general arguments that have been advanced to show that one particular kind of science cannot be reduced, for example, the arguments of Thomas Nagel (1979), Frank Jackson (1982), and David Chalmers (1996) against the physical reducibility of consciousness. These arguments will not be discussed in this entry.

b. The Argument from Multiple Realization

The multiple realization argument is historically associated with Hilary Putnam and Jerry Fodor (Putnam 1975; Fodor 1974). What Putnam and Fodor argued was that in general it would not be possible to find true identity statements of the kind required for reductions of the special sciences. For simplicity, the present discussion will focus on the case of reducing psychology to physical science. If this reduction is going to be successful, then one must find physical correlates for all psychological terms such that there are true identity statements linking each psychological term with a physical term. For example, for some physical property P, there must be a true identity statement of the form:

For all x (x’s being in pain = x’s instantiating physical property P),

Or more generally:

For all x (x’s instantiating special science property S = x’s instantiating physical property P)

As Putnam points out, it is a great challenge for the reductionist to find physical properties that will serve this purpose. He says:

Consider what the [reductionist] has to do to make good on his claims. He has to specify a physical-chemical state such that any organism (not just a mammal) is in pain if and only if (a) it possesses a brain of a suitable physical-chemical structure; and (b) its brain is in that physical-chemical state. This means that the physical-chemical state in question must be a possible state of a mammalian brain, a reptilian brain, a mollusc’s brain…, etc… Even if such a state can be found, it must be nomologically certain that it will also be a state of the brain of any extra-terrestrial life that may be found that will be capable of feeling pain… it is at least possible that parallel evolution, all over the universe might always lead to one and the same physical “correlate” of pain. But this is certainly an ambitious hypothesis. (Putnam 1975, 436)

The problem for the reductionist is that too many very physically different kinds of things satisfy the predicate ‘is in pain’ for one to have the hope of specifying a kind of physical property P that all and only the things that are in pain instantiate. This is not only a problem for the reductionist who requires that there be identities linking terms in the special sciences with terms in the language of physics. Putnam’s point works equally well against the obtaining of bridge laws with biconditional form, for example:

For all x (x is in pain if and only if x instantiates physical property P)

Putnam, and later Fodor, argued that this argument generalizes to show that one would not be able to find true identity statements linking special science predicates with predicates from physical science. The types of things satisfying a given special science predicate are just too physically diverse. The view Putnam and Fodor advocated, instead of reductionism, was (a nonreductive version of) functionalism. They claimed that special science predicates typically denote causal or functional properties. That is, what it is for something to fall within the extension of a particular special science predicate is for it to play some specific causal role. So, for example, to fall under the psychological predicate ‘pain’ is roughly to be in an internal state that is caused by tissue damage and tends to cause withdrawal behavior, moans, and so on. If this functionalism about ‘pain’ is true, then anything that instantiates this causal role will fall under the extension of the predicate ‘pain’. The metaphysical upshot of this is that pain is a functional property that has many different realizers. These may include states of humans, mollusks, and Martians: whatever has an internal state caused by tissue damage that tends to cause withdrawal behavior, moans, and so on. But there is no one physical property with which the property of being in pain may be identified.

c. Replies

Reductionists have tried several ways of responding to the argument from multiple realization. To begin, it must be noted that this argument only succeeds against a version of reductionism claiming that there are bridge laws with the status of identities or biconditionals linking terms in the special sciences with physical terms. As was noted above, many philosophers of science, including Nagel himself, do not believe that the reduction of a theory to physical science requires bridge laws of this status, so long as assumptions strong enough to facilitate derivations obtain. The arguments of Putnam and Fodor do nothing to undermine claims of the following form:

For all x (if x instantiates physical property P, then x instantiates special science property S),

where ‘P’ denotes some actual realization of a special science property in humans or some other creature. Thus, reduction of all special sciences to physical science may still be carried out in the sense of Nagel reduction.

Alternatively, the reductionist may point to the fact that there are derivation models of reduction that do away with the appeal to bridge laws altogether. For example, C.A. Hooker (1981) developed a derivation model of reduction that builds on the insights of Nagel and Schaffner. Like Schaffner (1967), Hooker argued that in actual cases of reduction what gets derived from the base theory is not the original target theory, but instead a corrected analog T* of the original theory. On Hooker’s model, however, this analog is formulated within the linguistic and conceptual framework of the base theory B. Thus, no bridge laws are required in order to derive T* from B. He then notes that once T* has been derived from B, one can claim that T has been reduced in virtue of an analog relation A that T* bears to T (1981, 49). It is of course a difficult matter to spell out what these analog relations must come to for there to have been a legitimate reduction of T to B in virtue of the derivation of T* from B. Still, the fact that it is T*, a theory in B’s own language, that is derived from B implies that bridge laws of any form (be they identity statements, biconditionals, or conditionals) are not required for reductions in Hooker’s sense. Therefore, if one holds a version of reductionism based on Hooker reduction (as in Bickle (1998), for example), one is immune to objections from multiple realization.

Still, it has been noted that many reductionists, for example Place and Smart, argue that there are identities linking the entities of the special sciences with those of physics. Indeed, for many reductionists, such identities are a central part of their views. Nevertheless, there are ways even for these reductionists to respond to the arguments of Putnam and Fodor. Jaegwon Kim (1998), for example, has made two suggestions.

One suggestion is for the reductionist to hold onto the claim that there are truths of the form:

For all x (x’s instantiating special science property S = x’s instantiating physical property P)

However, she may maintain that P is a disjunctive property. For example, if pain is realized in humans by C-fiber stimulation, in octopi by D-fiber stimulation, in Martians by E-fiber stimulation, and so on, then P will be:

the property of instantiating C-fiber stimulation in humans or D-fiber stimulation in octopi or E-fiber stimulation in Martians or .…

This approach is generally unpopular as reductionists (e.g. D.M. Armstrong (1997)) and anti-reductionists (e.g. Fodor (1997)) alike are skeptical about the existence of such disjunctive properties.

A second approach suggested by Kim (1998, 93-94) has been more popular. This approach is also associated with David Lewis, following his suggestions in “Mad Pain and Martian Pain” (Lewis 1980). The response concedes to Putnam and Fodor that there is no property of pain simpliciter that can be identified with a property from physical science. However, there are true “local” identity statements that may be found. Kim suggests that there may be a physical property discovered that is identical to pain-in-humans, another that is identical to pain-in-octopi, and so on. What motivates the multiple realization argument is the compelling point that there is little physically similar among the realizers of pain across species. However, within a species, there are sufficient physical similarities to ground a species-specific (this is what Kim means by ‘local’) reduction of pain. Or so the reductionist may argue. Kim himself does not endorse reductionism about pain, even if he thinks most other special science properties can be reduced in this way.

3. Reduction of Entities: Identification vs. Elimination

Up to now, reduction has been treated as involving unification of theories or identity of phenomena (properties, types, or processes). In the case of theoretical reductions, according to the Nagelian models, it has been assumed that when a reduction is effected, previously disunified theories become unified and in the case of entities, when a reduction is effected, entities that were previously seen as distinct are shown to be identical.

However, this is not how reductions always proceed; indeed, elimination seems to be implied by the term ‘reduction’ itself. Shouldn’t reduction involve a decrease in the number of theories or entities in the world? Doesn’t the reduction of psychology to physics analytically entail the elimination of psychology? Doesn’t the reduction of pain to a physical phenomenon analytically entail its elimination? Several authors have emphasized the eliminative aspects of many reductions in practice (especially Schaffner (1967), Churchland (1981), Churchland (1986), and Bickle (2003)).

Return to the derivation model of theoretical reduction. It was noted earlier that to effect reductions in the derivation sense, it is often necessary to create a new, modified version of the target theory in order to get something actually derivable from the base theory. In the Schaffner model, this proceeds by formulating a new version of the target theory in its original language, supplemented by bridge laws. In the case of the Hooker model, this proceeds by formulating a new version of the target theory in the language of the base theory, thus avoiding the need for bridge laws. Either way, the result is that it is not entirely clear whether what has been reduced is a legitimate version of the original target theory T (in other words, whether a retentive reduction has been achieved), or a different theory altogether (in which case what has been achieved is instead a replacement reduction) (Hooker 1981, Bickle 1998). In the move to unification, in accomplishing the reduction, has one been able to retain the original target theory? Or has one instead been forced to replace it with a different theory? There is surely a continuous spectrum of possible reductions from those of the more retentive kind to those that are clearly replacements (see Bickle 1998 for a diagram charting this spectrum). At a certain point, the theory that actually gets derived may be so different from the original target theory that one may be forced to say that the reduction of the original has instead proceeded by something more like the Kemeny/Oppenheim model: the original theory is replaced with another able to accommodate the original’s phenomena. In the history of science, there have been reductions of many different kinds. The standard example of the reduction of chemistry to atomic physics was a retentive reduction: most if not all of the claims of chemistry before the reduction are still taken to be true, even if some had to be modified for a derivation of the theory from atomic physics to go through. On the other hand, the reduction of phlogiston theory to modern chemistry was a replacement reduction. Enough of the claims of phlogiston theory had to be changed that one can justifiably say that the theory was replaced altogether, not retained.

The hope in the philosophy of mind is that whatever psychological theory actually gets reduced to physics, it will be sufficiently similar to the original psychological theory that the psychophysical reduction is retentive. However, there are some reductionists, in particular Churchland (1986) and Churchland (1981), who think this hope is unlikely to be fulfilled.

The spectrum from retentive to eliminative theoretical reductions parallels another spectrum of kinds of reductions of phenomena. In some cases where the reduction of a phenomenon is carried out, one is justified in characterizing this as an identification. In other cases, one wants to say that the phenomenon has instead been eliminated as a result of the reduction. Plausibly, whether a reduction should be seen as eliminative has to do with whether the theory that mentioned the entity has been reduced in a more or less retentive manner. Whether a reduction of all mental phenomena can be achieved that most philosophers will view as retentive is still very much up in the air. The reductionist’s hope, however, is that every phenomenon will either be identified with an entity of physical science or eliminated altogether in favor of the entities of a superior theory.

4. References and Further Reading

  • Armstrong, D.M. 1997. A World of States of Affairs. Cambridge: Cambridge University Press.
  • Bickle, John. 1998. Psychoneural Reduction: The New Wave. Cambridge, Massachusetts: MIT Press.
  • Bickle, John. 2006. “Reducing mind to molecular pathways: explicating the reductionism implicit in current cellular and molecular neuroscience.” Synthese, 151, 411-434.
  • Carnap, Rudolf. 1928/1967. The Logical Structure of the World and Pseudoproblems in Philosophy. Berkeley, California: University of California Press.
  • Carnap, Rudolf. 1934. The Unity of Science. London: Kegan Paul, Trench, Trubner, and Co.
  • Carnap, Rudolf. 1963. Autobiography. The Philosophy of Rudolf Carnap, P.A. Schilpp, ed. LaSalle, Illinois: Open Court.
  • Cartwright, Nancy, Jordi Cat, Lola Fleck, and Thomas Uebel. 1995. Otto Neurath: Philosophy between Science and Politics. Cambridge: Cambridge University Press.
  • Cartwright, Nancy, Jordi Cat, and Hasok Chang. 1996. “Otto Neurath: Politics and the Unity of Science.” The Disunity of Science: Boundaries, Contexts, and Power, P. Galison and D. Stump, eds. Stanford, California: Stanford University Press.
  • Chalmers, David. 1996. The Conscious Mind: In Search of a Fundamental Theory. Oxford: Oxford University Press.
  • Churchland, Patricia. 1986. Neurophilosophy. Cambridge, Massachusetts: MIT Press.
  • Churchland, Paul. 1981. “Eliminative Materialism and the Propositional Attitudes.” The Journal of Philosophy, 78, 67-90.
  • Dennett, Daniel C. 1991. Consciousness Explained. London: Little, Brown and Co.
  • Fodor, Jerry. 1974. “Special Sciences, or the Disunity of Science as a Working Hypothesis.” Synthese, 28, 97-115.
  • Fodor, Jerry. 1997. “Special Sciences: Still Autonomous After All These Years.” Philosophical Perspectives, 11, 149-163.
  • Hempel, Carl. 1966. Philosophy of Natural Science. Englewood Cliffs, New Jersey: Prentice Hall.
  • Hooker, C.A. 1981. “Towards a General Theory of Reduction. Part I: Historical and Scientific Setting. Part II: Identity in Reduction. Part III: Cross-Categorial Reduction.” Dialogue, 20, 38-59, 201-236, 496-529.
  • Jackson, Frank. 1982. “Epiphenomenal Qualia.” Philosophical Quarterly, 32, 127-36.
  • Jackson, Frank. 1998. From Metaphysics to Ethics: A Defence of Conceptual Analysis. Oxford: Oxford University Press.
  • Kemeny, John and Paul Oppenheim. 1956. “On Reduction.” Philosophical Studies, 7, 6-19.
  • Kim, Jaegwon. 1998. Mind in a Physical World. Cambridge, Massachusetts: MIT Press.
  • Ladyman, James and Don Ross (with John Collier and David Spurrett). 2007. Every Thing Must Go: Metaphysics Naturalized. Oxford: Oxford University Press.
  • Lewis, David. 1980. “Mad Pain and Martian Pain.” Readings in the Philosophy of Psychology, Vol. 1. N. Block, ed. Cambridge, Massachusetts: Harvard University Press, 216-222.
  • Nagel, Ernest. 1961. The Structure of Science: Problems in the Logic of Scientific Explanation. New York: Harcourt, Brace, and World.
  • Nagel, Thomas. 1979. “What is it Like to be a Bat?” Mortal Questions. Cambridge: Cambridge University Press.
  • Neurath, Otto. 1931/2/1959. “Sociology and Physicalism.” Logical Positivism, A.J. Ayer, ed. New York: The Free Press. Originally published in Erkenntnis, 2.
  • Neurath, Otto. 1983. Philosophical Papers, 1913-1946. Dordrecht: Reidel.
  • Oppenheim, Paul and Hilary Putnam. 1958. “Unity of Science as a Working Hypothesis.” Minnesota Studies in the Philosophy of Science, 2, 3-36.
  • Place, U.T. 1956. “Is Consciousness a Brain Process?” British Journal of Psychology, 47, 44-50.
  • Putnam, Hilary. 1975. “The Nature of Mental States.” Mind, Language and Reality: Philosophical Papers, Vol. 2. Cambridge: Cambridge University Press. Originally published as: “Psychological Predicates.” Art, Mind, and Religion, W. Capitan and D. Merrill, eds. Pittsburgh: University of Pittsburgh Press, 1967.
  • Schaffner, Kenneth. 1967. “Approaches to Reduction.” Philosophy of Science, 34, 137-147.
  • Sklar, Lawrence. 1967. “Types of Inter-theoretic Reduction.” The British Journal for the Philosophy of Science, 18, 109-124.
  • Smart, J.J.C. 1959. “Sensations and Brain Processes.” The Philosophical Review, 68, 141-156.

Author Information

Alyssa Ney
University of Rochester
U. S. A.

Epistemology of Testimony

We get a great number of our beliefs from what others tell us. The epistemology of testimony concerns how we should evaluate these beliefs. Here are the main questions. When are the beliefs justified, and why? When do they amount to knowledge, and why?

When someone tells us p, where p is some statement, and we accept it, then we are forming a testimonially-based belief that p. Testimony in this sense need not be formal testimony in a courtroom; it happens whenever one person tells something to someone else. What conditions should be placed on the recipient of testimonially-based beliefs? Must the recipient of testimony have beliefs about the reliability of the testifier, or inductive support for such a belief? Or, on the other hand, is it enough if the testifier is in fact reliable, and a recipient may satisfy his epistemic duties without having a belief about that reliability? What external environmental conditions should be placed on the testifier? For the recipient to know something, must the testifier know it, too?

For our basic case of testimonially-based belief, let us say that person T, our testifier, says p to person S, our epistemic subject, and S believes that p. This article will first survey arguments related to S-side issues, then those related to T-side issues.

Table of Contents

  1. Some Terminology, Abbreviations, and Caveats
  2. Recipient (S)-Side Questions
    1. Characterizing the Debate
    2. Arguments in Favor of Demands on Testimonially-Based Beliefs
      1. T’s Ability to Deceive
      2. Individual Counterexamples and Intuitions about Irresponsibility and Gullibility
      3. S’s Ability Not to Trust T
      4. Operational Dependence on Other Sources
      5. Defeasibility of Testimonially-Based Beliefs by Other Sources
      6. From a No-Defeater Condition to Positive-Reason-to-Believe Condition
      7. S’s Higher-Order Beliefs About T
    3. Arguments Against Demands on Testimonially-Based Beliefs
      1. Insufficient Inductive Base
      2. Analogies to Perception
      3. Analogies to Memory
      4. Skepticism about Over-Intellectualization and Young Children
      5. The Assurance View as a Basis for Lessened Demands on S
    4. A Priori Reasons in Support of Testimonially-Based Beliefs
      1. Coady’s Davidsonian Argument from the Comprehensibility of Testimony
      2. Burge’s Argument from Intelligible Presentation
      3. Graham’s A Priori Necessary Conceptual Intuitions
  3. Testifier (T)-Side Questions: Testimony and the Preservation of Knowledge
    1. Background
    2. The Cases
      1. Untransmitted Defeaters
      2. Zombie Testifiers
      3. High-Stakes T, Low-Stakes S
      4. False Testimony
      5. Reconceptualization from T to S
      6. Unreliable Testimony
  4. Some Brief Notes on Other Issues
    1. Connections between S-side and T-side issues
    2. The Nature of Testimony
  5. References and Further Reading

1. Some Terminology, Abbreviations, and Caveats

This article considers the epistemology of testimonially-based belief. Let’s unpack that phrase. Discussing the basis of different beliefs presupposes that one important way we should categorize beliefs is by where they came from. The basis of a belief is its source or root. When we look across the room and see a chair, we form a perceptually-based belief that there is a chair nearby. When we believe that p and believe that p entails q, and then conclude that q, we form a deductively-based belief that q. When we observe that gravity has operated in the past and we infer that it will continue to operate in the future, we form an inductively-based belief about gravity. When we remember what we ate this morning, we form a memorially-based belief about our breakfast. And when someone tells us that p, and we accept it, we form a testimonially-based belief that p. Testimony in this sense need not be formal testimony in a courtroom, but happens whenever one person tells something to someone else.

It will be helpful to use the same terminology throughout this article. For our basic case of testimonially-based belief, let us say that T, our testifier, says p to S, our epistemic subject, and S believes that p. Different permutations will be considered, but this will be the terminology for the basic case.

Actual beliefs might not, of course, have only one basis. A belief might be partly testimonially-based and partly perceptually-based, just as it might be partly inductively-based and partly memorially-based. However, an understanding of pure cases, which we will pursue in this article, should illuminate hybrid instances.

Now, the epistemology of a belief is a particular sort of evaluation. Epistemologists assign honors like “knowledge” or “justification” to beliefs based on whether those beliefs are up to snuff epistemically. The epistemology of testimonially-based belief, then, concerns the epistemic status of S’s belief that p. Is it justified? Is it rational? Is it warranted? Is it sufficiently supported by evidence? Is S entitled to believe it? Does S know that p?

One way to speak of the epistemology of testimonially-based belief is to speak directly of the epistemic status at issue: we can talk about testimonially-based knowledge, testimonially-based justification, or testimonial evidence.

Many of the contemporary disputes in the epistemology of testimony occur in two broad fields. One dispute, or set of disputes, concerns the extent of the internal conditions placed on testimonially-based belief related to the recipient, S. (To phrase the debate in terms of internal conditions is not to beg the question against epistemic externalism; the externalist is characterized precisely by declining to place such demands of internal accessibility. See, for instance, the title of Bergmann 2006b: Justification Without Awareness: A Defense of Epistemic Externalism.) When is a testimonially-based belief justified, or rational, or reasonable, or permissible, or within our epistemic entitlements? Is testimonially-based justification really a special case of inferentially-based justification, or is it (instead) analogous to perceptually- or memorially-based justification? What sorts of epistemic demands do we properly place on those who believe what others tell them? Coady 1973 uses the terms “reductionism” and “anti-reductionism” to describe approaches to these issues. Speaking broadly, reductionism views testimony as akin to inference and places a relatively heavy burden on the recipient of testimony, while anti-reductionism views testimony as akin to perception or memory and places a relatively light burden on the recipient of testimony.

A second area involves the external conditions on the testifier, T, in order for S to know that p. Must T know that p herself? Must T’s testimony even be true? Must T reliably testify that p?

This article will first survey arguments related to S-side issues, then those related to T-side issues. These two areas do not by any means exhaust the topics of great interest to epistemology, but they are a useful place to begin.

As noted in the final section of this article, there are some important disputes about exactly what counts as “testimony.” For the most part, this article will make do with a rough “T told S that p” formulation. However, especially in T-side issues, a key issue is frequently whether a proposed counterexample counts as testimonially-based belief. This article can only suggest some of the relevant considerations to that issue, rather than canvassing it in detail.

This article focuses chiefly on the epistemology of testimony in general, rather than the epistemology of human testimony. Because there is considerable controversy about what is required, as a conceptual matter, for testimonially-based knowledge or justification or rationality, it seems wisest to get as clear a view as possible of the nature of testimonial justification and testimonial knowledge, as such, before proceeding to more obviously practical considerations related to the evaluation of particular actual testimonially-based beliefs. To the extent that we only consider the epistemology of testimony in general, our conclusions may be relatively thin and unsatisfying. However, controversy regarding the basic nature of epistemic phenomena across the universe of possible testimonially-based beliefs means that this sort of preliminary brush-clearing is important.

2. Recipient (S)-Side Questions

a. Characterizing the Debate

The most prominent debate in the epistemology of testimony is between “reductionism” and “non-reductionism,” terms due to Coady 1973. The earliest clear statements of these positions appear in David Hume and Thomas Reid. Hume said, “[T]here is no species of reasoning more common, more useful, and more necessary to human life, than that which is derived from the testimony of men, and the reports of eye-witnesses and spectators. … [O]ur assurance in any argument of this kind is derived from no other principle than our observation of the veracity of human testimony, and of the usual conformity of facts to the reports of witnesses.” (Hume 1748, section X, at 74.) Hume’s picture is that we properly form beliefs based on testimony only because we have seen other confirmed instances. Testimonially-based justification is therefore reducible to a combination of perceptually-, memorially-, and inferentially-based justification. (In theory, one might also include a priori insight among the sources to which testimonial justification is reduced, though Hume does not do so.)

Reid, however, argued that children properly trust others even when they lack any past inductive basis in their experience: “[I]f credulity were the effect of reasoning and experience, it must grow up and gather strength, in the same proportion as reason and experience do. But, if it is the gift of Nature, it will be strongest in childhood, and limited and restrained by experience; and the most superficial view of human nature shews, that the last is really the case, and not the first. … [N]ature intends that our belief should be guided by the authority and reason of others before it can be guided by our own reason.” (Reid 1764, chapter 6, section 24, at 96.) Reid suggests that we have an innate faculty, unconfirmed by personally-observed earlier instances, which properly causes us to trust those who testify. Testimonially-based justification flows from the reliability of this faculty, and so it is not reducible to perceptually- and inferentially-based justification.

The reducibility of testimonially-based justification is thus one way to characterize the debate between Hume and Reid and their modern successors over the internal conditions on testimonially-based beliefs. A second way to characterize such disputes is to ask to what extent testimonially-based beliefs are implicitly inferential. A Humean approach holds that we infer the reliability of a present bit of testimony from the reliability of earlier instances, while a Reidian approach holds that testimonially-based beliefs are properly non-inferential, or direct. The inferentialist sees testimonially-based belief as the acceptance (or the hypothetical acceptance) of an argument like this:

  1. T is telling me that p;
  2. T, or people like T, have generally been reliable in the past telling me, or other people, things like p; so
  3. T is probably reliable on this occasion; so
  4. p.

The non-inferentialist sees testimony as less like an invitation to an argument and more like the input to a machine. T tells S that p, and, seizing upon T’s act of communication, S’s testimony-processing faculty causes S to believe that p.

(Audi 1997 helpfully distinguishes between hypothetical and actual inferences. He holds that testimonially-based beliefs are formed directly, but are nonetheless justified on the basis of other beliefs; such beliefs could be used to support the testimonially-based belief, but need not be part of its actual genesis.)

Lackey 2006a gives relatively full recent lists of the adversaries in the S-side literature in terms of reductionism (at 183 n.3) versus nonreductionism (at 186 n.19), while Graham 2006:93 does the same in terms of inferential versus direct views. These lists appear below, just before the bibliography.

A third way to characterize disputes over testimonially-based beliefs is to ask to what extent testimonially-based justification is analogous to perceptually-based justification. The Humean-reductionist tradition sees strong disanalogies, while the Reidian-non-reductionist tradition sees a strong analogy between the sources. See, for instance, Lackey 2005:163 (“non-reductionists maintain that testimony is just as basic a source of justification (knowledge, warrant, entitlement, and so forth) as sense-perception, memory, inference, and the like”); Graham 2004:n.4 (“The central claim the Anti-Reductionist makes is that the epistemologies of perception, memory, and testimony should all look more or less alike.”).

None of these formulations captures contemporary debates perfectly well. Few contemporary philosophers will endorse Hume’s reductionist or inferentialist approach to testimonially-based belief in anything close to full form. Some philosophers would demand that S have positive reasons to believe in T’s reliability, or place other demands on S, but almost all of them stop short of insisting that S have a sufficiently large inductive base to justify an inference that p from other beliefs, or to reduce testimonially-based justification to perceptually-, memorially-, and inferentially-based justification. Regarding the analogy between the epistemology of perceptually- and testimonially-based beliefs, even Reid, the prototype non-reductionist, saw significant disanalogies between beliefs based on perception and testimony. See Reid 1785 (article 2, chapter 20, at 203): “There is no doubt an analogy between the evidence of the senses and the evidence of testimony. … But there is a real difference between the two as well as a similarity. When we believe something on the basis of someone’s testimony, we rely on that person’s authority. But we have no such authority for believing our senses.”

Rather than characterizing the internal dispute solely in terms of reductionism, or inferentialism, or a perceptual-testimonial analogy, this article will simply consider arguments in favor of a relatively demanding approach to testimony versus arguments in favor of a relatively less demanding approach. Details about exactly which demands different authors would make on testimonially-based belief are best explained individually. Rather than applying labels like “Reductionist” or “Inferentialist,” this article simply uses “Liberal” and “Conservative.” Liberals are less demanding on testimonially-based justification and allow testimonially-based beliefs to count as justified, or as knowledge, more liberally; conservatives are more demanding and dispense testimonially-based epistemic honors more conservatively. In considering each demand, this article will also ask whether the demand might also reasonably be placed on perceptually-based beliefs as well.

The usage of “liberal” and “conservative” here has a kinship with the technical use of these terms in Graham 2006:95, but it is not the same. Graham uses the labels “reactionary,” “conservative,” “moderate,” and “liberal” to refer to those who accept or reject specific basic principles of epistemic justification. Graham’s “reactionary” accepts only principles regarding a priori insight, internal experiences, and deduction, rejecting principles related to memory, enumerative induction, inference to the best explanation, perception, and testimony. Graham’s “conservative” rejects only principles regarding perception and testimony; his “moderate” rejects only the principle regarding testimony, while his “liberal”—Graham’s own view—accepts the principle for testimony as well. Graham’s use of these principles in comparing testimony to perception and memory is discussed below.

Some philosophers place demands on testimonially-based beliefs regarding some epistemic honors, but not others. For instance, Audi 1997 is relatively demanding regarding testimonially-based justification, but because he does not think justification is required for knowledge, he is relatively lenient regarding testimonially-based knowledge. Burge 1993:458-59 is relatively lenient regarding what he calls testimonial “entitlement,” but reserves the label “justification” for instances where S is aware of an entitlement. Graham 2006:104ff. is relatively lenient regarding testimonially-based “pro tanto” justification—that is, he allows testimonially-based beliefs to have some justification relatively easily—but more demanding when considering whether S would have enough pro-tanto justification to have a justified belief. Plantinga 1993:82 similarly distinguishes between S having some testimonially-based evidence and having enough for S to have knowledge: “Testimonial evidence is indeed evidence; and if I get enough and strong enough testimonial evidence for a given fact … the belief in question may have enough warrant to constitute knowledge.”

Finally for preliminaries, we should distinguish arguments about what demands to place on testimonially-based beliefs from arguments about how those demands might be satisfied. Coady, Burge, and Graham suggest in different ways that we have a priori reason to accept testimonially-based beliefs, but they are all liberal about whether to place a general demand that testimonially-based beliefs be based on reasons such as the ones they offer. This article very briefly surveys their three approaches in a separate section.

b. Arguments in Favor of Demands on Testimonially-Based Beliefs

i. T’s Ability to Deceive

Faulkner 2000 argues that the fact that testimony comes from a person, rather than an inanimate object, is a reason to be more demanding on testimonially-based beliefs than on perceptually-based beliefs. Lackey 2006a:176 and 188 n.44 also endorses this argument. People like T can lie, but the matter in our perceptual environment cannot. See also Audi 2006:40: “[T] must in some sense, though not necessarily by conscious choice, select what to attend to, and in doing so can also lie or, in a certain way, mislead … For the basic sources, there is no comparable analogue of such voluntary representation of information.”

One way to make the point more precise is to claim that because free actions are particularly indeterministic—that is, because determinism is false, and so the past plus laws is not enough to guarantee future free actions—the environment for a testimonially-based belief cannot be regular and law-governed in the way that the environment for a perceptually-based belief can be. Graham 2004 considers such an argument in detail. He argues, however, that the presence of human freedom in testimonial cases is not a significant reason to prefer a conservative approach. He argues that if a libertarian approach to human freedom undermines the predictability of human actions, then it would also undermine a conservative approach to testimony; if T’s actions were unpredictable, then S could never have a proper basis on which to believe that T is likely to be honest, for instance. However, Graham argues that if libertarianism does not undermine predictability—either because it is false, or because counterfactuals of freedom are nonetheless somehow true—then testimonial liberalism is not threatened by human freedom, because the environments for testimonially-based beliefs can in fact be as predictable as the environments for perceptually-based beliefs.

Green 2006:82ff. argues that freedom is not distinctive of testimonially-based beliefs. Faulkner and Lackey both refer to this factor as a reason to distinguish perceptually-based beliefs from testimonially-based beliefs. However, perceptually-based beliefs can also suffer from the influence of deception. Fake objects, for instance, can be the result of deception, and perceptually-based beliefs about fake objects can obviously go awry because of the influence of agency on a perceptual environment. If the possibility of deception is a good reason to think that S requires positive reasons to believe T, then there seems to be equally strong reason to require that S have positive reasons to believe that the objects of her perceptually-based beliefs are genuine. The conservative might respond that deception may only sometimes be at stake in a perceptually-based belief, but is always a possibility for testimonially-based ones. However, this seems clearly untrue as a conceptual matter; it is at least possible for T to be a reliable robot lacking freedom. And even within common human experience, there are cases where people lack the time to deliberate about deception; free human action is not always at stake in testimonially-based belief.

ii. Individual Counterexamples and Intuitions about Irresponsibility and Gullibility

While she criticizes reductionism, Lackey 2006a argues that S does need positive reasons to believe T’s testimony. She relies on an example in which T is an extraterrestrial alien, dropping what appears to S to be a diary written in English, describing events on T’s home planet. Because, Lackey thinks, S has no reason to believe that the diary really is English, is not ironic, and so on, S’s belief is unjustified. “[H]earers need positive reasons in order to acquire testimonial justification, thereby avoiding the charge of … gullibility and intellectual irresponsibility.” Lackey 2006a:179; compare the title of Fricker 1994, “Against Gullibility.”

Testimonial liberals might respond to Lackey’s counterexample by simply reporting different intuitions. S is entitled to believe even reports from aliens that are apparently in English, and may assume without evidence (and in the absence of counter-evidence) that they are sincere and so on. Intuitions about the vice of gullibility may differ: liberals might say that it is in fact a vice to be too skeptical of others’ reports when there is no positive reason to doubt them.

Green 2006:67ff. argues that a perceptual analogue to the alien case can be constructed. S is suddenly transported to an unfamiliar perceptual environment and seems to see certain objects outside what looks like a window. But S may have no reason to think that the window is not, for instance, a television screen showing a greatly-magnified image of a scene far away, rather than a window opening onto nearby ordinary-sized objects. If S’s perceptually-based beliefs in that scenario do not require positive reasons to believe that his perceptual environment and faculties are functioning normally, then it is not clear why S needs such reasons in the testimonial case.

In arguing against gullibility, Fricker 1994 argues in favor of S’s duty to monitor T for signs of untrustworthiness, suggesting that neglecting such a duty makes S gullible. Those who advocate S's presumptive right to trust T, she argues, must dispense with any duty in S to monitor T for signs of untrustworthiness. Goldberg and Henderson 2005 argue, however, that the testimonial non-reductionist can also countenance a requirement that S be sensitive to signs of T’s untrustworthiness; Fricker 2006c responds. Particularly after Fricker’s reply, it is not immediately obvious that the dispute between Goldberg and Henderson and Fricker is over anything epistemically substantive; at first glance the dispute is merely over whether the label “anti-reductionism” would properly apply to a view that imposes on S a robust duty to monitor T. However, the substantive issue of how best to characterize and understand the epistemic significance of sensitivity to defeaters is of relevance even if it does not push toward either testimonial liberalism or conservatism.

iii. S’s Ability Not to Trust T

Fricker 2004:119 suggests that S has an unusual amount of freedom related to the formation of testimonially-based beliefs. The action of trusting a testifier is one which is taken in a self-aware way, unlike the formation of a perceptually-based belief. Audi 2006:40 makes a similar suggestion: “[S] commonly can withhold belief, if not at will then indirectly, by taking on a highly cautionary frame of mind.”

Green 2006:64 argues that we have similar freedom to reject even perceptually-based beliefs. We can indulge skeptical scenarios, like being a brain in a vat, without much difficulty. Further, there might be beings who accept testimony as readily as we accept the deliverances of our senses; there does not seem to be anything inherent about testimony that makes us freer to reject it.

iv. Operational Dependence on Other Sources

Strawson 1994:24 suggests that testimony as a source of beliefs requires other sources, such as perception: “[T]he employment of perception and memory is a necessary condition of the acquisition and retention of any knowledge (or belief) which is communicated linguistically…” Audi 2006:31 notes, “In order to receive your testimony about the time, I must hear you or otherwise perceive—in some perhaps very broad sense of ‘perceive’—what you say… [T]estimony is … operationally dependent on perception.” Audi 2002:80 says, “[A]part from perceptual justification for believing something to the effect that you attested to p, I cannot acquire justification for believing it on the basis of your testimony.”

For human beings, S’s sensations that accompany her reception of T’s testimony will also supply ground for perceptually-based beliefs. However, it seems possible to imagine beings who go directly from sensations to the formation of testimonially-based beliefs, lacking even the ability to form perceptually-based beliefs on the basis of those sensations. They would have the ability to receive testimony, but not necessarily the ability to form related perceptually-based beliefs. They might reason inductively about these testimonially-based beliefs through forming higher-order beliefs about the existence of the sensations.

Burge 1993:460 offers a related response. He argues that an a priori entitlement like the belief in a mathematical proof might be dependent on sense perception in the sense that, for instance, I must see the writing on a page in order to understand the proof. However, he argues that such a role for perception does not contribute to the “rational or normative force behind [such] beliefs.” Likewise, perceptually-based beliefs might allow human beings to obtain testimonially-based beliefs without contributing to the justification or other epistemic status of such beliefs. If that is correct, then the operational dependence that Strawson and Audi highlight is not of epistemic consequence.

v. Defeasibility of Testimonially-Based Beliefs by Other Sources

Plantinga 1993 and Audi 2006 suggest that testimony differs from sources like perception in the way in which testimonially-based beliefs can be defeated by other sources, or the way in which other sources of evidence can trump testimonially-based evidence. Plantinga says (at 87), “[I]n many situations, while testimony does indeed provide warrant, there is a cognitively superior way. I learn by way of testimony that first-order logic is complete…. I do even better, however, if I come to see these truths for myself…” Audi says (at 39), “[W]e cannot test the reliability of one of these basic sources [that is, for Audi, a source like perception or memory, but not testimony] or even confirm an instance of it without relying on that very source. … With testimony, one can, in principle, check reliability using any of the standard basic sources.”

One response to Plantinga and Audi is to point out instances in which perceptually- or memorially-based beliefs could be checked, or trumped, by testimonially-based beliefs. For instance, S might see a strange phenomenon, strange enough that S asks others nearby if they are seeing what S thinks he’s seeing. S might be worried about his perceptual or memorial faculties, and so seek testimony to confirm them. Graham 2006:102 makes a similar point. After listing several ways in which sources besides testimony can be defeated, he notes, “That a source is a source of defeaters for beliefs from another source, or even from itself, does not show that the other source depends for justification on inferential support from another source, or even itself. … The fact that my perception defeats your testimony does not show that testimony is inferential and not direct. Indeed, the fact that testimony-based beliefs sometimes defeat perceptual beliefs does not show that testimony is prior to perception.”

vi. From a No-Defeater Condition to Positive-Reason-to-Believe Condition

Most testimonial liberals include a defeater condition on testimonially-based knowledge or justification. S’s entitlement to believe T is defeasible if other contrary information about p, or about T, is available to S. A conservative could argue, in line with the well-known approach of BonJour, that including such a requirement, but not a requirement of positive reasons to believe in T’s reliability, would be inconsistent, or an “untenable half-way house.” BonJour 1980 and 2003 consider an S who is informed by a reliable clairvoyant faculty that p, but who also has either (a) strong evidence that ~p, or (b) strong evidence that his clairvoyant power is unreliable, or (c) no evidence to believe that the faculty is reliable. While a defeater condition could handle cases (a) or (b), BonJour argues that those who say that knowledge or justification is defeated in these cases should also say that it is defeated in case (c). Replacing the clairvoyant faculty with T, we can construct an exactly parallel argument: those testimonial liberals who admit that S lacks justification or knowledge where S has evidence that ~p, or evidence that T is unreliable, should also concede that S lacks knowledge or justification where S has no evidence that T is reliable. (Compare Lackey 2006a:168 and 186 n.21, noting that the way in which accounts of testimony typically add a defeater condition is the same as the way they add such a condition in response to BonJour’s counterexamples.)

The testimonial liberal can resist this argument, however, in the same way that BonJour’s opponents resist his claims in general, by reporting contrary intuitions on his examples. Green 2007 offers one attempt to defend the tenability of an approach to either knowledge or justification that imposes a no-defeater requirement, but not a positive-reasons-to-believe-in-reliability condition, based on the way that the law handles fraud cases. The law holds that plaintiffs who sue for fraud lack “justified reliance” if they have defeaters for their fraudulently-induced belief, but not if they merely lack a reason to believe that the defendant is reliable. (Compare Bergmann 2006a:691: “One perfectly sensible externalist reply is to say that although the no-defeater requirement seems intuitively obvious, the awareness requirement does not.”)

vii. S’s Higher-Order Beliefs About T

When T tells S that p, one might demand that S have (on pain of “ignorant” or “unjustified” status) other beliefs concerning T or T’s trustworthiness. The existence or epistemic quality of these higher-order beliefs would matter regarding the evaluation of S’s underlying belief that p. Fricker 2006b:600 suggests that in forming testimonially-based beliefs by trusting T, S typically has a higher-order belief about T and his trustworthiness: “Once a hearer forms belief that [p] on a teller T’s say-so, she is consequently committed to the proposition that T knows that [p]. But her belief about T which constitutes this trust, antecedent to her utterance, is something like this: T is such that not easily would she assert that [p], vouch for the truth of [p], unless she knew that [p].” Weiner 2003 (chapter 3 at 5) likewise suggests that testimonially-based beliefs, unlike perceptually-based ones, are typically attended by beliefs about T: “When we form beliefs through perception, we may do so automatically, without any particular belief about how our perceptual system works. When we form beliefs through testimony, at some level we are aware that we are believing what a person says, and that this person is presenting her testimony as her own belief.”

Green 2006:87ff. argues, however, that it is not clear that testimony is really different from perception in this respect. Many recipients of testimony have a vague belief about T, but for many others this belief is at best implicit, and for others it is hard to say that even an implicit belief arises. Likewise for perceptually-based belief: many perceivers form beliefs that they are receiving information from their perceptual environments and their perceptual faculties; for others this belief is either vague, or implicit, or not really there at all. There does not seem to be any necessary inhibition of higher-order beliefs from the very nature of perception, nor any necessary production of higher-order beliefs from the very nature of testimony.

c. Arguments Against Demands on Testimonially-Based Beliefs

i. Insufficient Inductive Base

The most common objection to putting greater demands on testimonially-based beliefs is that these heightened demands simply cannot be satisfied in cases that, intuitively, do amount to knowledge or justified belief. Plantinga 1993:79 puts the point this way:

Reid is surely right in thinking that the beliefs we form by way of credulity or testimony are typically held in the basic way, not by way of inductive or abductive evidence from other things I believe. I am five years old; my father tells me that Australia is a large country and occupies an entire continent all by itself. I don’t say to myself, “My father says thus and so; most of the time when I have checked what he says has turned out to be true; so probably this is; so probably Australia is a very large country that occupies an entire continent by itself.” I could reason that way and in certain specialized circumstances we do reason that way. But typically we don’t. Typically we just believe what we are told, and believe it in the basic way. … I say I could reason in the inductive way to what testimony testifies to; but of course I could not have reasoned thus in coming to the first beliefs I held on the basis of testimony.

Relatedly, Lackey 2006a argues that a general inductive basis for belief in “testimony” would fail because the category of testimonially-based beliefs is too heterogeneous to support the relevant induction. The inference from particular instances of confirmed testimony to new cases is only as strong as the basis for believing that new instances will be similar to old ones. But those who testify about, say, events in Greece 2500 years ago will be very different from those who testify about middle-sized dry goods in the next room.

A kindred point that liberals make in support of the insufficient-inductive-base argument is to point out Hume’s mistaken explanation of how our testimonially-based beliefs are supported inductively. For instance, Coady 1992:79-82 documents several places where Hume, in describing the inductive base for a belief in the reliability of testimony, actually uses evidence drawn from other people. As Van Cleve 2006:67 summarizes the argument, “the vast majority (or perhaps even the totality) of what passes for corroboration of testimony itself relies on other testimony.” Compare Shogenji 2006:332: “[I]n justifying the epistemic subject’s trust in testimony the reductionist cannot cite other people’s perception and memory—for example, the reductionist cannot cite perception and memory of the person who provides the testimony. Only the epistemic subject’s own perception and memory are relevant to the justification of her trust in testimony.”

Van Cleve responds to this argument, however, by suggesting that corroboration of testimony is not inherently dependent on others; over the course of his life, Van Cleve says, he has verified a great number of instances of testimony—not only the existence of the Grand Canyon and the Taj Mahal, but also “thousands of more quotidian occurrences of finding beer in the fridge or a restroom down the hall on the right after being told where to look.” He concludes that our inductive base need not be weak: “[W]hat matters is not the proportion of testimonial beliefs I have checked, but the proportion of checks taken that have had positive results.” Van Cleve 2006:68.

Shogenji 2006 makes a unique defense of a conservative approach to testimonially-based beliefs. He argues that if Coady is right that we need to believe in the general reliability of testimony in order to interpret testimonial utterances—a Davidsonian argument that this article considers below—then if S has a non-testimonial basis for interpreting a statement in a particular way, S can likewise infer the general reliability of testimony from that basis. Shogenji says (at 339-340),

[B]y the time the epistemic subject is in possession of testimonial evidence by interpreting people’s utterances, her belief in the general credibility of their testimony is well supported. For, unless the hypothesis that testimony is generally credible is true, the epistemic subject is unable to interpret utterances and hence has no testimonial evidence. … The unintelligibility of testimony without general credibility is … not an objection to reductionism about testimonial justification, but a consequence of the dual role of the observation used for interpretation—the observation confirms the interpretation of utterances and the credibility of testimony at the same time. … [E]ven a young child’s trust in testimony can be justified by her own perception and memory. In order for people’s utterances to be testimonial evidence for her, the child must have interpreted the utterances, but the kind of experience that allows her to interpret the utterances is also the kind of experience that supports the general credibility of testimony.

Shogenji also argues that the ubiquity of testimonially-based beliefs—and therefore the ubiquity of reliance on the reliability of testimony—can be used to give greater confirmation for the reliability of testimony. Because the general reliability of testimony is implicated in so many of our beliefs, we have a large number of opportunities to add small bits of confirmation to the hypothesis that testimony is reliable. He says (at 343-344),

Beliefs based on testimony are part of the web of beliefs we regularly rely on when we form a variety of expectations. This means that the hypothesis that testimony is credible plays a crucial role when we form these expectations. As a result, even if we do not deliberately seek confirmation of the credibility hypothesis, it receives tacit confirmation whenever observation matches the expectations that are in part based on the credibility hypothesis. Even if the degree of tacit confirmation by a single observation is small, there are plenty of such observations. Their cumulative effect is substantial and should be sufficient for justifying our trust in testimony.

Interestingly, Shogenji does not argue that we should be more demanding of testimonially-based beliefs than of perceptually-based beliefs; he notes (at 345 n.15) that Shogenji 2000 “uses essentially the same reasoning as described here to show that the reliability of perception can be confirmed by the use of perception without circularity.”

What can the liberal say in response to such an argument? One response would be to abandon Coady’s Davidsonian argument that interpreting testimonial utterances requires an assumption that testimony is reliable. If that is not right—as liberals such as Graham and Plantinga have argued—then the possibility of interpretation is not enough to justify belief in the reliability of testimony.

Finally, even if the inductive base for testimonially-based beliefs is poor, the conservative can reply to this sort of argument by simply denying that we have very much testimonially-based justification or testimonially-based knowledge. Van Cleve 2006:68 takes this route for children, arguing that they do, in fact, lack epistemic justification for their testimonially-based beliefs: “Children … go through a credulous phase during which they believe without reason nearly everything they are told. As reductionists, however, we must hold that these beliefs are justified only in a pragmatic sense, not in an epistemic sense.”

ii. Analogies to Perception

Some liberals support lenient principles to govern testimonially-based beliefs on the basis of their great similarity to principles that many people believe govern perceptually-based beliefs.

For instance, Graham 2006:95ff. considers those who believe what he calls PER (“If S’s perceptual system represents an object as F (where F is a perceptible property), and this causes or sustains in the normal way S’s belief of x that it is F, then that confers justification on S’s belief that x is F”) and MEM (“If S seems to remember that [p] and this causes or sustains in the normal way S’s belief that [p], then that confers justification on S’s belief that [p]”), but who reject what he calls TEST (“If a subject S (seemingly) comprehends a (seeming) presentation-as-true by a (seeming) speaker that [p], and if that causes or sustains in the normal way S’s belief that [p], then that confers justification on S’s belief that [p]”). Graham then defends TEST against those who accept PER and MEM. He notes (at 101-102) that those who accept PER and MEM would already reject the idea that a difference in the degree of reliability should amount to a difference in epistemic kind, and would also already accept that perceptual or memorial beliefs can be direct, even though they can be defeated by other sorts of beliefs. He likewise argues (at 100) that the reasons to adopt PER, rather than seeing perceptual beliefs as inferential, are directly parallel to the reasons to adopt TEST as well.

Green 2006 argues that testimonially-, memorially-, and perceptually-based beliefs are on an epistemic par, in the sense that, over the universes of possible beliefs based on the three sources, the set of explanations of the epistemic status of those beliefs displays the same structure. (He excludes beliefs that cannot be perceptually-based, but could be testimonially- or memorially-based; we cannot literally perceive mathematical facts, but we can be told them, or remember them.) Green argues first that such parity is a more economical account of epistemic phenomena—and so an account more likely to be true—than accounts that distinguish sharply between the three sources. Second, he argues (at 218 ff.) that the epistemic parity of these sources follows from the epistemic innocence of certain transformations which will turn instances of testimonially-based beliefs into instances of beliefs based on the other two sources, or vice-versa—that is, the claim that such transformations preserve the structure of the explanation of epistemic status.

Turning perceptually-based beliefs into testimonially-based beliefs requires anthropomorphizing our sense faculties and environments—considering a possible world in which our sense faculties are monitored and operated by little persons who present messages to us about our environment, by causing perceptual sensations just like the ones in normal perceptually-based beliefs. Green suggests that the structure of the explanation for the epistemic status of such testimonially-based beliefs would have the same structure as the explanations for the epistemic status of perceptually-based beliefs before the transformation. The mere fact that a faculty for obtaining information is operated by a person, Green claims, should not make a difference in how that source of information produces justified beliefs and knowledge. The opposite transformation—from testimonially-based beliefs into perceptually-based beliefs—requires treating our testifier T as a machine, akin to, say, a telescope. This transformation would treat human beings as an environmental medium through which information about the world passes in complicated ways. Deception is possible when we get information from a testifier, but it is also possible when we get information from a telescope (for instance, if someone has put a fake picture on the end of it).

The conservative could respond to Green’s argument by claiming that these transformations are, in fact, not epistemically innocent. Anthropomorphizing our sense faculties would inherently introduce the element of human agency, and treating T as a perceptual device would remove it. As summarized above, however, Green argues that agency is already potentially at stake in cases of perception, for instance because of the possibility that someone else has substituted a fake object.

iii. Analogies to Memory

Several thinkers likewise draw analogies between testimonially-based beliefs and memorially-based ones. Dummett 1994, for instance, quoted above on the relationship between the T-side and S-side debates, suggests that memory and testimony are both merely means of preserving or transmitting knowledge, not of creating it, and that both are similarly direct, requiring no supporting beliefs. Schmitt 2006 argues that transindividual reasons—that is, reasons that T has, but which also count as reasons for S’s belief—are no more problematic than the transtemporal reasons at stake in memory—that is, reasons that S has at time 1, but which also count as reasons for S’s belief at time 2. Foley 2001 argues that trust in others, at stake in testimony, is no less justified than trust in oneself, at stake in memory.

As noted above, Green 2006 argues that testimony and memory are also on an epistemic par. Green’s method of transforming testimonially-based beliefs into memorially-based beliefs is to treat the testifier T as S’s epistemic agent, and then to apply the fiction of the law of agency, qui facit per alium, facit per se—“he who acts through another, acts himself.” If T’s earlier actions are treated as if they were actually S’s own actions, then the transfer of information from T to S will be the same sort of transfer of information that happens when, using memory, S at time 1 transfers information to S at time 2. Green’s claim is that this transformation keeps the structure of the explanation of epistemic status of the resulting belief the same. On the other hand, turning memorially-based beliefs into testimonially-based beliefs requires treating S at time 1 as a different person from S at time 2. If the earlier time slice is someone else, and we treat the recovery of information from a memory trace as the interpretation of a message from that person, then memorially-based beliefs are transformed into testimonially-based ones. Green’s claim is that that transformation should not create or preserve epistemic status, or affect the structure of its explanation.

As with the response to Green’s argument for an analogy between perception and testimony, the conservative could claim that there is something inherently different between relying on one’s own earlier efforts and relying on someone else’s; replacing “S at time 1” with “T,” or vice versa, inherently changes the structure of the explanation of beliefs’ epistemic status.

iv. Skepticism about Over-Intellectualization and Young Children

Another argument against demands on testimonially-based beliefs is that, even if those demands might be able to be satisfied by those who are particularly careful in considering earlier cases of confirmation, it is improper to place too many intellectual demands on people’s everyday beliefs. Graham 2006:100 puts it this way: “[E]ven if the reduction is possible, requiring it is overly demanding; the requirement to reduce hyper-intellectualizes testimonial justification.” Young children, for instance, lack the intellectual capacity to consider complicated issues regarding the reliability of their parents or others who give them testimonially-based beliefs, and so it is improper to place epistemic demands on them.

Lackey 2005 defends a conservative approach to testimony against the infants-and-young-children objection by considering whether a similar problem could afflict any approach to testimonially-based justification that includes a no-defeater condition. No one suggests that testimonially-based justification is indefeasible; rather, S is only justified on the basis of T’s testimony if S lacks a defeater for her belief that p. For instance, if T tells S that p, but S already believes that q and that if q then ~p, she cannot just add the belief that p, rendering her beliefs inconsistent. Defeaters can be standardly divided into doxastic, normative, and factual defeaters. Doxastic defeaters are like those in the case we just considered: other beliefs that S has that make it improper for her to believe p, or to accept testimony that p from T. Normative defeaters are other beliefs that S would have, if she performed her epistemic duties. Factual defeaters defeat S’s justification in virtue of being true. The standard example is the fake barn; if S just happens to see the one real barn amidst a countryside full of fakes, S’s belief about the barn is not justified, or at least does not count as knowledge. Similarly, if S just happens to meet T, the one reliable testifier in a sea of unreliable ones, then she has a factual defeater. Some epistemologists, though, are fake-barn-case skeptics, and think that these cases are not obviously cases where justification or knowledge fails.

Lackey’s argument is that if young children, or animals, are not capable of satisfying a positive-reasons demand on testimonially-based beliefs because they are not capable of appreciating reasons, then for the same reason they are likewise not capable of satisfying a no-defeater condition, either regarding normative or doxastic defeaters. Those who are not capable of understanding a reason for a belief presumably also cannot understand either a conflict in beliefs, as required by an appreciation of doxastic defeaters, or a failure of epistemic duty, as required by an appreciation of normative defeaters.

The liberal can resist Lackey’s argument in at least three ways. One way would be to deny that the existence of a no-defeaters condition requires a defeater-recognition capacity. It is true, this response would go, that young children must deal properly with any doxastic and normative defeaters in order to be justified, but young children simply lack such defeaters. Young children who lack the capacity to appreciate reasons or the resolution of conflicting claims lack the epistemic obligations presupposed by normative defeaters. They lack the ability to investigate for defeaters, but fortunately they also lack the duty to do so. This route, however, is unattractive to Lackey, because she thinks it quite clear that if young children are exposed to enough counterevidence for one of their beliefs, they become unjustified in holding that belief. The liberal might attempt to resist that intuition, however.

A second route for the liberal would be to retreat from the suggestion that children lack the capacity to appreciate reasons at all. Rather, he might insist that young children, while in principle capable of appreciating reasons or defeaters, have a particularly bad inductive base with respect to confirmed reports. It is not the cognitive incapacity of the child, but her evidentiary incapacity, that undermines the reasonableness of a demand for inductively-based reasons to believe T. All of the confirmed reports of a young child, for instance, are likely confined to a very small part of the world and to only a few testifiers. The leap to believe what her parents tell her about other subjects seems inductively very weak. This sort of response would dodge Lackey’s argument only by reconstruing the argument as a special form of the bad-inductive-base argument.

A third route for the liberal, taken in Goldberg 2008, would stress the role of reliable caretakers in shielding children from improper testimonially-based beliefs. While children themselves may not be able to appreciate the significance of defeating evidence, for instance, their parents can. Goldberg argues that the presence of such an external defeater-detection system is critical for testimonially-based knowledge in young children. Goldberg draws (at 29) the lesson he regards as radical: that “the factors in virtue of which a young child’s testimonial belief amounts to knowledge include information-processing that takes place in mind/brains other than that of the child herself.”

v. The Assurance View as a Basis for Lessened Demands on S

Moran 2005, Ross 1986, and Hinchman 2005 and 2007 argue that, because the testifier T has assumed responsibility for the truth of p, S’s responsibilities are necessarily lessened. In telling S that p, T is not offering S evidence that p, but instead asking S to trust him; and because the reception of testimony is inconsistent with S basing his belief on evidence, S’s responsibilities are lessened when he forms a testimonially-based belief. To trust T is to rely on his assurance, not to assume responsibility for the truth of p oneself. Hinchman 2007:3 summarizes the argument: “[H]ow could [T] presume to provide this warrant [for S’s belief that p]? One way you could provide it is by presenting yourself to A as a reliable gauge of the truth. … The proposal … simply leaves out the act of assurance. Assuring [S] that p isn’t merely asserting that p with the thought that you thereby give [S] evidence for p, since you’re such a reliable asserter (or believer). That formula omits the most basic respect in which you address people, converse with people—inviting them to believe you, not merely what you say.”

However, Goldberg 2006 argues that both reductionists and non-reductionists—both liberals and conservatives, in the terminology of this article—can subscribe to a buck-passing principle, very similar to the assumption-of-responsibility view. Even if T has assumed the responsibility for certain epistemic desiderata regarding p, S may have very demanding responsibilities of his own. For instance, S may have an epistemic duty to select those most worthy of buck-passing, much as a client has a duty to select a proper lawyer, even though the client does not know as much about the law as the lawyers he selects. On Green 2006’s suggestion that T is S’s epistemic agent or employee, it is consistent to say both (a) that T takes responsibility for handling particular areas of S’s epistemic business, and (b) that S has responsibilities to select T properly—just as employees assume responsibility for particular functions of their employers’ business, but employers still retain critical responsibilities to select employees well. Weiner 2003b has similarly argued that the view of testimony as an assurance does not contradict a requirement that S have evidence for his testimonially-based beliefs.

d. A Priori Reasons in Support of Testimonially-Based Beliefs

i. Coady’s Davidsonian Argument from the Comprehensibility of Testimony

Some testimonial liberals contend that there is good a priori reason to believe that testimonially-based beliefs are justified. Coady 1992 argues, building on Donald Davidson’s views about radical interpretation, that we must presuppose the reliability of testifiers in order to interpret their utterances. If we were to encounter a group of Martians interacting with each other using bits of language in response to external stimuli, we could not interpret the Martians’ language unless we were to assume that the bits of language that correlate with particular external stimuli are bits of language that refer to those stimuli. Unless we assume that the language used by the Martians generally tracks the world in which they live, we could not begin to interpret their utterances. Hence testimony, in order to be interpreted, must be generally reliable.

Graham 2000c argues, however, that it is possible for testifiers to be generally unreliable even though they successfully interpret each other’s statements. He imagines (at 702ff.) a group of people who are both honest and good at interpreting each other’s utterances, but who, because of perceptual failures or failures in memory, have mostly false beliefs about the world outside their immediate perceptual environment. These people could interpret utterances perfectly well, but would still be unreliable testifiers. (For a response to a similar argument from Davidson, see Plantinga 1993:80f.)

ii. Burge’s Argument from Intelligible Presentation

Burge 1993 argues that S is a priori entitled to accept T’s statement, because it is, on its face, intelligible and presented as true. He summarizes his argument (at 472–473):

We are a priori entitled to accept something that is prima facie intelligible and presented as true. For prima facie intelligible propositional contents prima facie presented as true bear an a priori prima facie conceptual relation to a rational source of true presentations-as-true: Intelligible propositional expressions presuppose rational abilities and entitlement; so intelligible presentations-as-true come prima facie backed by a rational source or resource of reason; and both the content of intelligible propositional presentations-as-true and the prima facie rationality of their source indicate a prima facie source of truth. Intelligible affirmation is the face of reason; reason is a guide to truth. We are a priori prima facie entitled to take intelligible affirmation at face value.

One response to Burge’s argument is to suggest that he skips over the assumption that T’s rational faculties are functioning properly. If S encounters a statement by T and sees that it is intelligible, S may be entitled to think that it came from a process that is geared toward presenting true statements; part of what it is to understand that something is a piece of testimony is to see that it has malfunctioned if it turns out to be false, or to have been unreliably produced. But the critic can ask why, without more, we should be entitled to assume that this process has turned out well. Absent the assumption that T is in an environment conducive to the proper function of T’s truth-seeking processes—an assumption that is false in many possible worlds—it would seem that S should not be entitled to rely on T’s word simply because it is the presentation of a rational source.

Burge might respond that the worlds in which T’s truth-seeking faculties are not functioning properly are worlds that we may ignore, because they are not relevant alternatives (like, for instance, the brain-in-a-vat worlds that non-skeptics feel entitled to ignore). However, Burge’s argument does not depend on whether we are in a possible world where testifiers tend to be reliable. It would seem to work just as well in worlds where they are not. But it does not seem plausible that everyone in any possible world is entitled to believe that they are in worlds where testifiers are usually reliable.

iii. Graham’s A Priori Necessary Conceptual Intuitions

Graham 2006 argues that TEST, his principle that T’s statement supplies pro tanto justification, is an a priori necessary conceptual truth, even though testifiers are not reliable in all possible worlds. Such a view of testimony fits with Graham’s general metaepistemological view that epistemic principles should be necessary a priori conceptual truths about the proper aim of our beliefs. However, Plantinga 1993:80 criticizes the suggestion that testimony is necessarily evidence. He argues, in accord with Reid’s statements about the provisions of “Nature,” that testimony supplies evidence only because the contingent human design plan—suited to an environment in which testifiers generally speak the truth—provides that properly functioning human beings trust statements from others.

3. Testifier (T)-Side Questions: Testimony and the Preservation of Knowledge

a. Background

For S to come to know that p by relying on T’s testimony, S must satisfy whatever internal conditions there are for knowledge, but this is not enough. The proposition p must actually be true, of course, but T must also be properly connected to the fact that p; as Gettier 1963 teaches, there is also some sort of environmental condition on our testifier T in order for S to know. Several authors give a relatively simple answer to the environmental condition: T must, himself, know that p. Others propose similar conditions, such as requiring that someone know that p on a non-testimonial basis. Lackey 2003 gives an extensive list of such thinkers, whom we might call testimonial knowledge-preservationists. The discussion, like much of the post-Gettier literature, revolves around counterexamples and the principles intended to cover them.

If S’s testimonially-based knowledge that p requires T’s (or someone’s) knowledge that p, it would seem that testimony is “a second-class citizen of the epistemic republic,” as Plantinga 1993:87 puts it, because, unlike perception, testimony is not a source of knowledge for the epistemic community as a whole; it is only a way of spreading knowledge around that community. Much as a political libertarian might see government as a tool useful only for redistributing wealth, but not creating it, knowledge-preservationists might see testimony as a tool useful only for spreading knowledge, but not creating it.

In general, someone attracted to knowledge-preservationism—the thesis that S’s testimonially-based knowledge that p requires T to know that p—can resist counterexamples in three ways. First, he can deny that, as described, S really knows that p (the “Ignorant-S” response). Second, he can claim that T, as described, really does know that p (the “Knowing-T” response). Third, he can deny that S’s belief that p is really based on T’s testimony that p (the “Not-Testimony” response). More generally, where a different account of the testimonial environmental condition is at stake, and a counterexample claims to find an S who knows that p, but in which that environmental condition fails, the defender of the account has the same three options: deny that S knows, argue that the environmental condition is actually met, or deny that the case is the proper sort of testimonially-based belief. If none of the responses is available, of course, the counterexample is effective, and the environmental condition needs revision.

If knowledge by T is not the key environmental desideratum for S’s knowledge, what is? Several thinkers propose substituting a focus on information. Goldberg 2001:526 argues that his example should convince epistemologists of testimony to “widen our scope of interest from an exclusive focus on content-preserving cases of [testimonially-based] belief and knowledge to include all cases in which information is conveyed in a testimonially-based way from speaker to hearer.” The alternative account of the testimonial environmental desideratum, then, is that T possess the information that p. (Goldberg’s 2005 counterexamples might, however, undermine even that account.) Graham 2000b:365 takes a similar view, explaining it at length: “According to the model I prefer, knowledge is not transferred through communication, rather information is conveyed.” Green 2006:47ff. follows Graham and suggests that positional warrant is the key environmental desideratum: information sufficient to support a belief that p, if a doxastic subject were present.

b. The Cases

i. Untransmitted Defeaters

Lackey 1999 presents cases in which T does not know that p, because either T has personal doubts about p, or because T should have doubts about p, but in which T still reliably passes along the information that p to S. T’s defeaters are not necessarily transmitted to S.

Her first example is a biology teacher who does not believe her lesson about evolution, but passes it on reliably because the school board requires her to do so. Because the children reliably believe their lesson, Lackey says, they know it, despite the fact that their testifier does not. Both the Ignorant-S and Not-Testimony responses have some plausibility here. Audi 2006:29 suggests the Ignorant-S response: “If … [the students] simply take [the teacher’s] word, they are taking the word of someone who will deceive them when job retention requires it…. It is highly doubtful that this kind of testimonial origin would be an adequate basis of knowledge.” Schoolchildren who discovered that their teacher did not actually believe her own lesson would presumably be startled and unsettled. They perhaps relied on a premise like “My teacher knows the truth about this lesson,” and while it might be possible to get knowledge by reasoning on the basis of a falsehood, this is not obviously such a case. Teachers depend on their students viewing them as trustworthy sources of information. A teacher who refuses to believe her own lesson is like a host who refuses to eat the meal he serves a guest. “If the teacher doesn’t believe the lesson,” a student could reason, “why should I?” To attempt a Not-Testimony response—perhaps termed in this case a Not-Testimony-From-T response—we might recharacterize the case as testimony from the school board, rather than the teacher. A school teacher who tells students what she doesn’t believe isn’t really testifying, the suggestion might go; she is merely acting as a conduit for the real testifier, the school board, who does in fact know the lesson.

Lackey has defended her intuitions in the biology teacher case by suggesting that, even though T does not know or believe that p, it is still perfectly proper for her to assert that p, disputing the account of knowledge as the norm of assertion contained in Williamson 2000. Because the reliability of her lessons means that the teacher is behaving properly in telling her students that p, there is likewise nothing epistemically amiss in her students then believing that p on her say-so. A full discussion of whether knowledge is the norm of assertion, however, is not possible here.

Lackey’s second example is someone with matching misperceptions and pathological lies. For instance, whenever she sees a zebra, she thinks it is an elephant, but she has a pathological urge to tell people that what she thinks are elephants are zebras, and so on. The Ignorant-S response seems possible; it is not at all obvious that relying on someone like that is a way to gain knowledge. Such a T seems close to insane, and even if someone who is insane happens to be a reliable speaker about what she has seen, S would have to know that in order to gain knowledge from her statements. A similar response seems possible for Lackey’s third and fourth examples, where T is gripped by skeptical worries or by the belief that her perceptual abilities are faulty. If T is really and seriously worried about whether she is a brain in a vat, or has radically unreliable powers of perception, such that we would conclude that she does not know everyday things about her environment, then it is hard to see how S could come to know those things by relying on her say-so. Lackey’s last example is someone who is presented with evidence that her powers of perception are radically unreliable, but who retains her perceptually-based beliefs anyway. In response, the knowledge-preservationist could argue that defeating evidence serious enough to make T’s belief that p improper would, it seems, be serious enough to make T’s testimony that p similarly improper, and likewise S’s reliance on that testimony. (For a defense of these suggested responses to Lackey’s examples, based on the idea that S takes T as his agent, and so an S who trusts a relevantly misbehaving T should be charged with T’s misbehavior, see Green 2006:137ff.)

Graham 2000b:379ff. promotes an example similar to Lackey’s misperceptions-and-pathological-lies case. T has been raised in an environment where the word “blue” refers to the color red, “red” to blue, “green” to yellow, and “yellow” to green. Scientists aware of T’s malady install spectrum-reversing glasses on T, so that his testimony now comes out right. Unlike someone who looks at a zebra, thinks it is an elephant, but has a pathological desire to call it a zebra, we might think such a T is sane. Still, there is some reason to think that the Ignorant-S response may work. If S were to learn that when T looks at the sky, it seems red to him, S would be very alarmed, and would not likely trust what T tells him about the colors of nearby objects. That fact suggests that S now has a defeater for his belief based on T’s testimony; that belief implicitly relies on the false premise that T is using words and perceiving colors normally. The fact that there are two large errors in S’s assumptions, albeit matching errors that cause T’s color reports to come out true, makes the status of S’s knowledge shaky.

ii. Zombie Testifiers

Green 2006:27ff. argues that T can testify to S, and support knowledge, even if T entirely lacks phenomenology, and so is a zombie, or a machine. For instance, we might receive a phone call from our credit card company noting suspicious behavior in our account, but it could be a computer-generated voice speaking to us. (In a possible world without phishing scams, we might also receive such a message through email.) If beliefs require conscious phenomenology, such testifiers would know nothing, and so would not know that p. Possible cases of machine testimony might be phenomenologically indistinguishable from normal cases of testimonially-based belief. The Ignorant-S response, denying that such beliefs would be knowledge, seems clearly closed; we can surely get knowledge from a machine. The Knowing-T response, by affirming knowledge in T, would require knowledge without any phenomenal beliefs, which seems very implausible. The Not-Testimony response is the most promising route for the knowledge-preservationist: denying that beliefs based on the testimony of machines would really be “testimonially-based belief.” Machines that cannot know things likewise cannot perform speech acts, and testimony is a speech act.

In defense of his view that machine testimony really is testimony, Green (at 36ff.) relies on his intuition that if two beliefs (a) have the same epistemic status, (b) have the same contents, (c) are the result of the exercise of the same cognitive ability by S, and (d) have the same phenomenology for S, then the two beliefs should be regarded by the epistemologist as similarly based; we should regard either both, or neither, as testimonially-based. “Testimonially-based belief” is, on this view, an epistemic tool, and describing the full range of epistemic phenomena would be unnecessarily duplicative if we were required to use two different terms or concepts to cover such similar beliefs. Further, epistemic principles like those defended by Graham 2006:95 would cover zombies or machines. Graham includes broad conditions in TEST: “If a subject S (seemingly) comprehends a (seeming) presentation-as-true by a (seeming) speaker that [p] ….” Green at 41 also argues that beliefs that come from the linguistic output of machines need to be categorized in some way, and using a category other than “testimonially-based belief” seems to multiply epistemic categories beyond necessity. On the other hand, the intuition that testimony is a type of speech act, requiring that T be conscious, is very strong in some people. To the extent that such thinkers would retain “testimonially-based belief” as an epistemic concept, they would reach beyond epistemic status, content, cognitive ability, and phenomenology to determine that concept’s application.

iii. High-Stakes T, Low-Stakes S

The interest-sensitive approaches to knowledge of Hawthorne 2004 and Stanley 2005 suggest another way in which S might know, but T would not. For instance, T’s life might depend on getting to the bank tomorrow: the mob wants its money, won’t take a check, and will kill him if it doesn’t get it by the Saturday deadline. By Hawthorne and Stanley’s lights, T might not know that the bank is open tomorrow, even if he has a fairly clear recollection that banks in this town are open on Saturdays, because knowledge requires enough certainty to satisfy a particular subject’s needs. But S, who does not owe the mob any money, but who would like to have enough cash in his pocket to buy his kids an ice-cream cone in the park on Saturday afternoon, can make do with less certainty than can T. If T tells S that the bank is open tomorrow, then, assuming other factors work out, T could presumably pass along his between-ice-cream-cone-and-mob-repayment-level certainty to S. That amount of certainty would be enough for S to come to know, though it wasn’t enough for T. Put abstractly, T might properly tell S that p, knowing that, given S’s stakes, S only needs a relatively low amount of Grahamian pro tanto justification, or relatively little Plantingian warrant, in order to know, even though T himself might be in a much higher-stakes situation, and so would not have enough justification to know that p. On this sort of view, T may assert that p if T has enough certainty for his audience’s needs, even if that certainty is not enough for T’s own. (See Green 2006:142.)

Denying the Hawthorne-Stanley interest-sensitive view of knowledge is, of course, one easy way to resist this sort of counterexample. Another way to defend knowledge-preservationism against such an attack is to insist that asserter’s knowledge is the norm of assertion: T should only assert that p if he has enough certainty for T’s own needs. The idea might be that S, hearing T say that p, will assume that T has enough evidence for himself, and would normally be shocked and disturbed were he to learn that T thought that his evidence was insufficient for T’s own purposes, but passed along the statement that p anyway. Likewise, we might be attracted to the intuition that a low-stakes T, with enough certainty that p for his own purposes, should have every right to assert that p, no matter the audience (for instance, by asserting that p on the internet, where anyone might read it, including a high-stakes S).

iv. False Testimony

Goldberg 2001 presents a case where T testifies falsely, but S still gains testimonially-based knowledge. T tells S that q: “T saw Jones wearing a pink shirt last night at the party.” But S knows that Jones was out of town last night, and so decides that T must have mistaken someone else for Jones. So S instead believes p: “T saw someone wearing a pink shirt last night at the party.”

The knowledge-preservationist might respond with a combination of the Knowing-T and Not-Testimony responses. T does, of course, also believe p, that he saw someone with a pink shirt. Did he tell S that? If so, then T told S that p, and spoke truly and knowingly. If, however, we regard T as not telling S that p, but only that q, it seems plausible to say that S actually inferred that p from T’s testimony that q (and in a manner unlike the way that conservatives, discussed above, argue that inference is involved in ordinary testimonially-based beliefs). So the knowledge-preservationist can argue that either T knew and testified that p, in which case the Knowing-T response defeats the example, or else T didn’t tell S that p, in which case the Not-Testimony response defeats it.

v. Reconceptualization from T to S

Green 2006:30 discusses an instance where T conceptualizes the object of belief differently than does S. T tells S that some object m is F, not knowing that object m is the same as object n. S knows that m is n and does not distinguish the two, and so believes that n is F. But T didn’t know that. For instance, Lois Lane knows that Superman is Clark Kent, but Jimmy Olsen does not. Jimmy tells Lois that Clark’s favorite ice cream flavor is chocolate, and Lois now knows Superman’s favorite ice cream flavor, which Jimmy did not. We might stipulate that Lois does not know that Jimmy distinguishes Clark and Superman; Jimmy tells her something about Clark, and Lois just assimilates that information into a single “Clark/Superman” file.

The knowledge-preservationist might argue, as in the reply to Goldberg’s case above, that S’s belief is either inferentially-based, or that T somehow did tell S that n is F. However, it seems plain that T, not knowing that n is m, or perhaps not knowing about n at all, could not know that n is F—Jimmy did not know that Clark was Superman, and he wasn’t talking about Superman. So the Knowing-T response seems blocked. Could this case be seen as inferentially-based, rather than testimonially-based? Here, unlike in Goldberg’s case, S may not even be conscious that he is conceiving of the object differently than T. In the Jones-wasn’t-there case, though, S explicitly modifies T’s statement that q, because he knows why p is the more reasonable belief to form. Because differences between how T and S conceptualize the object of their beliefs may not be noticed, there is stronger ground for saying that the presence of such a difference would not prevent S’s beliefs from being testimonially-based. However, if S’s belief that m is F is receiving epistemic benefits from his background knowledge that n is m, then there may be some plausibility in saying that S’s belief is somehow based in part on that knowledge, even if it is non-inferential. Lois is utilizing, even unwittingly and unconsciously, her knowledge that Clark is Superman. (Cf. Heck 1995:99: “[O]ne can not come to know things about George Orwell from assertions containing ‘Eric Blair.’”)

vi. Unreliable Testimony

Goldberg 2005 presents a case where even unreliable testimony produces testimonially-based knowledge. T sees evidence that p which is usually misleading, but is luckily not misleading on this occasion. In Goldberg’s example, the evidence is an opaque carton of milk which A, an eccentric writer, usually replaces each morning with an empty carton, but A forgot this morning; p is “there is milk in the fridge.” T tells S that p. A, an observer of the testimony, is nearby, and would have corrected T’s testimony had it been incorrect. S’s belief is, Goldberg thinks, safe, because A’s presence would have prevented T’s false testimony from being believed, but T’s testimony itself is unsafe, because it is based on evidence that, in the circumstances, is usually misleading.

The Not-Testimony response is an option here. Even though S’s belief is formed in response to T telling him that p, an essential part of S’s belief-sustaining environment is A’s safety-guaranteeing presence. Goldberg (at 308) gives his defense of S’s knowledge by considering a case in which S knows about A’s role. It seems quite plausible that in that case, S is not relying solely on T, but on the T-in-A’s-presence hybrid. In the case where S does not know that A is guaranteeing the reliability of his belief that p, Goldberg still thinks that S knows that p—A’s guaranteeing function alone, and not S’s explicit reliance on that function, is enough. It might seem a bit odd to suggest that S’s belief is not testimonially-based, when S herself has no other conscious basis for her belief than the fact that T told her that p. However, if, unknown to S, S’s belief receives epistemic benefits from A’s guaranteeing function, it also seems possible for S’s belief to be differently based because of A’s guaranteeing function. The actual reason why S has the belief she has is partly T, and partly A. If we understand the case this way, Goldberg’s case is one where beliefs partly based on defective testimony can amount to knowledge, precisely because the other part of the basis of that belief cures the defect in the testimony.

Knowing T—the response that T herself knows that p, and in fact that her testimony is reliable—is also a possibility, if we pay close attention to T’s belief and testimony over time. Suppose T tells S that p at time t, and that it would take A at least time Δt to correct T’s testimony, had it in fact been false. If S believes T straightaway, then at time t, before A’s correction mechanism could have worked in any event, it does not seem right to say that S’s belief is safe. Only after A has had a chance to correct the testimony, but has not, would S’s belief amount to knowledge. S’s belief at time t+Δt may be knowledge, but not his belief at time t. But what about T? T’s belief that p is unreliable at time t, and so is his testimony that p, because it was based on evidence that is usually misleading. But at time t+Δt, T has as much right as S to rely on A’s failure to correct the testimony that p. So at time t+Δt, T also knows that p. We could say the very same thing about T’s testimony: it is unsafe and unreliable at time t, but at time t+Δt, it is itself safe and reliable—or at least as safe and reliable as S’s belief based upon it. In other words, T and S are ignorant, and T’s testimony unreliable, at time t, but T and S know that p, and T’s testimony is reliable, at time t+Δt.

Goldberg 2007:322ff. discusses a similar case in which S receives clues about T’s reliability in addition to T’s testimony itself. Due to wishful thinking, T always believes that the Yankees have won, and always says so. Sometimes, however, the Yankees do win, and T reads so in the newspaper. When T’s belief is based on wishful thinking, he displays tell-tale signs, such as failing to look S in the eye, which would lead S not to believe him. When T’s belief is based on genuine information that the Yankees won, these signs are absent, and S would believe him. As a result, Goldberg says that S’s belief in the Yankees-actually-won case is safe and should count as knowledge, even though T’s belief is not. The Not-Testimony response is again possible: S’s belief is based not on T’s testimony alone, but on the signs that would indicate unreliability.

Graham 2000b:371ff. discusses a similar case. T has trouble distinguishing two twins, A and B, but S does not. T tells S that A knocked over a vase, and S knows that B could not have done it. T’s testimony is unreliable, because T cannot tell A from B, and B might as easily have knocked over the vase. The Not-Testimony response is somewhat plausible here: S’s belief is not based simply on T’s testimony, but also on his knowledge that B did not knock over the vase. As with Goldberg’s case, S may not be aware of the fact that T is unreliable, and so may not be aware of the contribution of S’s additional knowledge about B in sustaining S’s belief about A knocking over the vase. But also as in Goldberg’s case, there is some reason to think that if an additional source provides epistemic benefits to S’s belief, it can also make a difference in the basis for S’s belief, albeit a difference of which S may be unaware.

4. Some Brief Notes on Other Issues

As noted above, the S-side and T-side questions are far from an exhaustive map of the important issues in the epistemology of testimony. This section does not give a full map of other issues, but notes two particularly prominent ones.

a. Connections between S-side and T-side issues

One interesting issue is the extent to which the two main issues discussed above are related. Some philosophers connect their views on the S-side and T-side questions, and they do so in both directions. For instance, Fricker 2006b:603 argues that knowledge-preservationism regarding testimonial knowledge fits best with a relatively demanding approach to testimonial justification in which S has a second-order belief about T’s knowledge:

When the hearer [S] … believes [T] because she takes his speech at face value, as an expression of knowledge, then … [S]’s belief in what she is told is grounded in her belief that T knows what he asserted. … Several writers have endorsed the principle that a recipient of testimony can come to know what is testified to only if the testifier knows whereof she speaks. In my account this fact is … derived from a description of the speech act of telling….

On the other hand, Dummett 1994:264 suggests that knowledge-preservationism fits best with a less demanding approach, because it suggests a strong analogy with memory:

In the case of testimony … if the concept of knowledge is to be of any use at all, and if we are to be held to know anything resembling the body of truths we normally take ourselves to know, the non-inferential character of our acceptance of what others tell us must be acknowledged as an epistemological principle, rather than a mere psychological phenomenon. Testimony should not be regarded as a source, and still less as a ground, for knowledge: it is the transmission from one individual to another of knowledge acquired by whatever means.

Among thinkers who have considered both issues in detail, all four possible sorts of view are represented.

                                                  Conditions on Testifier for Testimonially-Based Knowledge (T-side issues)
                                                  Relatively more demanding          Relatively less demanding
                                                  (Knowledge-Preservationism)        (Anti-Knowledge-Preservationism)
Conditions on Recipient for
Testimonially-Based Justification
(S-side issues)
  Relatively more demanding (Reductionism)        Audi
  Relatively less demanding (Anti-Reductionism)   Burge

b. The Nature of Testimony

An extensive literature exists on the general nature of the epistemic relationship between the testifier T and our epistemic subject S. For instance, Reid 1785 says that testimony is distinguished by S relying on T’s authority for the proposition that p. Goldberg 2006 says that forming a testimonially-based belief allows S (in the right conditions) to “pass the epistemic buck” to T. Moran 2006, Watson 2004, Hinchman 2007, Ross 1986, Fried 1978, and Austin 1946 all promote variants of the view that in testifying, T is offering an assurance to S that p is true, akin to a promise. Schmitt 2006 says that testimonially-based beliefs involve “transindividual reasons,” such that T’s initial reasons are transferred to S, though S may not comprehend what they are. (Related to Schmitt’s view on this issue is the large question, unfortunately beyond the scope of this article at this time, of whether testimony requires an irreducibly social account of epistemology. For an introduction to some of these issues, see the articles in Schmitt 1994.) Green 2006 says that testimonial relationships are a form of epistemic agency, such that T’s actions on S’s behalf should be considered the action of S’s agent, and so subject to the legal maxim qui facit per alium, facit per se (he who acts through another acts himself).

One issue is whether these views really compete with one another. These characterizations might conceivably all be true: in testifying, T might be giving an assurance, thereby offering to serve as an epistemic agent, thereby transferring his reasons to S, and allowing S to rely on T’s authority and pass the epistemic buck to him.

Related to the general characterization of the testimonial link between T and S is the question of what counts as “testimony.” For instance, Graham 1997 defends a relatively broad characterization of testimony. He argues that T testifies if his statement that p is offered as evidence that p. He criticizes Coady 1992, who holds that T testifies only if he actually has the relevant competence and T’s statement that p is directed to those in need of evidence, for whom p is relevant to some disputed or unresolved question. Lackey 2006b defends a hybrid view of testimony, distinguishing “hearer testimony” from “speaker testimony.” The former takes place if S reasonably takes T’s act of communication as conveying the information that p in virtue of the communicable content of that act, while the latter takes place if T reasonably intends to convey the information that p in virtue of the communicable content of an act of communication.

5. References and Further Reading

  • Adler, Jonathan E., 1994. “Testimony, Trust, Knowing,” Journal of Philosophy 91:264-75.
  • Adler, Jonathan E., 2002. Belief’s Own Ethics. Cambridge: MIT Press.
  • Audi, Robert, 1997. “The Place of Testimony in the Fabric of Knowledge and Justification,” American Philosophical Quarterly 34:405-22.
  • Audi, Robert, 2002. “The Sources of Belief,” in Paul Moser, ed., Oxford Handbook of Epistemology. Oxford: Oxford University Press.
  • Audi, Robert, 2004. “The A Priori Authority of Testimony,” Philosophical Issues 14:18-34.
  • Audi, Robert, 2006. “Testimony, Credulity, and Veracity,” in Lackey and Sosa 2006.
  • Austin, J.L., 1946. “Other Minds,” in Philosophical Papers, 3rd ed., 1979. Oxford: Oxford University Press.
  • Bergmann, Michael, 2006a. “BonJour’s Dilemma,” Philosophical Studies 131:679-693.
  • Bergmann, Michael, 2006b. Justification Without Awareness: A Defense of Epistemic Externalism. Oxford: Oxford University Press.
  • BonJour, Laurence, 1980. “Externalist Theories of Empirical Knowledge,” Midwest Studies in Philosophy 5:53-73.
  • BonJour, Laurence, 2003. “A Version of Internalist Foundationalism,” in Laurence BonJour and Ernest Sosa, Epistemic Justification: Internalism vs. Externalism, Foundations vs. Virtues. Blackwell Publishing.
  • Burge, Tyler, 1993. “Content Preservation.” Philosophical Review 102:457-488.
  • Burge, Tyler, 1997. “Interlocution, Perception, Memory,” Philosophical Studies 86:21-47.
  • Burge, Tyler, 1999. “Comprehension and Interpretation,” in L. Hahn, ed., The Philosophy of Donald Davidson. LaSalle: Open Court.
  • Coady, C.A.J., 1973. “Testimony and Observation.” American Philosophical Quarterly 10:149-155.
  • Coady, C.A.J., 1992. Testimony: A Philosophical Study. Oxford: Clarendon Press.
  • Coady, C.A.J., 1994. “Testimony, Observation, and ‘Autonomous Knowledge’,” in Matilal and Chakrabarti 1994.
  • Dummett, Michael, 1994. “Testimony and Memory,” in Matilal and Chakrabarti 1994.
  • Evans, Gareth, 1982. The Varieties of Reference. Oxford: Clarendon Press.
  • Faulkner, Paul, 2000. “The Social Character of Testimonial Knowledge,” Journal of Philosophy 97:581-601.
  • Foley, Richard, 1994. “Egoism in Epistemology,” in Frederick F. Schmitt, Socializing Epistemology: The Social Dimensions of Knowledge. Lanham: Rowman and Littlefield.
  • Foley, Richard, 2001. Intellectual Trust in Oneself and Others. Cambridge: Cambridge University Press.
  • Fricker, Elizabeth, 1987. “The Epistemology of Testimony,” Proceedings of the Aristotelian Society Supplement 61:57-83.
  • Fricker, Elizabeth, 1994. “Against Gullibility,” in Matilal and Chakrabarti 1994.
  • Fricker, Elizabeth, 1995. “Telling and Trusting: Reductionism and Anti-Reductionism in the Epistemology of Testimony,” Mind 104:393-411 (critical notice of Coady 1992).
  • Fricker, Elizabeth, 2002. “Trusting Others in the Sciences: a priori or Empirical Warrant?”, Studies in History and Philosophy of Science 33:373-83.
  • Fricker, Elizabeth, 2004. “Testimony: Knowing Through Being Told,” in I. Niiniluoto, Matti Sintonen, and J. Wolenski, eds., Handbook of Epistemology. New York: Springer.
  • Fricker, Elizabeth, 2006a. “Testimony and Epistemic Autonomy,” in Lackey and Sosa 2006.
  • Fricker, Elizabeth, 2006b. “Second-Hand Knowledge.” Philosophy and Phenomenological Research 73:592-618.
  • Fricker, Elizabeth, 2006c. “Varieties of Anti-Reductionism About Testimony—A Reply to Goldberg and Henderson,” Philosophy and Phenomenological Research 72:618-28.
  • Gettier, Edmund, 1963. “Is Justified True Belief Knowledge?” Analysis 23:121-123.
  • Goldberg, Sanford, 2001. “Testimonially Based Knowledge From False Testimony.” The Philosophical Quarterly 51:512-526.
  • Goldberg, Sanford, 2005. “Testimonial Knowledge Through Unsafe Testimony.” Analysis 65:302-311.
  • Goldberg, Sanford, 2006. “Reductionism and the Distinctiveness of Testimonial Knowledge,” in Lackey and Sosa 2006.
  • Goldberg, Sanford, 2007. “How Lucky Can You Get?” Synthese 158:315-327.
  • Goldberg, Sanford, 2008. “Testimonial Knowledge in Early Childhood, Revisited.” Philosophy and Phenomenological Research 76:1-36.
  • Goldberg, Sanford, and Henderson, David, 2005. “Monitoring and Anti-Reductionism in the Epistemology of Testimony,” Philosophy and Phenomenological Research 72:600-17.
  • Goldman, Alvin, 1999. Knowledge in a Social World. Oxford: Clarendon Press.
  • Graham, Peter J., 1997. “What is Testimony?,” The Philosophical Quarterly 47: 227-232.
  • Graham, Peter J., 2000a. “Transferring Knowledge,” Noûs 34:131–152.
  • Graham, Peter J., 2000b. “Conveying Information,” Synthese 123:365-392.
  • Graham, Peter J., 2000c. “The Reliability of Testimony,” Philosophy and Phenomenological Research 61:695-709.
  • Graham, Peter J., 2004. “Metaphysical Libertarianism and the Epistemology of Testimony,” American Philosophical Quarterly 41:37-50.
  • Graham, Peter J., 2006. “Liberal Fundamentalism and Its Rivals,” in Lackey and Sosa 2006.
    • Graham 2006:93 gives similar, but not identical, lists of supporters of direct and non-direct views of testimony. Graham lists as supporting a direct view Burge 1993, 1997, and 1999, Coady 1973 and 1992, Dummett 1994, Goldberg 2006, McDowell 1994, Quinton 1973, Reid 1764, Ross 1986, Rysiew 2000, Stevenson 1993, Strawson 1994, and Weiner 2003a. Graham lists as supporting a non-direct view Adler 2002, Audi 1997, 2002, 2004, and 2006, Hume 1739, Kusch 2002, Lackey 2003 and 2006, Lehrer 1994, Lyons 1997, Faulkner 2000, Fricker 1987, 1994, 1995, 2002, and 2006a, and Root 1998 and 2001.
  • Green, Christopher R., 2006. The Epistemic Parity of Testimony, Memory, and Perception. Ph.D. dissertation, University of Notre Dame.
  • Green, Christopher R., 2007. “Suing One’s Sense Faculties for Fraud: ‘Justifiable Reliance’ in the Law as a Clue to Epistemic Justification,” Philosophical Papers 36:49-90.
  • Hardwig, John, 1985. “Epistemic Dependence,” Journal of Philosophy 82:335-49.
  • Hardwig, John, 1991. “The Role of Trust in Knowledge,” Journal of Philosophy 88:693-708.
  • Hawthorne, John, 2004. Knowledge and Lotteries. Oxford: Oxford University Press.
  • Heck, Richard, 1995. “The Sense of Communication.” Mind 104:79-106.
  • Hinchman, Edward, 2005. “Telling as Inviting to Trust,” Philosophy and Phenomenological Research 70:562-87.
  • Hinchman, Edward, 2007. “The Assurance of Warrant.” Unpublished manuscript.
  • Hume, David, 1739. A Treatise of Human Nature. 1888 edition, L.A. Selby-Bigge, ed., Oxford: Clarendon Press.
  • Hume, David, 1748. An Enquiry Concerning Human Understanding. 1977 edition, Indianapolis: Hackett Publishing Company.
  • Insole, Christopher J., 2000. “Seeing Off the Local Threat to Irreducible Knowledge by Testimony.” Philosophical Quarterly 50:44-56.
  • Kusch, Martin, 2002. Knowledge by Agreement. Oxford: Oxford University Press.
  • Lackey, Jennifer, 1999. “Testimonial Knowledge and Transmission,” The Philosophical Quarterly 49:471-490.
  • Lackey, Jennifer, 2003. “A Minimal Expression of Non-Reductionism in the Epistemology of Testimony,” Noûs 37:706-23.
  • Lackey, Jennifer, 2005. “Testimony and the Infant/Child Objection,” Philosophical Studies 126:163-90.
  • Lackey, Jennifer, 2006a. “It Takes Two to Tango: Beyond Reductionism and Non-Reductionism in the Epistemology of Testimony,” in Lackey and Sosa 2006.
  • Lackey, Jennifer, 2006b. “The Nature of Testimony,” Pacific Philosophical Quarterly 87:177-97.
  • Lackey, Jennifer, 2006c. “Learning From Words.” Philosophy and Phenomenological Research 73:77-101.
  • Lackey, Jennifer, and Ernest Sosa, eds., 2006. The Epistemology of Testimony. Oxford: Oxford University Press.
    • Lackey gives lists of testimonial reductionists (at 183 n.3) and non-reductionists (at 186 n.19). Lackey lists as supporting forms of non-reductionism Austin 1946, Welbourne 1979, 1981, 1986, and 1994, Evans 1982, Ross 1986, Hardwig 1985 and 1991, Coady 1992 and 1994, Reid 1764, Burge 1993 and 1997, Plantinga 1993, Webb 1993, Dummett 1994, Foley 1994, McDowell 1994, Strawson 1994, Williamson 1996 and 2000, Goldman 1999, Schmitt 1999, Insole 2000, Owens 2000, Rysiew 2002, Weiner 2003a, and Goldberg 2006. Lackey lists as supporting forms of reductionism Hume 1739, Fricker 1987, 1994, 1995, and 2006a, Adler 1994 and 2002, Lyons 1997, Lipton 1998, and Van Cleve 2006. Lackey 2006 lists as preservationists (that is, T-must-know-that-p-ists) Welbourne 1979, 1981, and 1994, Hardwig 1985 and 1991, Ross 1986, Burge 1993 and 1997, Plantinga 1993, McDowell 1994, Williamson 1996, Audi 1997, Owens 2000, and Dummett 1994. Fricker 2006a is a recent addition to the preservationist camp.
  • Lehrer, Keith, 1994. “Testimony and Coherence,” in Matilal and Chakrabarti 1994.
  • Lipton, Peter, 1998. “The Epistemology of Testimony,” British Journal for the History and Philosophy of Science 29:1-31.
  • Lyons, Jack, 1997. “Testimony, Induction, and Folk Psychology,” Australasian Journal of Philosophy 75:163-78.
  • Matilal, Bimal Krishna, and Chakrabarti, Arindam, 1994. Knowing From Words: Western and Indian Philosophical Analysis of Understanding and Testimony. Dordrecht: Kluwer Academic Publishers.
  • McDowell, John, 1994. “Knowledge By Hearsay,” in Matilal and Chakrabarti 1994.
  • Moran, Richard, 2006. “Getting Told and Being Believed,” in Lackey and Sosa 2006.
  • Owens, David, 2000. Reason Without Freedom: The Problem of Epistemic Normativity. London: Routledge.
  • Plantinga, Alvin, 1993. Warrant and Proper Function. Oxford: Oxford University Press.
  • Quinton, Anthony, 1973. “Autonomy and Authority in Knowledge,” in Thoughts and Thinkers. London: Duckworth.
  • Reid, Thomas, 1764. An Inquiry into the Human Mind on the Principles of Common Sense. Excerpts in 1975 edition, Indianapolis: Hackett Publishing Company.
  • Reid, Thomas, 1785. Essays on the Intellectual Powers of Man. Excerpts in 1975 edition, Indianapolis: Hackett Publishing Company.
  • Root, Michael, 1998. “How to Teach a Wise Man,” in Kenneth Westphal, ed., Pragmatism, Reason, and Norms. New York: Fordham University Press.
  • Root, Michael 2001. “Hume on the Virtues of Testimony,” American Philosophical Quarterly 38:19-35.
  • Ross, Angus, 1986. “Why Believe What We Are Told?” Ratio 28:69-88.
  • Rysiew, Patrick, 2000. “Testimony, Simulation, and the Limits of Inductivism,” Australasian Journal of Philosophy 78:269-274.
  • Schmitt, Frederick F., ed., 1994. Socializing Epistemology. Lanham, MD: Rowman and Littlefield.
  • Schmitt, Frederick F., 1999. “Social Epistemology,” in John Greco and Ernest Sosa, The Blackwell Guide to Epistemology. Oxford: Blackwell Publishers.
  • Schmitt, Frederick F., 2006. “Testimonial Justification and Transindividual Reasons,” in Lackey and Sosa 2006.
  • Shogenji, Tomoji, 2000. “Self-Dependent Justification Without Circularity,” British Journal for the Philosophy of Science 51:287-98.
  • Shogenji, Tomoji, 2006. “A Defense of Reductionism about Testimonial Justification of Beliefs,” Noûs 40:331-46.
  • Stanley, Jason, 2005. Knowledge and Practical Interests. Oxford: Oxford University Press.
  • Stevenson, Leslie, 1993. “Why Believe What People Say?” Synthese 94:429-51.
  • Strawson, P.F., 1994. “Knowing From Words,” in Matilal and Chakrabarti 1994.
  • Van Cleve, James, 2006. “Reid on the Credit of Human Testimony,” in Lackey and Sosa 2006.
  • Webb, Mark Owen, 1993. “Why I Know About As Much As You: A Reply to Hardwig,” Journal of Philosophy 90:260-70.
  • Weiner, Matthew, 2003a. “Accepting Testimony,” Philosophical Quarterly 53:256-64.
  • Weiner, Matthew, 2003b. “The Assurance View of Testimony.” Unpublished manuscript.
  • Welbourne, Michael, 1979. “The Transmission of Knowledge,” Philosophical Quarterly 29:1-9.
  • Welbourne, Michael, 1981. “The Community of Knowledge,” Philosophical Quarterly 31:302-14.
  • Welbourne, Michael, 1986. The Community of Knowledge. Aberdeen: Aberdeen University Press.
  • Welbourne, Michael, 1994. “Testimony, Knowledge, and Belief,” in Matilal and Chakrabarti 1994.
  • Williamson, Timothy, 1996. “Knowing and Asserting,” Philosophical Review 105:489-523.
  • Williamson, Timothy, 2000. Knowledge and its Limits. Oxford: Oxford University Press.

Author Information

Christopher R. Green
University of Mississippi
U. S. A.

Natural Theology

Natural theology is a program of inquiry into the existence and attributes of God without referring or appealing to any divine revelation. In natural theology, one asks what the word “God” means, whether and how names can be applied to God, whether God exists, whether God knows the future free choices of creatures, and so forth. The aim is to answer those questions without using any claims drawn from any sacred texts or divine revelation, even though one may hold such claims.

For purposes of studying natural theology, Jews, Christians, Muslims, and others will bracket and set aside for the moment their commitment to the sacred writings or traditions they believe to be God’s word. Doing so enables them to proceed together to engage in the perennial questions about God using the sources of evidence that they share by virtue of their common humanity, for example, sensation, reason, science, and history. Agnostics and atheists, too, can engage in natural theology. For them, it is simply that they have no revelation-based views to bracket and set aside in the first place.

This received view of natural theology was a long time in the making. Natural theology was born among the ancient Greeks, and its meeting with ancient Judeo-Christian-Muslim thought constituted a complex cultural event. From that meeting there developed throughout the Middle Ages for Christians a sophisticated distinction between theology in the Christian sense and natural theology in the ancient Greek sense. Although many thinkers in the Middle Ages tried to unite theology and natural theology into a unity of thought, the project frequently met with objections, as we shall see below. The modern era was partly defined by a widespread rejection of natural theology for both philosophical and theological reasons. Such rejection persisted, and persists, although there has been a significant revival of natural theology in recent years.

Table of Contents

  1. Historical Beginnings of Theology and Philosophy
  2. Ancient Philosophy and the First Principle
  3. Ancient Jewish and Early Christian Theology
  4. Distinction between Revealed Theology and Natural Theology
  5. Thomas Aquinas
  6. Modern Philosophy and Natural Theology
  7. Natural Theology Today
  8. References and Further Reading
    1. Primary Sources
      1. Ancient and Mediaeval Theology
      2. Mediaeval Natural Theology
      3. Modern Natural Theology
      4. Contemporary Natural Theology
    2. Secondary Sources

1. Historical Beginnings of Theology and Philosophy

The story of natural theology begins where theology begins. For the Greeks the term theology originally referred to inquiry into the lives and activities of the gods or divinities. In the Greek world, theology and mythology were the same concept. The theologians were the poets whose task it was to present accounts of the gods in poetic form. In the same age when the gods dominated popular thinking, however, another movement was growing: philosophy. The first philosophers, the pre-Socratics, undertook a quest to find the first principle of things. “First principle” here means the ultimate source or origin of all things. The pre-Socratic quest is often described as “purely rational” in the sense that it proceeded without making reference or appeal to the authority of poets or stories of the gods. The pre-Socratic philosophers entertained various candidates as to the first principle, for example, water, fire, conflicting dualities, number, or simply “being.” Both the mythology of the gods (already defined by the name of theology) and the purely rational quest for the first principle (later defined by the name of philosophy) constituted the cultural heritage of Plato and Aristotle – the two thinkers who would most greatly influence the development of natural theology. Plato and Aristotle each recognized the distinction between the two ways of inquiring into ultimate truth: the poetic-mythological-theological way and the purely rational way.

2. Ancient Philosophy and the First Principle

Plato (427 – 347 B.C.E.), in his well-known “Allegory of the Cave” in Book VII of The Republic, provides an image of what education consists in. True education consists in being led from the bondage of sensory appearances into the light of knowledge afforded by the form of the Good. The form of the Good is the cause of all being and all knowledge (the first principle). Knowledge of the form of the Good is arrived at through the struggle of dialectical argumentation. The dialectical arguments of philosophy do not prove the existence of the form of the Good, but contribute to inducing a non-inferential perception of it. Although Plato himself does not identify the form of the Good as God, later thinkers surely did.

Aristotle (384 – 322 B.C.E.) offers arguments for the existence of God (a God beyond the gods so to speak). Aristotle’s arguments start from the observable fact of motion or change in things around us. On the basis of his theory of motion, change, and causality presented in Physics, Aristotle proceeds to offer a demonstration that there exists a first mover of all other movers which is not itself moved in any respect. The first, unmoved mover is a postulate intended to account for the perpetuity of motion and change around us. The “argument from motion” is not meant to be a dialectical exercise that induces non-inferential perception of God, but a demonstration or proof according to the canons of proof that Aristotle presents in the Posterior Analytics. In the later books of Metaphysics, Aristotle goes further and identifies the unmoved mover as separated from matter and as nous or Mind. It is thought thinking itself. On Aristotle’s view, even though the world is everlasting, all things everlastingly proceed in accord with separated Reason: the first principle of all. Both Plato and Aristotle have one view in common. They hold that through a form of rational argumentation (whether it be demonstrative or dialectical), one can – without appeal to the authority of sacred writings – arrive at some knowledge or awareness of a first principle that is separated from matter.

We have now come to call the development of this non-poetic or non-mythological form of thought from the pre-Socratics through Plato and Aristotle by the name of “philosophy.” Aristotle’s arguments for the existence of God, because they argued from some feature of nature, came to be called “natural theology.” Natural theology was part of philosophy, as opposed to being part of the mytho-poetic theology.

3. Ancient Jewish and Early Christian Theology

As philosophy was developing from the Pre-Socratics through to Plato and Aristotle, another development was taking place among the Israelites or the ancient Jews. What was developing was their understanding of their corporate identity as the chosen people of God (YHWH). They conceived of themselves as a people established in a covenant with him, and bound to serve him according to the law and ritual prescriptions they had received from him. Texts received as sacred and as the word of God were an essential basis for their life, practice and thought.

It was among Jews and as a Jew that Jesus of Nazareth was born, lived his life, and gathered his first adherents. Christianity shared with Judaism a method for approaching God that essentially involved texts and faith in them as God’s word (although Christianity would eventually involve more texts than ancient Judaism). As Christianity spread, so did its faith-based and text-based method for approaching an understanding of God. As a minority practice within a predominantly Roman-Hellenistic culture, Christianity soon faced two new questions. First, do Christians have a “theology”? Second, what should a Christian make of “philosophy”? So long as Christianity remained a minority practice, Christians themselves remained conflicted on how to answer the two questions posed by the predominant culture.

The first question – do Christians have a theology? – was difficult for Christians to answer due to the poetic-mythological sense of the term “theology” still prevalent in the predominant Roman-Hellenistic milieu. All Christians rejected the views of the mythological-poets (the theologians). So long as the word “theology” meant the pagan mythological poetry and worship of the gods as practiced in the prevailing culture, Christians rejected the word “theology” as well. But once Christianity became culturally predominant, the word “theology” could and did become disassociated from the belief in and worship of the gods and was applied instead to the specifically Christian task of thinking and speaking about God as revealed in the Christian Scriptures. Under the new conditions, Christians found themselves more widely capable of saying that they had a theology.

The second question – what should Christians make of philosophy? – was difficult for Christians to answer because in the name of “philosophy” Christianity met with strong resistance to its central claims, for example, that Jesus is the Word made flesh. Some Christians considered philosophy essentially incompatible with Christianity; other Christians considered the possibility of a sort of intellectual alliance between philosophy and Christianity. On the one hand, Tertullian (160 – 220) famously quipped “What has Athens to do with Jerusalem?” (Prescription Against the Heretics, ch. VII). He is often quoted to show (perhaps unfairly) that he and Christians of his age rejected philosophical or “purely rational” methods for approaching knowledge of God. On the other hand, some Christians who were roughly his contemporaries happily availed themselves of contemporary philosophical vocabulary, concepts, and reasoning to expound Christian teaching. For example, Justin the Martyr (100-165), a convert to Christianity from Platonism, developed an account of the activity of Christ in terms of a medley of Platonist and Stoic ideas. Clement of Alexandria developed an account of Christian knowledge (gnosis) based on a variety of ideas drawn from prevalent philosophies. Greek speaking eastern Christians (more quickly than Latin speaking ones) began a process of borrowing, altering, and then using prevalent philosophical categories to corroborate and clarify their faith-based views of God. Their writings are filled with discussions of God’s existence and attributes in terms that are recognizable to philosophers. But is philosophical thought that has been used to clarify and corroborate faith-based and text-based beliefs still philosophical thought? Philosophy, after all, proceeds without appeal to the authority of sacred texts, and Christian theology proceeded by way of appeal to Christian sacred texts. 
There was now need for a new degree of precision regarding the ways to arrive at knowledge of God.

4. Distinction between Revealed Theology and Natural Theology

The distinction between revealed theology and natural theology eventually grew out of the distinction between what is held by faith and what is held by understanding or reason. St. Augustine, in describing how he was taught as a catechumen in the Church, writes:

“From this time on, however, I gave my preference to the Catholic faith. I thought it more modest and not in the least misleading to be told by the Church to believe what could not be demonstrated – whether that was because a demonstration existed but could not be understood by all or whether the matter was not one open to rational proof…You [God] persuaded me that the defect lay not with those who believed your books, which you have established with such great authority amongst almost all nations, but with those who did not believe them.” Confessions Bk. VI, v (7). (Chadwick, 1992)

Here Augustine describes being asked to believe certain things, that is, to take them on authority, even though they could not be demonstrated. The distinction between what one takes on authority (particularly the authority of Scripture) and what one accepts on the basis of demonstration runs throughout the corpus of Augustine’s writings. These two ways of holding claims about God correspond roughly with things one accepts by faith and things that proceed from understanding or reason. Each of the two ways produces a type of theology. The program for inquiring into God on the basis of faith/text-commitments would, many centuries later, come to be called “revealed theology”; the program for inquiring about God strictly on the basis of understanding or reason would come to be called “natural theology.” The distinction between holding something by faith and holding it by reason, as well as the distinction between the two types of theology that each way produces, can be traced through some major figures of the Middle Ages. Two examples follow.

First, Anicius Manlius Severinus Boethius (480 – 524) presented an elaborate account of God’s existence, attributes, and providence. Although a Christian, Boethius brings together in his Consolation of Philosophy the best of various ancient philosophical currents about God. Without any appeal to the authority of Christian Scripture, Boethius elaborated his account of God as eternal, provident, good, and so forth.

Second, Pseudo-Dionysius (late 5th century) also raised the distinction between knowing things from the authority of Scripture and knowing them from rational arguments:

“Theological tradition has a dual aspect, the ineffable and mysterious on the one hand, the open and more evident on the other. The one resorts to symbolism and involves initiation. The other is philosophic and employs the method of demonstration.” Epistola IX (Luibheid, 1987)

Here we have the distinction between the two ways of approaching God explicitly identified as two aspects of theology. Augustine, Boethius, and Pseudo-Dionysius (to name but a few) thus make possible a more refined distinction between two types of, or aspects of, theology. On the one hand, there is a program of inquiry that aims to understand what one accepts in faith as divine revelation from above. On the other hand, there is a program of inquiry that proceeds without appeal to revelation and aims to obtain some knowledge of God from below.

The eighth to the twelfth centuries are often considered the years of monastic theology. During this time, Aristotle’s writings in physics and metaphysics were lost to the West, and the knowledge of Platonism possessed by earlier Christians waned. The speculative ambitions of earlier Christian theologians (for example, Origen, Augustine, the Cappadocians, and so forth) were succeeded by the tendency of the monks to meditate upon, but not to speculate beyond, the Scriptures and the theological tradition received from earlier Christians. The monk aimed primarily at experiencing what the texts revealed about God rather than at understanding it in terms afforded by reason and philosophy (see LeClerq, 1982). This began to change with Anselm of Canterbury (1033 - 1109).

Anselm is best known in contemporary philosophical circles for his ontological argument for the existence of God. As the argument is commonly understood, Anselm aimed to show that God exists without making appeal to any sacred texts and also without basing his argument upon any empirical or observable truth. The argument consists entirely of an analysis of the idea of God, and a tracing of the implications of that idea given the laws of logic, for example, the principle of non-contradiction. Anselm, however, is known among medieval specialists for much more. Although a monk himself, he is known as the first to go beyond the purely meditative and experiential aims of monastic theology, and to pursue a serious speculative ambition. He wished to find the necessary reasons for why God acted as he has in history (as revealed by the Bible). Although Anselm’s program was still a matter of Christian faith seeking to understand God as revealed by the Bible and grasped by faith, Anselm helped legitimize once again the use of reason for speculating upon matters held by faith. Once the writings of Aristotle in Physics and Metaphysics were recovered in the West, the question inevitably arose as to what to make of Aristotelian theses vis-à-vis views held on Christian faith. There arose a need for a new degree of precision on the relationship between philosophy and theology, faith and understanding. One classic account to provide that precision came from Thomas Aquinas who had at his disposal many centuries of preliminary reflection on the issues.
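Anselm’s analysis of the idea of God is often reconstructed as a reductio ad absurdum. The following is one common schematic reconstruction of the Proslogion II argument, not Anselm’s own wording:

```latex
% One common reconstruction of Anselm's Proslogion II argument
% (a schematic sketch, not Anselm's own formulation).
\begin{enumerate}
  \item Let $g$ be that than which nothing greater can be conceived (definition).
  \item Suppose, for reductio, that $g$ exists in the understanding alone
        and not in reality.
  \item Whatever exists in reality as well as in the understanding is greater
        than what exists in the understanding alone (premise).
  \item Then something greater than $g$ can be conceived, namely $g$ existing
        in reality (from 2 and 3).
  \item But, by definition, nothing greater than $g$ can be conceived (from 1);
        contradiction.
  \item Therefore $g$ exists in reality.
\end{enumerate}
```

On this reconstruction the argument rests only on the definition in step 1, the premise in step 3, and the principle of non-contradiction, with no appeal to sacred texts or empirical observation.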

5. Thomas Aquinas

In the work of Thomas Aquinas (1225 - 1274), one finds two distinctions that serve to clarify the nature and status of natural theology. Aquinas distinguishes between two sorts of truths and between two ways of knowing them.

For Aquinas, there are two sorts of truths about God:

“There is a twofold mode of truth in what we profess about God. Some truths about God exceed all the ability of human reason. Such is the truth that God is triune. But there are some truths which the natural reason also is able to reach. Such are the truth that God exists, that he is one, and the like. In fact, such truths about God have been proved demonstratively by the philosophers, guided by the light of natural reason.” (SCG I, ch.3, n.2)

On the one hand, there are truths beyond the capacity of the human intellect to discover or verify and, on the other hand, there are truths falling within the capacity of human intellect to discover and verify. Let us call the first sort truths beyond reason and the latter sort truths of natural reason. There are different ways of knowing or obtaining access to each sort of truth.

The truths of natural reason are discovered or obtained by using the natural light of reason. The natural light of reason is the capacity for intelligent thought that all human beings have just by virtue of being human. By exercising their native intelligence, human beings can discover, verify, and organize many truths of natural reason. Aquinas thinks that human beings have discovered many such truths and he expects human beings to discover many more. Although there is progress amidst the human race in understanding truths of natural reason, Aquinas thinks there are truths that are totally beyond the intelligence of the entire human race.

The truths beyond reason are outside the aptitude of the natural light of reason to discover or verify. The cognitive power of all humanity combined, all humanity of the past, present, and future, does not suffice to discover or verify one of the truths beyond reason. How then does an individual or humanity arrive at such truths? Humanity does not arrive at them. Rather, the truths arrive at humanity from a higher intellect – God. They come by way of divine revelation, that is, by God testifying to them. God testifies to them in a three-step process.

First, God elevates the cognitive powers of certain human beings so that their cognitive powers operate at a level of aptitude beyond what they are capable of by nature. Thanks to the divinely enhanced cognition, such people see more deeply into things than is possible for humans whose cognition has not been so enhanced. The heightened cognition is compared to light, and is often said to be a higher light than the light of natural reason. It is called the light of prophecy or the light of revelation. The recipients of the light of prophecy see certain things that God sees but that the rest of humanity does not. Having seen higher truths in a higher light, the recipients of the higher light are ready for the second step.

Second, God sends those who see things in the higher light to bear witness and to testify to what they see in the higher light. By so testifying, the witnesses (the prophets and Apostles of old) served as instruments or a mouthpiece through which God made accessible to humanity some of those truths that God sees but that humanity does not see. Furthermore, such truths were then consigned to Scripture (by the cognitively enhanced or “inspired” authors of the books of the Bible), and the Bible was composed. The Bible makes for the third step.

Third, in the present God uses the Bible as a current, active instrument for teaching the same truths to humanity. By accepting in faith God speaking through the Bible, people today have a second-hand knowledge of certain truths that God alone sees first-hand. Just as God illuminated the prophets and apostles in the light of prophecy to see what God alone sees, God also illuminates people today to have faith in God speaking through the Bible. This illumination is called the light of faith.

Just as one sees certain claims of natural reason by the light of natural reason, so one holds certain claims beyond reason by the God-given light of faith. In the thought of Thomas Aquinas, the traditional distinction between two domains of truths and the distinctive way of knowing truth in each domain reaches a point of clarity. This distinction is at the basis of the distinction between theology and natural theology.

Theology (in the Thomistic sense), as it later came to be called, is the program for inquiring by the light of faith into what one believes by faith to be truths beyond reason that are revealed by God. Natural theology, as it later came to be called, is the program for inquiring by the light of natural reason alone into whatever truths of natural reason human beings might be able to find about God. Theology and natural theology differ in what they inquire into, and in what manner they inquire. What theology inquires into is what God has revealed himself to be. What natural theology inquires into is what human intelligence can figure out about God without using any of the truths beyond reason, that is, the truths divinely revealed. Theology proceeds by taking God’s revelation as a given and using one divinely revealed truth to account for another divinely revealed truth (or to give a higher account of truths of natural reason). Natural theology proceeds by bracketing and setting aside God’s revelation and seeking to discover, verify, and organize truths of natural reason about God. Aquinas’s distinctions remain the historical source of how many contemporary theologians and philosophers characterize the differences of their respective disciplines.

To see how theology and natural theology differ for Aquinas, it may help to look into faith and theology in more detail. One seems blind in accepting on faith the truths of revelation found in the Bible, because faith is a way of knowing something second-hand. A faithful person is in the position of believing what another intellect (the divine intellect) sees. Now although one does not see for oneself the truths accepted in faith, one desires to see them for oneself. Faith tends to prompt intellectual questioning, inquiry, and seeking into the meaning and intelligibility of the mystery held in faith. Why did God create the world? Why does God allow so much suffering? Why did God become Incarnate? Why did he have to die on a cross to save humanity? Many more questions come up. One asks questions of the truths of divine revelation without doubting those truths. On the contrary, one raises such questions because in faith one is confident that one truth of divine revelation can explain another truth of divine revelation. The truth of the Trinity’s purposes in creating us, for example, can explain the Incarnation. Thus, one questions the faith in faith. The project of questioning the faith in faith, finding answers, organizing them, justifying them, debating them, seeking to understand “the why,” and so forth is called theology.

Natural theology, on the other hand, does not presuppose faith as theology does. Natural theology does not attempt to explain truths beyond reason such as the Incarnation or the Trinity, and it certainly does not attempt to base anything on claims made in the Bible. Rather, natural theology uses other sources of evidence. Natural theology appeals to empirical data and the deliverances of reason to search out, verify, justify, and organize as much truth about God as can be figured out when one limits oneself to just these sources of evidence.

Aquinas practiced both theology and natural theology. Furthermore, he blended the two rather freely into a unified architectonic wisdom. His architectonic contains both theology and natural theology (sometimes they are difficult to sort out).

Aquinas is primarily a theologian and his best-known work is his Summa Theologica. Aquinas saw himself as using truths of natural reason to help understand truths of divine revelation. Consequently, as part of his theology, Aquinas presents and refines many philosophical arguments (truths of natural reason) that he had inherited from multiple streams of his culture: Aristotle, Augustine, Boethius, Pseudo-Dionysius, Muslim philosophers and commentators on Aristotle, and the Jewish Rabbi Moses Maimonides. Aquinas saw himself as taking all the truth they had discovered and using it all to penetrate the meaning and intelligibility of what God is speaking through the Bible.

In his Summa Contra Gentiles, Aquinas presents in lengthy detail a series of philosophical demonstrations of the existence of God, philosophical demonstrations of a variety of divine attributes, a philosophical theory of naming God, as well as multiple philosophical points concerning divine providence, for example, the problem of evil. For the first two volumes of the Summa Contra Gentiles, Aquinas proceeds without substantial appeal to the authority of Scripture (although he does repeatedly point to the agreement between what he arrived at philosophically and what Christians hold by faith in their Scriptures). He seems to intend his arguments to presuppose as little of the Christian faith as possible. The Summa Contra Gentiles has traditionally been pointed to as one of the principal locations of Aquinas’s natural theology. One old interpretation of the Summa Contra Gentiles says that its purpose was to train Christian missionaries who would be required to engage Muslims in discussion and debate about God. Since Christians and Muslims held no common sacred texts, they would need to dispute in terms afforded by their common humanity, that is, the truths of natural reason. Another interpretation makes it out to be Aquinas’s own preparation for his Summa Theologiae (Hibbs, 1995).

Thomas Aquinas’s distinction of the two sorts of truths about God and the two ways of knowing the truth about God soon faced outbreaks of skepticism. That skepticism, ironically, led to several developments in natural theology.

6. Modern Philosophy and Natural Theology

Not long after Aquinas, certain philosophers began to doubt that knowledge of God could be obtained apart from divine revelation and faith. William of Ockham (1280 – 1348) rejected central theses of Aristotelian philosophy that Aquinas relied upon in arguing for the existence of God, divine attributes, divine providence, and so forth. Ockham rejected the Aristotelian theory of form. He believed that a world construed in terms of Aristotelian essences was incompatible with God and creation as revealed in Scripture. To Ockham, Aquinas’s God seemed subject to the natures of things rather than being their author in any significant sense. Nonetheless, Ockham was a Christian. Once he rejected the Aristotelian theory of form and essence, however, natural theology as practiced by Aquinas was no longer possible. Of the two ways available for obtaining some knowledge of God – faith in revelation and reason without revelation – Ockham rejected the latter. Consequently, the only way remaining to know something of God was by faith in divine revelation.

After Ockham, the modern period abounded in various views towards natural theology. On the one hand, there were many who continued to hold that nature affords some knowledge of God and that human nature has some way of approaching God even apart from revelation. The scholastic thinker Francisco Suarez (1548 – 1617), for example, presented arguments for the existence of God, divine attributes, and divine providence. On the other hand, the rise of general anti-Aristotelianism (for example, Bacon), the rise of a mechanistic conception of the universe (for example, Hobbes), and the methodological decision to ignore final causality (for example, Descartes) all made traditional theological arguments for the existence of God from nature harder to sustain. Modern philosophy and modern science were perceived by many to threaten the traditional claims and conclusions of natural theology, for example, that the existence and attributes of God can be known apart from revelation and faith.

Many Christian thinkers responded to the new situation posed by modern philosophy and modern science. These responses shared with modern philosophy and modern science a non-Aristotelian, and perhaps even anti-Aristotelian, line of thought. Consequently, these responses constitute a thoroughly non-Aristotelian form of natural theology, that is, a natural theology that does not presuppose any of Aristotle’s views on nature, motion, causality, and so forth.

Descartes himself, for example, is commonly thought to have offered a new version of the ontological argument (Anselm’s argument) for the existence of God. Descartes advanced his argument in such a way that not only did he intend to avoid any Aristotelian presuppositions about the external world, he apparently intended to avoid any presuppositions at all about the external world – even the presupposition of its existence. Descartes’ rationalist and a priori method characterized much of the natural theology on the continent of Europe. In Great Britain, there grew up another form of natural theology tending to use empirical starting points and consciously probabilistic forms of argument. Two examples are noteworthy in this regard: Samuel Clarke’s (1675 – 1729) work A Demonstration of the Being and Attributes of God and Joseph Butler’s (1692 – 1752) Analogy of Religion, Natural and Revealed. The former work begins from the fact, presumably accessible empirically, that something or other has always existed. It proceeds to argue for the existence of God and various attributes, for example, God’s infinity and omnipresence. The latter work offers a probabilistic argument in favor of the existence of God and certain attributes based on analogies between what is found in nature and what is found in revelation.

David Hume (1711 – 1776) offered perhaps the most pointed criticisms of the post-Aristotelian forms of natural theology. His Enquiry Concerning Human Understanding contained a chapter criticizing the justification for belief in miracles as well as a chapter leveled against arguments from design. The criticism of design arguments, along with additional criticisms of various divine attributes, was offered in much more extensive detail in his Dialogues Concerning Natural Religion. That work was more extensive in that it applied some of the central tenets of Hume’s epistemology to natural theology in general, and thus served as a sort of critique of natural theology as a whole. Inspired by Hume’s thought, the empiricist critique of natural theology would later take on even more expanded and sophisticated forms.

David Hume’s agnostic and atheistic conclusions, however, did not find much popular appeal in his own day. Hence, even after Hume’s death, William Paley (1743 – 1805) was able to advance a natural theology that became standard reading in universities for the first half of the nineteenth century. Paley’s Natural Theology or Evidences of the Existence and Attributes of the Deity formulated a version of the design argument that even convinced the early Charles Darwin. Although Hume did not dissuade his contemporaries such as Paley from doing natural theology, Hume still had a significant impact on natural theology through his influence on Immanuel Kant.

Immanuel Kant (1724 – 1804) found himself faced on the one side with a rationalism that made quite ambitious metaphysical claims and on the other side with an empiricism that allowed humans to know little beyond what was immediately sensible. The rationalists claimed to offer, in modo geometrico, a series of demonstrations of many truths about God proceeding from a set of axioms self-evident to reason and needing no empirical verification. Later, their approach would be called a priori. The empiricists followed a different course, and stressed the human incapacity to know substantive necessary truths – or at least Hume, as Kant understood him, stressed this. Kant became skeptical of the rationalists’ metaphysical ambitions, yet was eager to overcome the Humean skepticism that threatened not only metaphysics but the new science as well. In his work, Kant is widely thought to have posed perhaps the most significant argumentative challenge to theology, natural theology, and metaphysics in general.

For Kant, arguments for the existence of God cannot prove their point due to the limits of the human cognitive capacity. The apparent cogency of such arguments is due to transcendental illusion: confusing the constitution of things with the constitution of one’s thought or experience of things. For example, causal principles such as “every event has a cause” are nothing but requirements for the rational organization of our perceptions. Demonstrations of God’s existence, divine attributes, and divine providence, to the extent that they use such principles as premises concerning the constitution of things in themselves, are illusory. Henceforth, any attempt to do classical theology, natural theology, or metaphysics had to answer the Kantian challenge.

Natural theology after Kant took two different routes. In Protestant and Anglican circles, the influence of Paley and others suffered a blow from Charles Darwin’s (1809 – 1882) theory of evolution and the subsequent evolutionary theories that have been developed. Given Darwin, the proposition that all life developed by chance alone is widely perceived to have a degree of plausibility that it was not perceived to have in Paley’s day. Whether and to what extent Darwinian principles eliminate the necessity for positing a divine designer is one of the most hotly contested issues in natural theology today. But there was more to post-Kantian natural theology.

In Catholic circles, natural theology went in two directions. On the one hand, there were some who intended to use modern philosophy for theological purposes just as the mediaevals had done. Antonio Rosmini (1797 – 1855), for example, developed a theology and a natural theology using elements from Augustine, Bonaventure, Pascal, and Malebranche. On the other hand, there were some who revived the thought of Thomas Aquinas. At first, there were but a handful of neo-Thomists. But in time Thomism was not only revived, but disseminated through a vast system of Catholic education. Thomists disagreed among themselves on how to relate to strands of contemporary thought such as science and Kant. So neo-Thomism grew in many directions: Transcendental Thomism, Aristotelian Thomism, Existential Thomism, and so forth. At any rate, neo-Thomists tended to develop their own counter-reading of modern philosophy – especially Kant – and to use Thomistic natural theology as an apparatus for higher education and apologetics.

7. Natural Theology Today

Outside neo-Thomistic circles, natural theology was generally out of favor throughout the twentieth century. Due to neo-Kantian criticisms of metaphysics, an extreme confidence in contemporary science, a revival and elaboration of Humean empiricism in the form of logical positivism, as well as existentialism among Continental thinkers, metaphysics was thought to be forever eliminated as a way of knowing or understanding truth about God (or anything at all for that matter). Natural theology was thought to have suffered the same fate as being part of metaphysics. It is fair to say that in many places metaphysics and natural theology were even held in contempt. Towards the second half of the twentieth century, however, the tide began to turn – first in favor of the possibility of metaphysics and soon afterwards to a revival of natural theology.

Natural theology today is practiced with a degree of diversity and confidence unprecedented since the late Middle Ages. Natural theologians have revived and extended arguments like Anselm’s (the so-called “perfect being theology”). They have also re-cast arguments from nature in several forms – from neo-Thomistic presentations of Aquinas’s five ways to new teleological arguments drawing upon the results of contemporary cosmology. Arguments from the reality of an objective moral order to the existence of God are circulated and taken seriously. Ethical theories that define goodness in terms of divine command are considered live options among an array of ethical theories. Discussions of divine attributes abound in books and journals devoted exclusively to purely philosophical treatments of God, for example, the journal Faith and Philosophy. Debates rage over divine causality, the extent of God’s providence, and the reality of human free choice. The problem of evil has also been taken up anew for fresh discussions – both by those who see it as arguing against the existence of God and by those who wish to defend theism against the reality of evil. It is English-speaking “analytic” philosophers who have taken the lead in discussing and debating these topics.

For people of faith who wish to think through their faith, to see whether reason alone apart from revelation offers anything to corroborate, clarify, or justify what is held by faith, there is no shortage of materials to research, study, or criticize. Rather, vast quantities of books, articles, debates, discussions, conferences, and gatherings are available. For those who have no faith, but wish to inquire into God without faith, the same books, articles, debates, discussions, conferences, and gatherings are available. Natural theology is alive and well to assist anyone interested in grappling with the perennial questions about God.

8. References and Further Reading

a. Primary Sources

i. Ancient Natural Theology

  • Plato, Republic, particularly Bk. VII.
    • The so-called “Allegory of the Cave” in the opening pages of Bk. VII was an influential text upon later conceptions of God and the Good.
  • Aristotle, Physics, particularly Bk. VII & VIII.
    • The locus classicus for the argument from motion for the existence of a first, unmoved mover.
  • Aristotle, Metaphysics, particularly Bk. XII
    • This passage takes the argument of the Physics Bks. VII & VIII a step further by arguing that the first mover moves things as an end or goal and is intelligent.

ii. Mediaeval Natural Theology

  • Augustine, Confessions, trans. Chadwick, Henry. Oxford, 1992.
    • A classic autobiographical account of a thinking man’s journey to faith in the Christian God. In Bk.VI, Augustine draws a distinction between things demonstrable and things to be taken on authority.
  • Augustine, On Free Choice of the Will, trans. Williams, Thomas. Indianapolis: Hackett Publishing Company, 1993.
    • Out of the many works of St. Augustine, Bks. II & III in this work come as close as possible to presenting an argument for the existence of God. Augustine considers eternal truths, the order of the world, and the nature of reason, and proceeds to discuss the relationship between these things and the wisdom that pre-existed the world. Many students find this dialogue satisfying to read.
  • Boethius, The Consolation of Philosophy. trans. Green, Richard. New York: Macmillan Publishing Company, 1962.
    • A shorter work, cast in semi-dialogue form, that synthesizes and presents a great deal of late Hellenistic natural theology. It is fair to call this work one of the principal sources of mediaeval humanism and philosophy. Many students find this work satisfying to read.
  • Plotinus, Enneads. trans. MacKenna, Stephen. New York: Larson Publications, 1992.
    • A lengthy work of neo-Platonic cosmology and natural theology. Being the work of a non-Christian, it shows (like Aristotle’s works) that someone without Christian faith commitments can engage in natural theology. However, Plotinus’ sympathies lie more with Plato’s notion of a dialectically induced vision of the Good than with a demonstrative approach to proving the existence of God. Consequently, there are many passages of a more mystical and meditative quality intended for those who have had the prerequisite perceptions of the One.
  • Pseudo-Dionysius, “Letter Nine” in The Complete Works. trans. Luibheid, Colm. New Jersey: Paulist Press, 1987.
    • Presents the distinction between natural and mystical theology and the two ways of knowing that are proper to each.
  • Anselm, “Monologion” & “Proslogion” both in The Major Works. Oxford University Press, 1998.
    • The Proslogion contains the so-called “ontological argument” for the existence of God. The Monologion, in its first two dozen chapters, presents a natural theology by way of unpacking what is involved in the notion of a supreme nature.
  • Aquinas, Summa Theologiae, trans. Fathers of the English Dominican Province. New York: Benziger Bros, 1948.
    • The classic theological work by Thomas Aquinas. In part I, q. 2 – 27, Aquinas presents numerous philosophical arguments for the existence of God, divine attributes, divine providence, and so forth. Often called the “Treatise on God,” it is a classic locus of natural theology.
  • Aquinas, Summa Contra Gentiles. trans. Pegis, Anton. University of Notre Dame Press, 1975.
    • In Bks. I & II, Aquinas presents what he considers to be demonstrations for the existence of God, several divine attributes, and an account of divine providence. For these two books, a great deal of the thinking is commonly thought to proceed in the light of natural reason alone.
  • Bonaventure, The Journey of the Mind to God. trans. Boehner, Philotheus. Indianapolis: Hackett Publishing Company, 1993.
    • A short work of mediaeval natural theology. A contemporary of Aquinas, Bonaventure takes the reader on a journey from creatures to the Creator. This book shows what an alternative to Aquinas’s Aristotelian natural theology looks like.

iii. Modern Natural Theology

  • Butler, Joseph. The Analogy of Religion, Natural and Revealed, to the Constitution and Course of Nature. Ann Arbor, MI: Scholarly Publishing Office, University of Michigan Library, 2005.
    • A classic of English natural theology with an extended treatment of the immortality of the soul. The author ventures a probabilistic argument in confirmation of certain revealed truths.
  • Clarke, Samuel. A Demonstration of the Being and Attributes of God: And Other Writings. Ed. Vailati, Ezio. Cambridge University Press, 1998.
    • This treatise of English natural theology was originally a set of sermons preached against the writings of Hobbes and Spinoza and their followers. Those sermons were revised into an extended and rigorous argument.
  • Descartes, Rene. “Meditations” in Selected Philosophical Writings. trans. Cottingham, John., Stoothoff, Robert., Murdoch, Dugald. Cambridge University Press, 1998.
    • In the “Third Meditation,” Descartes advances an argument for the existence of God that some have called an “ontological argument” because he infers from his idea of God to the existence of God.
  • Locke, John. An Essay Concerning Human Understanding. Oxford University Press, 1975.
    • In Bk. IV, ch. 10 John Locke advances what he considers to be a demonstration of the existence of an eternal and necessary being. The chapter is an example of how arguments for the existence of God continued to be advanced well into early modernity by post-Aristotelian thinkers.
  • Hume, David. An Enquiry Concerning Human Understanding. Indianapolis, IN: Hackett Publishing Company, 1977.
    • A brief classical essay in empiricist philosophy. The principles presented in this book served first to motivate Kant to mount his criticisms of metaphysics and natural theology and continue to motivate many of today’s criticisms of arguments for the existence of God, divine attributes, and so forth.
  • Hume, David. Dialogues Concerning Natural Religion: The Posthumous Essays of the Immortality of the Soul and of Suicide. Indianapolis, IN: Hackett Publishing Co., 1998.
    • This dialogue is an extended application of Hume’s epistemology, and in effect a critique of natural theology as an enterprise.
  • Kant, Immanuel. Critique of Pure Reason. trans. Smith, Norman Kemp. NY: St. Martin’s Press. 1929.
    • This classical work stands as a permanent challenge to anyone aiming at arriving at some knowledge or understanding of God by the light of natural reason alone. The work is no easy read – not even for specialists. However, in Part II, Second Division, Chapter II, Kant presents his famous “antinomies of pure reason.” The antinomies are arguments, laid out in synopsis form, both for and against certain theses. Of all the criticism of metaphysics that can be found in this book, the antinomies in particular have persuaded many thinkers to hold that any attempt by reason alone to arrive at some knowledge of God is bound to end in hopeless self-contradiction. See especially the Fourth Antinomy.
  • Kant, Immanuel. Prolegomena to Any Future Metaphysics. trans. Ellington, James W. Indianapolis: Hackett Publishing Company, 1977.
    • This shorter work summarizes and presents in simpler form much of the thought found in the longer and more elaborate Critique of Pure Reason.
  • Newman, John Henry Cardinal. An Essay in Aid of a Grammar of Assent. University of Notre Dame Press, 1979.
    • A classic work of nineteenth century British apologetics. Among many other things, Newman presents an account of how conscience moves one to believe in the existence of God.

iv. Contemporary Natural Theology

  • Howard-Snyder, Daniel, ed. The Evidential Argument from Evil. Indiana University Press, 1996.
    • An excellent anthology of essays, all treating of the problem of evil, by contemporary philosophers. The collection contains some essays arguing against the existence of God on the basis of evil and other essays defending the existence of God against such arguments.
  • Kenny, Anthony. The Five Ways: St. Thomas Aquinas’ proofs of the existence of God. London: Routledge & K. Paul, 1969.
    • A short work that goes through Aquinas’s arguments for the existence of God and treats them in terms of contemporary formal logic. Kenny concludes that all the arguments fail.
  • Mackie, J.L., The Miracle of Theism: Arguments for and against the existence of God. Oxford University Press, 1982.
    • A widely read work that presents a wide variety of arguments for the existence of God, criticizes them, and ultimately rejects them all. It also contains important discussions of who has the burden of proof in natural theology and arguments against the existence of God based on the reality of evil.
  • Plantinga, Alvin. God and Other Minds. Cornell University Press, 1967.
    • Another work that presents several standard proofs for the existence of God and criticizes them. The author, however, is a theist. After dismissing the standard proofs for the existence of God as inconclusive or indecisive, Plantinga goes on to give an argument that belief in the existence of God can be rational even without such proofs. He argues that believing in God is analogous to believing in other minds. Just as one is rational in believing in other minds without decisive or conclusive proof that other minds exist, so one is rational in believing in God without decisive or conclusive proof that God exists.
  • Plantinga, Alvin. God, Freedom, & Evil. William B. Eerdmans Publishing Co., 1977.
    • This widely hailed work purports to refute the thesis that it is impossible for both God and evil to exist. Using the modal logic that he helped to pioneer, Plantinga shows how it is possible for both God and evil to exist. Even atheist philosophers find Plantinga’s point to be compelling, and the terms of the debate on the problem of evil have changed since, and because of, the book’s publication. For the current state of the debate, see Howard-Snyder’s work referenced above.
  • Swinburne, Richard. The Coherence of Theism. Oxford: Clarendon Press, 1977.
  • Swinburne, Richard. The Existence of God. 2nd Edition. Oxford University Press, 2004.
  • Swinburne, Richard. Providence and the Problem of Evil. Oxford: Clarendon Press, 1998.
    • These three books by Richard Swinburne jointly constitute a powerful argument for, and defense of, the existence of God. In The Coherence of Theism, Swinburne answers common arguments advanced against the possibility of the existence of God or against arguments for the existence of God. In The Existence of God, Swinburne presents his “cumulative case” inductive argument for the existence of God. In Providence and the Problem of Evil, Swinburne aims to account for the existence of evil given the existence of a provident God.
  • Varghese, Roy Abraham. The Wonder of the World: A Journey from Modern Science to the Mind of God. Arizona: Tyr Publishing, 2004.
    • This work brings together under one cover many of the scientifically received facts that tend to confirm the existence of God. One can find laid out here many of the physical, biological, and cosmological facts that have persuaded many contemporary scientists of the existence of an intelligent God behind it all. The work also raises pertinent philosophical considerations in favor of the same conclusion. Written in semi-dialogue form, without using significant technical jargon, this award-winning book is accessible to a wide audience.

b. Secondary Sources

  • Craig, William Lane. The Cosmological Argument from Plato to Leibniz. NY: Barnes & Noble Books, 1980.
    • The book does what the title says; it gives a history of the various cosmological arguments from ancient times until modernity.
  • Congar, Yves. A History of Theology. NY: Doubleday, 1968.
    • A good one-volume summary of the history of theology. This book served as the basic reference for section 3 above in the discussion of ancient Greek theology, and the development of theology among early Christians.
  • Davies, Brian. An Introduction to the Philosophy of Religion. Oxford University Press, 1982.
    • This widely used textbook presents most of the main topics in the philosophy of religion today – including arguments in natural theology.
  • Hibbs, Thomas. Dialectic and Narrative in Aquinas: An Interpretation of the Summa Contra Gentiles. University of Notre Dame Press, 1995.
    • This book was referenced above as presenting an alternative interpretation to the Summa Contra Gentiles.
  • LeClerq, Jean. The Love of Learning and the Desire for God. trans. Misrahi, Catharine. Fordham University Press, 1982.
    • This book was referenced in the fourth section above as regards the state of theology in mediaeval monasteries.
  • Stump, Eleonore. “Aquinas on the Sufferings of Job” in The Evidential Argument from Evil. ed. Howard-Snyder, Daniel. Indiana University Press, 1996.
    • An unusually clear elucidation of Aquinas’ understanding of the relationship between God and evil as Aquinas presents it in his commentary on Job.
  • Stump, Eleonore, ed. Philosophy of Religion. Malden, MA: Blackwell Publishers, 1999.
    • An anthology of classic texts on many topics in the philosophy of religion. Many of the texts referenced in this list are found within this anthology.

Author Information

James Brent
Saint Louis University
U. S. A.

Faith and Reason

Traditionally, faith and reason have each been considered to be sources of justification for religious belief. Because both can purportedly serve this same epistemic function, it has been a matter of much interest to philosophers and theologians how the two are related and thus how the rational agent should treat claims derived from either source. Some have held that there can be no conflict between the two—that reason properly employed and faith properly understood will never produce contradictory or competing claims—whereas others have maintained that faith and reason can (or even must) be in genuine contention over certain propositions or methodologies. Those who have taken the latter view disagree as to whether faith or reason ought to prevail when the two are in conflict. Kierkegaard, for instance, prioritizes faith even to the point that it becomes positively irrational, while Locke emphasizes the reasonableness of faith to such an extent that a religious doctrine’s irrationality—conflict with itself or with known facts—is a sign that it is unsound. Other thinkers have theorized that faith and reason each govern their own separate domains, such that cases of apparent conflict are resolved on the side of faith when the claim in question is, say, a religious or theological claim, but resolved on the side of reason when the disputed claim is, for example, empirical or logical. Some relatively recent philosophers, most notably the logical positivists, have denied that there is a domain of thought or human existence rightly governed by faith, asserting instead that all meaningful statements and ideas are accessible to thorough rational examination. This has presented a challenge to religious thinkers to explain how an admittedly nonrational or transrational form of language can hold meaningful cognitive content.

This article traces the historical development of thought on the interrelation of religious faith and reason, beginning with Classical Greek conceptions of mind and religious mythology and continuing through the medieval Christian theologians, the rise of science proper in the early modern period, and the reformulation of the issue as one of ‘science versus religion’ in the twentieth century.

Table of Contents

  1. Introduction
  2. The Classical Period
    1. Aristotle and Plato
    2. Stoics and Epicureans
    3. Plotinus
  3. The Rise of Christianity
    1. St. Paul
    2. Early Christian Apologists
    3. St. Augustine
    4. Pseudo-Dionysius
  4. The Medieval Period
    1. St. Anselm
    2. Peter Lombard
    3. Islamic Philosophers
    4. Jewish Philosophy
    5. St. Thomas Aquinas
    6. The Franciscan Philosophers
  5. The Renaissance and Enlightenment Periods
    1. The Galileo Controversy
    2. Erasmus
    3. The Protestant Reformers
    4. Continental Rationalism
    5. Blaise Pascal
    6. Empiricism
    7. German Idealism
  6. The Nineteenth Century
    1. Romanticism
    2. Socialism
    3. Existentialism
    4. Catholic Apologists
    5. Pragmatism
  7. The Twentieth Century
    1. Logical Positivism and Its Critics
    2. Philosophical Theology
    3. Neo-Existentialism
    4. Neo-Darwinism
    5. Contemporary Reactions Against Naturalism and Neo-Darwinism
    6. Liberation Theology
  8. References and Further Reading

1. Introduction

Faith and reason are both sources of authority upon which beliefs can rest. Reason generally is understood as the principles for a methodological inquiry, whether intellectual, moral, aesthetic, or religious. Thus it is not simply the rules of logical inference or the embodied wisdom of a tradition or authority. Some kind of algorithmic demonstrability is ordinarily presupposed. Once demonstrated, a proposition or claim is ordinarily understood to be justified as true or authoritative. Faith, on the other hand, involves a stance toward some claim that is not, at least presently, demonstrable by reason. Thus faith is a kind of attitude of trust or assent. As such, it is ordinarily understood to involve an act of will or a commitment on the part of the believer. Religious faith involves a belief that makes some kind of either an implicit or explicit reference to a transcendent source. The basis for a person's faith usually is understood to come from the authority of revelation. Revelation is either direct, through some kind of direct infusion, or indirect, usually from the testimony of another. The religious beliefs that are the objects of faith can thus be divided into those that are in fact strictly demonstrable (scientia) and those that inform a believer's virtuous practices (sapientia).

Religious faith is of two kinds: evidence-sensitive and evidence-insensitive. The former views faith as closely coordinated with demonstrable truths; the latter more strictly as an act of the will of the religious believer alone. The former includes evidence garnered from the testimony and works of other believers. It is, however, possible to hold a religious belief simply on the basis either of faith alone or of reason alone. Moreover, one can even lack faith in God or deny His existence, but still find solace in the practice of religion.

The basic impetus for the problem of faith and reason comes from the fact that the revelation or set of revelations on which most religions are based is usually described and interpreted in sacred pronouncements, either in an oral tradition or canonical writings, backed by some kind of divine authority. These writings or oral traditions are usually presented in the literary forms of narrative, parable, or discourse. As such, they are in some measure immune from rational critique and evaluation. In fact even the attempt to verify religious beliefs rationally can be seen as a kind of category mistake. Yet most religious traditions allow and even encourage some kind of rational examination of their beliefs.

The key philosophical issue regarding the problem of faith and reason is to work out how the authority of faith and the authority of reason interrelate in the process by which a religious belief is justified or established as true or justified. Four basic models of interaction are possible.

(a) The conflict model. Here the aims, objects, or methods of reason and faith seem to be very much the same. Thus when they seem to be saying different things, there is genuine rivalry. This model is thus assumed both by religious fundamentalists, who resolve the rivalry on the side of faith, and scientific naturalists, who resolve it on the side of reason.

(b) The incompatibilist model. Here the aims, objects, and methods of reason and faith are understood to be distinct. Compartmentalization of each is possible. Reason aims at empirical truth; religion aims at divine truths. Thus no rivalry exists between them. This model subdivides further. First, one can hold that faith is transrational, inasmuch as it is higher than reason. On this view, reason can only reconstruct what is already implicit in faith or religious practice; this strategy has been employed by some Christian existentialists. Second, one can hold that religious belief is irrational, and thus not subject to rational evaluation at all. This is the position ordinarily taken by those who adopt negative theology, the method that assumes that all speculation about God can only arrive at what God is not. This subdivision also includes those theories of belief that claim that religious language is only metaphorical in nature. This and other forms of irrationalism result in what is ordinarily considered fideism: the conviction that faith ought not to be subjected to any rational elucidation or justification.

(c) The weak compatibilist model. Here it is understood that dialogue is possible between reason and faith, though both maintain distinct realms of evaluation and cogency. For example, the substance of faith can be seen to involve miracles; that of reason to involve the scientific method of hypothesis testing. Much of the Reformed model of Christianity adopts this basic model.

(d) The strong compatibilist model. Here it is understood that faith and reason have an organic connection, and perhaps even parity. A typical form of strong compatibilism is termed natural theology. Articles of faith can be demonstrated by reason, either deductively (from widely shared theological premises) or inductively (from common experiences). It can take one of two forms: either it begins with justified scientific claims and supplements them with valid theological claims unavailable to science, or it starts with typical claims within a theological tradition and refines them by using scientific thinking. An example of the former would be the cosmological proof for God's existence; an example of the latter would be the argument that science would not be possible unless God's goodness ensured that the world is intelligible. Many, but certainly not all, Roman Catholic philosophers and theologians hold to the possibility of natural theology. Some natural theologians have attempted to unite faith and reason into a comprehensive metaphysical system. The strong compatibilist model, however, must explain why God chose to reveal Himself at all, since we would have such access to Him through reason alone.

The interplay between reason and faith is an important topic in the philosophy of religion. It is closely related to, but distinct from, several other issues in the philosophy of religion: namely, the existence of God, divine attributes, the problem of evil, divine action in the world, religion and ethics, religious experience and religious language, and the problem of religious pluralism. Moreover, an analysis of the interplay between faith and reason also provides resources for philosophical arguments in other areas such as metaphysics, ontology, and epistemology.

While the issues raised by the interplay between faith and reason are endemic to almost any religious faith, this article will focus primarily on the faith claims found in the three great monotheistic world religions: Judaism, Islam, and particularly Christianity.

The rest of the article will trace out the history of the development of thinking about the relationship between faith and reason in Western philosophy, from the classical period of the Greeks through the end of the twentieth century.

2. The Classical Period

Greek religions, in contrast to Judaism, speculated primarily not on the human world but on the cosmos as a whole. They were often formulated as literary myths. Nonetheless these forms of religious speculation were generally practical in nature: they aimed to increase personal and social virtue in those who engaged in them. Most of these religions involved civic cultic practices.

Philosophers from the earliest times in Greece tried to distill metaphysical issues out of these mythological claims. Once these principles were located and excised, these philosophers purified them from the esoteric speculation and superstition of their religious origins. They also decried the proclivities to gnosticism and elitism found in the religious culture whence the religious myths developed. None of these philosophers, however, was particularly interested in the issue of willed assent to or faith in these religious beliefs as such.

a. Aristotle and Plato

Both Plato and Aristotle found a principle of intellectual organization in religious thinking that could function metaphysically as a halt to the regress of explanation. In Plato, this is found in the Forms, particularly the Form of the Good. The Form of the Good is that by which all things gain their intelligibility. Aristotle rejected the Form of the Good as unable to account for the variety of good things, appealing instead to the unmoved mover as an unchangeable cosmic entity. This primary substance also has intelligence as nous: it is "thought thinking itself." From this mind emerge the exemplars for existent things.

Both thinkers also developed versions of natural theology by showing how religious beliefs emerge from rational reflections on concrete reality as such. An early form of religious apologetics, demonstrating the existence of the gods, can be found in Plato's Laws. Aristotle's Physics gave arguments demonstrating the existence of an unmoved mover as a timeless self-thinker from the evidence of motion in the world.

b. Stoics and Epicureans

Both of these schools of thought derived certain theological kinds of thinking from physics and cosmology. The Stoics generally held a cosmological view of an eternal cycle of identical world-revolutions and world-destructions by a universal conflagration. Absolute necessity governs the cyclic process and is identified with divine reason (logos) and providence. This provident and benevolent God is immanent in the physical world. God orders the universe, though without an explicit purpose. Humans are microcosms; their souls are emanations of the fiery soul of the universe.

The Epicureans, on the other hand, were skeptical, materialistic, and anti-dogmatic. It is not clear they were theists at all, though at some points they seem to be. They did speak of the gods as living in a blissful state in intermundial regions, without any interest in the affairs of humans. There is no relation between the evils of human life and a divine guidance of the universe. At death all human perception ceases.

c. Plotinus

Plotinus, in the Enneads, held that all modes of being and value originate in an overflow of procession from a single ineffable power that he identified with the radical simplicity of the One of Parmenides or the Good of Plato's Republic. Nous, the second hypostasis after the One, resembles Aristotle's unmoved mover. The orders of the world soul and nature follow after Nous in a linear procession. Humans contain the potentialities of these creative principles, and can choose to make their lives an ascent towards and then a union with the intuitive intelligence. The One is not a being, but infinite being. It is the cause of beings. Thus Christian and Jewish philosophers who held to a creator God could affirm such a conception. Plotinus might have been the first negative theologian, arguing that God, as simple, is known more from what He is not than from what He is.

3. The Rise of Christianity

Christianity, emerging from Judaism, imposed a set of revealed truths and practices on its adherents. Many of these beliefs and practices differed significantly from what the Greek religions and Judaism had held. For example, Christians held that God created the world ex nihilo, that God is three persons, and that Jesus Christ was the ultimate revelation of God. Nonetheless, from the earliest of times, Christians held to a significant degree of compatibility between faith and reason.

a. St. Paul

The writings attributed to St. Paul in the Christian Scriptures provide diverse interpretations of the relation between faith and reason. First, in the Acts of the Apostles, Paul himself engages in discussion with "certain Epicurean and Stoic philosophers" at the Areopagus in Athens (Acts 17:18). Here he champions the unity of the Christian God as the creator of all. God is "not far from any one of us." Much of Paul's speech, in fact, seems to allude to Stoic beliefs. It reflects a sympathy with pagan customs, handles the subject of idol worship gently, and appeals for a new examination of divinity not from the standpoint of creation, but from practical engagement with the world. However, he claims that this same God will one day come to judge all mankind. But in his famous passage from Romans 1:20, Paul is less obliging to non-Christians. Here he champions a natural theology against those pagans who would claim that, even on Christian grounds, their previous lack of access to the Christian God would absolve them from guilt for their nonbelief. Paul argues that in fact anyone can attain to the truth of God's existence merely from using his or her reason to reflect on the natural world. Thus this strong compatibilist interpretation entailed a reduced tolerance for atheists and agnostics. Yet in 1 Corinthians 1:23, Paul suggests a kind of incompatibilism, claiming that Christian revelation is folly to the Gentiles (meaning Greeks). He points out that the world did not come to know God through wisdom; God chose to reveal Himself fully to those of simple faith.

These diverse Pauline interpretations of the relation between faith and reason were to continue to manifest themselves in various ways through the centuries that followed.

b. Early Christian Apologists

The early apologists were both compatibilists and incompatibilists. Tertullian took up the ideas of Paul in 1 Corinthians, proclaiming that Christianity is not merely incompatible with but offensive to natural reason. Jerusalem has nothing to do with Athens. He boldly claimed credo quia absurdum est ("I believe because it is absurd"). He claims that religious faith is both against and above reason. In his De Praescriptione Haereticorum, he proclaims, "when we believe, we desire to believe nothing further."

On the other hand, Justin Martyr converted to Christianity, but continued to hold Greek philosophy in high esteem. In his Dialogue with Trypho he finds Christianity "the only sure and profitable philosophy."

In a similar vein, Clement of Alexandria in his Stromata called the Gospel "the true philosophy." Philosophy acted as a "schoolmaster" to bring the Greeks to Christ, just as the law brought the Jews. But he maintained that Greek philosophy is unnecessary for a defense of the faith, though it helps to disarm sophistry. He also worked to demonstrate in a rational way what is found in faith. He claimed that "I believe in order that I may know" (credo ut intelligam). This set Christianity on firmer intellectual foundations. Clement also worked to clarify the early creeds of Christianity, using philosophical notions of substance, being, and person, in order to combat heresies.

c. St. Augustine

Augustine emerged in the late fourth century as a rigorous defender of the Christian faith. He responded forcefully to pagans' allegations that Christian beliefs were not only superstitious but also barbaric. But he was, for the most part, a strong compatibilist. He felt that intellectual inquiry into the faith was to be understood as faith seeking understanding (fides quaerens intellectum). To believe is "to think with assent" (credere est assensione cogitare). It is an act of the intellect determined not by reason, but by the will. Faith involves a commitment "to believe in a God," "to believe God," and "to believe in God."

In On Christian Doctrine Augustine makes it clear that Christian teachers not only may, but ought, to use pagan thinking when interpreting Scripture. He points out that if a pagan science studies what is eternal and unchanging, it can be used to clarify and illuminate the Christian faith. Thus logic, history, and the natural sciences are extremely helpful in matters of interpreting ambiguous or unknown symbols in the Scriptures. However, Augustine is equally concerned to avoid any pagan learning, such as that of crafts and superstition, that is not aimed at unchangeable knowledge.

Augustine believed that Platonists were the best of philosophers, since they concentrated not merely on the causes of things and the method of acquiring knowledge, but also on the cause of the organized universe as such. One does not, then, have to be a Christian to have a conception of God. Yet, only a Christian can attain to this kind of knowledge without having to have recourse to philosophy.

Augustine argued further that the final authority for the determination of the use of reason in faith lies not with the individual, but with the Church itself. His battle with the Manichean heresy prompted him to realize that the Church is indeed the final arbiter of what cannot be demonstrated, or of what can be demonstrated but cannot be understood by all believers. Yet despite this appeal to ecclesiastical authority, he believed that one cannot genuinely understand God until one loves Him.

d. Pseudo-Dionysius

Pseudo-Dionysius was heavily influenced by neo-Platonism. In letter IX of his Corpus Dionysiacum, he claimed that our language about God provides no information about God but only a way of protecting God's otherness. His analysis gave rise to a unique form of negative theology. It entailed a severe restriction in our access to and understanding of the nature of God. In his "Mystical Theology" Pseudo-Dionysius describes how the soul's destiny is to be fully united with the ineffable and absolutely transcendent God.

4. The Medieval Period

Much of the importance of this period stems from its retrieval of Greek thinking, particularly that of Aristotle. At the beginning of the period Arab translators set to work translating and distributing many works of Greek philosophy, making them available to Jewish, Islamic, and Christian philosophers and theologians alike.

For the most part, medieval theologians adopted an epistemological distinction the Greeks had developed: between scientia (episteme), propositions established on the basis of principles, and opinio, propositions established on the basis of appeals to authority. An established claim in theology, confirmed by either scientia or opinio, demanded the believer's assent. Yet despite this possibility of scientia in matters of faith, medieval philosophers and theologians believed that it could be realized only in a limited sense. They were all too aware of St. Paul's caveat that faith is a matter of "seeing in a mirror dimly" (1 Cor 13:12).

a. St. Anselm

Like Augustine, Anselm held that one must love God in order to have knowledge of Him. In the Proslogion, he argues that "the smoke of our wrongdoing" will prohibit us from this knowledge. Anselm is most noted, however, for his ontological argument, presented in his Proslogion. He claimed that it is possible for reason to affirm that God exists from inferences made from what the understanding can conceive within its own confines. As such he was a gifted natural theologian. Like Augustine, Anselm held that the natural theologian seeks not to understand in order to believe, but to believe in order to understand. This is the basis for his principle intellectus fidei. Under this conception, reason is not asked to pass judgment on the content of faith, but to find its meaning and to discover explanations that enable others to understand its content. But when reason confronts what is incomprehensible, it remains unshaken since it is guided by faith's affirmation of the truth of its own incomprehensible claims.

b. Peter Lombard

Lombard was an important precursor to Aquinas. Following Augustine, he argued that pagans can know much about the truths of the one God simply by their possession of reason (e.g. that spirit is better than body, that the mutable can exist only from an immutable principle, that all beauty points to a beauty beyond compare). But in addition, pagans can affirm basic truths about the Trinity from these same affirmations, inasmuch as all things mirror three attributes associated with the Trinity: unity (the Father), form or beauty (the Son), and a position or order (the Holy Spirit).

c. Islamic Philosophers

Islamic philosophers in the tenth and eleventh centuries were also heavily influenced by the reintroduction of Aristotle into their intellectual culture.

Avicenna (Ibn Sina) held that as long as religion is properly construed it comprises an area of truth no different than that of philosophy. He built this theory of strong compatibilism on the basis of his philosophical study of Aristotle and Plotinus and his theological study of his native Islam. He held that philosophy reveals that Islam is the highest form of life. He defended the Islamic belief in the immortality of individual souls on the grounds that, although as Aristotle taught the agent intellect was one in all persons, the unique potential intellect of each person, illuminated by the agent intellect, survives death.

Averroes (Ibn Rushd), though also a scholar of Aristotle's works, was less sympathetic to compatibilism than his predecessor Avicenna. But in his Incoherence of Incoherence, he attacked Algazel's criticisms of rationalism in theology. For example, he developed a form of natural theology in which the task of proving the existence of God is possible. He held, however, that it could be proven only from the physical fact of motion. Nonetheless Averroes did not think that philosophy could prove all Islamic beliefs, such as that of individual immortality. Following Aristotle in De Anima, Averroes argued for a separation between the active and passive intellects, even though they enter into a temporary connection with individual humans. This position entails the conclusion that no individuated intellect survives death. Yet Averroes held firmly to the contrary opinion by faith alone.

d. Jewish Philosophy

Moses Maimonides, a Jewish philosopher, allowed for a significant role of reason in critically interpreting the Scriptures. But he is probably best known for his development of negative theology. Following Avicenna's affirmation of a real distinction between essence and existence, Maimonides concluded that no positive essential attributes may be predicated of God. God does not possess anything superadded to his essence, and his essence includes all his perfections. The attributes we do have are derived from the Pentateuch and the Prophets. Yet even these positive attributes, such as wisdom and power, would imply defects in God if applied to Him in the same sense they are applied to us. Since God is simple, it is impossible that we should know one part, or predication, of Him and not another. He argues that when one proves the negation of a thing believed to exist in God, one becomes more perfect and closer to knowledge of God. He quotes Psalm 4:4's approval of an attitude of silence towards God. Those who do otherwise commit profanity and blasphemy. It is not certain, however, whether Maimonides rejected the possibility of positive knowledge of the accidental attributes of God's action.

e. St. Thomas Aquinas

Unlike Augustine, who made little distinction between explaining the meaning of a theological proposition and giving an argument for it, Aquinas worked out a highly articulated theory of theological reasoning. St. Bonaventure, an immediate precursor to Aquinas, had argued that no one could attain to truth unless he philosophizes in the light of faith. Thomas held that our faith in eternal salvation shows that we have theological truths that exceed human reason. But he also claimed that one could attain truths about religious claims without faith, though such truths are incomplete. In the Summa Contra Gentiles he called this a "two-fold truth" about religious claims, "one to which the inquiry of reason can reach, the other which surpasses the whole ability of the human reason." No contradiction can stand between these two truths. However, something can be true for faith and false (or inconclusive) in philosophy, though not the other way around. This entails that a non-believer can attain to truth, though not to the higher truths of faith.

A puzzling question naturally arises: why are two truths needed? Isn't one truth enough? Moreover, if God were indeed the object of rational inquiry in this supernatural way, why would faith be required at all? In De Veritate (14,9) Thomas responds to this question by claiming that one cannot believe by faith and know by rational demonstration the very same truth since this would make one or the other kind of knowledge superfluous.

On the basis of this two-fold theory of truth, Aquinas thus distinguished between revealed (dogmatic) theology and rational (philosophical) theology. The former is a genuine science, even though it is not based on natural experience and reason. Revealed theology is a single speculative science concerned with knowledge of God. Because of its greater certitude and higher dignity of subject matter, it is nobler than any other science. Philosophical theology, though, can make demonstrations using the articles of faith as its principles. Moreover, it can apologetically refute objections raised against the faith even if no articles of faith are presupposed. But unlike revealed theology, it can err.

Aquinas claimed that the act of faith consists essentially in knowledge. Faith is an intellectual act whose object is truth. Thus it has both a subjective and objective aspect. From the side of the subject, it is the mind's assent to what is not seen: "Faith is the evidence of things that appear not" (Hebrews 11:1). Moreover, this assent, as an act of will, can be meritorious for the believer, even though it also always involves the assistance of God's grace. Moreover, faith can be a virtue, since it is a good habit, productive of good works. However, when we assent to truth in faith, we do so on the accepted testimony of another. From the side of what is believed, the objective aspect, Aquinas clearly distinguished between "preambles of faith," which can be established by philosophical principles, and "articles of faith" that rest on divine testimony alone. A proof of God's existence is an example of a preamble of faith. Faith alone can grasp, on the other hand, the article of faith that the world was created in time (Summa Theologiae I, q. 46, a. 2). Aquinas argued that the world considered in itself offers no grounds for demonstrating that it was once all new. Demonstration is always about definitions, and definitions, as universal, abstract from "the here and now." A temporal beginning, thus demonstrated, is ruled out tout court. Of course this would extend to any argument about the origination of the first of any species in a chain of efficient causes. Here Thomas sounds much as Kant later would in his antinomies. Yet by faith we believe the world had a beginning. However, one rational consideration that suggests, though not definitively, a beginning to the world is that the passage from one term to another includes only a limited number of intermediate points between them.

Aquinas thus characterizes the articles of faith as first truths that stand in a "mean between science and opinion." They are like scientific claims since their objects are true; they are like mere opinions in that they have not been verified by natural experience. Though he agrees with Augustine that no created intellect can comprehend God as an object, the intellect can grasp his existence indirectly. The more a cause is grasped, the more of its effects can be seen in it; and since God is the ultimate cause of all other reality, the more perfectly an intellect understands God, the greater will be its knowledge of the things God does or can do. So although we cannot know the divine essence as an object, we can know whether He exists and on the basis of analogical knowledge what must necessarily belong to Him. Aquinas maintains, however, that some objects of faith, such as the Trinity or the Incarnation, lie entirely beyond our capacity to understand them in this life.

Aquinas also elucidates the relationship between faith and reason on the basis of a distinction between higher and lower orders of creation. Aquinas criticizes the form of naturalism that holds that the goodness of any reality "is whatever belongs to it in keeping with its own nature" without need for faith (II-IIae, q.2, a.3). Yet, from reason itself we know that every ordered pattern of nature has two factors that concur in its full development: one on the basis of its own operation; the other, on the basis of the operation of a higher nature. The example is water: in a lower pattern, it naturally flows toward the center, but in virtue of a higher pattern, such as the pull of the moon, it flows around the center. In the realm of our concrete knowledge of things, a lower pattern grasps only particulars, while a higher pattern grasps universals.

Given this distinction of orders, Thomas shows how the lower can indeed point to the higher. His arguments for God's existence indicate this possibility. From this conviction he develops a highly nuanced natural theology regarding the proofs of God's existence. The first of his famous five ways is the argument from motion. Borrowing from Aristotle, Aquinas holds to the claim that, since every physical mover is a moved mover, the experience of any physical motion indicates a first unmoved mover. Otherwise one would have to affirm an infinite chain of movers, which he shows is not rationally possible. Aquinas then proceeds to arguments from the lower orders of efficient causation, contingency, imperfection, and teleology to affirm the existence of a unitary all-powerful being. He concludes that these arguments compel belief in the Judeo-Christian God.

Conversely, it is also possible to move from the higher to the lower orders. Rational beings can know "the meaning of the good as such" since goodness has an immediate order to the higher pattern of the universal source of being (II-IIae q.2, a.3). The final good considered by the theologian differs, however, from that considered by the philosopher: the former is the bonum ultimum proportionate to human powers; the latter is the beatific vision. Both forms of the ultimate good have important ramifications, since they ground not only the moral distinction between natural and supernatural virtues, but also the political distinction between ecclesial and secular power.

Aquinas concludes that we come to know completely the truths of faith only through the virtue of wisdom (sapientia). Thomas says that "whatever its source, truth is of the Holy Spirit" (Summa Theologiae, I-IIae, q. 109, a. 1). The Spirit "enables judgment according to divine truth" (II-IIae, q. 45, a. 1, ad 2). Moreover, faith and charity are prerequisites for the achievement of this wisdom.

Thomas's two-fold theory of truth develops a strong compatibilism between faith and reason. But it can be argued that after his time what was intended as a mutual autonomy soon became an expanding separation.

f. The Franciscan Philosophers

Duns Scotus, like his successor William of Ockham, reacted in a characteristic Franciscan way to Thomas's Dominican views. While the Dominicans tended to affirm the possibility of rational demonstrability of certain preambles of faith, the Franciscans tended toward a more restricted theological science, based solely on empirical and logical analysis of beliefs.

Scotus first restricts the scope of Aquinas's rational theology by refuting its ability to provide arguments that stop infinite regresses. In fact he is wary of the attempts of natural theology to prove anything about higher orders from lower orders. On this basis, he rejects the argument from motion to prove God's existence. He admits that lower beings move and as such they require a first mover; but he maintains that one cannot prove something definitive about higher beings from even the most noble of lower beings. Instead, Scotus thinks that reason can be employed only to elucidate a concept. In the realm of theology, the key concept to elucidate is that of infinite being. So in his discussion of God's existence, he takes a metaphysical view of efficiency, arguing that there must be not a first mover, but an actually self-existent being which makes all possibles possible. In moving towards this restricted form of conceptualist analysis, he thus gives renewed emphasis to negative theology.

Ockham then radicalized Scotus's restrictions of our knowledge of God. He claimed that the Greek metaphysics of the thirteenth century, holding to the necessity of causal connections, contaminated the purity of the Christian faith. He argued instead that we cannot know God as a deduction from necessary principles. In fact, he rejected the possibility that any science can verify any necessity, since nothing in the world is necessary: if A and B are distinct, God could cause one to exist without the other. So science can demonstrate only the implications of terms, premises, and definitions. It keeps within the purely conceptual sphere. Like Scotus, he held that any necessity in an empirical proposition comes from the divine order. He concluded that we know the existence of God, his attributes, the immortality of the soul, and freedom only by faith. His desire to preserve divine freedom and omnipotence thus led in the direction of a voluntaristic form of fideism.

5. The Renaissance and Enlightenment Periods

Ockham's denial of necessity in the scope of scientific findings perhaps surprisingly heralded the beginnings of a significant movement towards the autonomy of empirical science. But with this increased autonomy came a growing incompatibility between the claims of science and those of religious authorities. Thus the tension between faith and reason now became set squarely, for the first time, in the conflict between science and religion. This influx of scientific thinking undermined the hitherto unchallenged reign of Scholasticism. By the seventeenth century, what had begun as a criticism of the authority of the Church evolved into a full-blown skepticism regarding the possibility of any rational defense of fundamental Christian beliefs.

The Protestant Reformers shifted their emphasis from the medieval conception of faith as fides (belief that) to fiducia (faith in). Thus the attitude and commitment of the believer took on more importance. The Reformation brought in its wake a remarkable new focus on the importance of the study of Scripture as a warrant for one's personal beliefs.

The Renaissance also witnessed the development of a renewed emphasis on Greek humanism. In the early part of this period, Nicholas of Cusa and others took a renewed interest in Platonism.

a. The Galileo Controversy

In the seventeenth century, Galileo understood "reason" as scientific inference based on experiment and demonstration. Moreover, experimentation was not simply a matter of observation; it also involved measurement, quantification, and formulation of the properties of the objects observed. Though he was not the first to attempt this systematization (Archimedes had done the same centuries before), Galileo developed it to such an extent that he overthrew the foundations of Aristotelian physics. He rejected, for example, Aristotle's claim that every moving thing had a mover whose force had to be continually applied. In fact it was possible to have more than one force operating on the same body at the same time. Without the principle of a singular moved mover, it was also conceivable that God could have "started" the world, then left it to move on its own.

The finding that sparked the great controversy with the Catholic Church, however, was Galileo's defense of Copernicus's rejection of the Ptolemaic geocentric universe. Galileo used a telescope he had designed to confirm the hypothesis of the heliocentric system. He also hypothesized that the universe might be indefinitely large. Realizing that such conclusions were at variance with Church teaching, he followed Augustine's rule that an interpretation of Scripture should be revised when it confronts properly scientific knowledge.

The officials of the Catholic Church, with some exceptions, strongly resisted these conclusions and continued to champion a pre-Copernican conception of the cosmos. The Church formally condemned Galileo's findings on several grounds. First, the Church tended to hold to a rather literal interpretation of Scripture, particularly of the account of creation in the book of Genesis. Such interpretations did not square with the new scientific views of the cosmos, such as the claim that the universe is infinitely large. Second, the Church was wary of those aspects of the "new science" Galileo represented that were still mixed with magic and astrology. Third, these scientific findings upset much of the prevailing view of the cosmos that had undergirded the socio-political order the Church endorsed. Moreover, the new scientific views supported Calvinist views of determinism against the Catholic notion of free will. It took centuries before the Church officially rescinded its condemnation of Galileo.

b. Erasmus

Inspired by Greek humanism, Erasmus placed a strong emphasis on the autonomy of human reason and the importance of moral precepts. As a Christian, he distinguished among three forms of law: laws of nature (thoroughly engraved in the minds of all men, as St. Paul had argued), laws of works, and laws of faith. He was convinced that philosophers, who study laws of nature, could also produce moral precepts akin to those in Christianity. But Christian justification still comes ultimately only from the grace that can reveal and give a person the ability to follow the law of faith. As such, "faith cures reason, which has been wounded by sin." So, while the laws of works are for the most part prohibitions against certain sins, the laws of faith tend to be positive duties, such as the injunctions to love one's enemies and to carry one's cross daily.

c. The Protestant Reformers

Martin Luther restricted the power of reason to illuminate faith. Like many reformers, he considered the human being alone unable to free itself from sin. In The Bondage of the Will, he makes a strict separation between what man has dominion over (his dealings with the lower creatures) and what God has dominion over (the affairs of His kingdom and thus of salvation). Reason is often very foolish: it immediately jumps to conclusions when it sees a thing happen once or twice. But by its reflections on the nature of words and our use of language, it can help us to grasp our own spiritual impotence.

Luther thus rejected the doctrine of analogy, developed by Aquinas and others, as an example of the false power of reason. In his Heidelberg Disputation Luther claims that a theologian must look only "on the visible rearward parts of God as seen in suffering and the cross." Only from this perspective, do we keep our faith when we see, for example, that in the world the unjust prosper and the good undergo afflictions. Thus faith is primarily an act of trust in God's grace.

Luther thus stresses the gratuitousness of salvation. Roman Catholics traditionally held that faith is meritorious, and thus that salvation involves good works. Protestant reformers like Luther, on the other hand, held that faith is a pure gift. He thus tended to make the traditional Catholic emphasis on works look voluntaristic.

Like Luther, John Calvin appealed to the radical necessity of grace for salvation. This was embodied in his doctrine of election. But unlike Luther, Calvin gave a more measured response to the power of human reason to illuminate faith. In his Institutes of the Christian Religion, he argued that the human mind possesses, by natural instinct, an "awareness of divinity." This sensus divinitatis is that whereby we form specific beliefs about God in specific situations, e.g. when experiencing danger, beauty, or even guilt. Even idolatry can contain an aspect of this. So religion is not merely arbitrary superstition. And yet, the law of creation makes it necessary that we direct every thought and action to the goal of knowing God.

Despite this fundamental divine orientation, Calvin denied that a believer could build up a firm faith in Scripture through argument and disputation. He appealed instead to the testimony of the Spirit gained through a life of religious piety. Only through this testimony is certainty about one's beliefs obtained. We attain a conviction without reasons, but only through "nothing other than what each believer experiences within himself--though my words fall far beneath a just explanation of the matter." He realized, however, that "believers have a perpetual struggle with their own lack of faith." But these struggles never remove them from divine mercy.

Calvin is thus an incompatibilist of the transrational type: faith is not against, but is beyond human reason.

d. Continental Rationalism

René Descartes, even more profoundly than Calvin, moved reason into the confines of the thinking subject. But he expanded the power of reason to grasp firmly the preambles of faith. In his Meditations, he claimed to have provided what amounted to the most certain proofs of God possible. God becomes explicated by means of the foundation of subjective self-certainty. His proofs hinged upon his conviction that God cannot be a deceiver. Little room is left for faith.

Descartes's thinking prepared Gottfried Leibniz to develop his doctrine of sufficient reason. Leibniz first argued that all truths are reducible to identities. From this it follows that a complete or perfect concept of an individual substance involves all its predicates, whether past, present, or future. From this he constructed his principle of sufficient reason: there is no event without a reason and no effect without a cause. He used this not only to provide a rigorous cosmological proof for God's existence from the fact of motion, but also to defend the cogency of both the ontological argument and the argument from design.

In his Theodicy Leibniz responded to Pierre Bayle, a French philosophe, who gave a skeptical critique of rationalism and support of fideism. First, Leibniz held that all truths are complementary, and cannot be mutually inconsistent. He argued that there are two general types of truth: those that are altogether necessary, since their opposite implies contradiction, and those that are consequences of the laws of nature. God can dispense only with the latter laws, such as the law of our mortality. A doctrine of faith can never violate something of the first type; but it can be in tension with truths of the second sort. Thus though no article of faith can be self-contradictory, reason may not be able to fully comprehend it. Mysteries, such as that of the Trinity, are simply "above reason." But how do we weigh the probabilities favoring a doctrine of faith against those derived from general experience and the laws of nature? We must weigh these decisions by taking into account the existence and nature of God and the universal harmony by which the world is providentially created and ordered.

Leibniz insisted that one must respect the differences among the three distinct functions of reason: to comprehend, to prove, and to answer objections. In the faith/reason controversy, Leibniz thought that the third function takes on particular prominence. However, one sees vestiges of the first two as well, since an inquiry into truths of faith employs proofs of the infinite whose strength or weakness the reasoner can comprehend.

Baruch Spinoza, a Dutch philosopher, brought a distinctly Jewish perspective to his rigorously rationalistic analysis of faith. Noticing that religious persons showed no particular penchant for the virtuous life, he decided to read the Scriptures afresh without any presuppositions. He found that Old Testament prophecy, for example, concerned not speculative but primarily practical matters. Obedience to God was one such matter. He took this to entail that whatever remains effective in religion applies only to moral matters. He then claimed that the Scriptures do not conflict with natural reason, leaving it free rein. No revelation is needed for morality. Moreover, he was led to claim that though the various religions have very different doctrines, they are very similar to one another in their moral pronouncements.

e. Blaise Pascal

Pascal rejected the claims of medieval natural theologians, holding that reason can neither affirm nor deny God's existence. Instead he focused on the way that we should act given this ambiguity. He argued that since the negative consequences of believing are few (diminution of the passions, some pious actions) but the gain of believing is infinite (eternal life), it is more rational to believe than to disbelieve in God's existence. This assumes, of course, both that God would not grant eternal life to a non-believer and that sincerity in one's belief in God is not a requirement for salvation. As such, Pascal introduced an original form of rational voluntarism into the analysis of faith.

f. Empiricism

John Locke lived at a time when the traditional medieval view of a unified body of articulate wisdom no longer seemed plausible. Yet he still held to the basic medieval idea that faith is assent to specific propositions on the basis of God's authority. Unlike Aquinas, however, he argued that faith is not a state between knowledge and opinion, but a form of opinion (doxa). But he developed a kind of apology for Christianity: an appeal to revelation, without an appeal to enthusiasm or inspiration. His aim was to demonstrate the "reasonableness of Christianity." Though faith and reason have strictly distinct provinces, faith must be in accord with reason. Faith cannot convince us of what contradicts, or is contrary to, our knowledge. We cannot assent to a revealed proposition if it be contradictory to our clear intuitive knowledge. But propositions of faith are, nonetheless, understood to be "above reason."

Locke specifies two ways in which matters of faith can be revealed: either through "original revelation" or "traditional revelation." Moses receiving the Decalogue is an example of the former; his communication of its laws to the Israelites is an example of the latter. The truth of original revelation cannot be contrary to reason. But traditional revelation is even more dependent on reason, since if an original revelation is to be communicated, it cannot be understood unless those who receive it have already received a correlate idea through sensation or reflection and understood the empirical signs through which it is communicated.

For Locke, reason justifies beliefs, and assigns them varying degrees of probability based on the power of the evidence. But, like Aquinas, Locke held to the evidence not only of logical/mathematical and certain self-affirming existential claims, but also "that which is evident to the senses." All of these veridical beliefs depend upon no other beliefs for their justification. But faith requires the even less certain evidence of the testimony of others. In the final analysis, faith's assent is made not by a deduction from reason, but by the "credit of the proposer, as coming from God, in some extraordinary way of communication." Thus Locke understands faith as a probable consent.

Locke also developed a version of natural theology. In An Essay Concerning Human Understanding he claims that the complex ideas we have of God are made up of ideas of reflection. For example, we take the ideas of existence, duration, pleasure, happiness, knowledge, and power and "enlarge every one of these with our idea of Infinity; and so putting them together, make our complex idea of God." We cannot know God's own essence, however.

David Hume, like Locke, rejected rationalism, but developed a more radical kind of empiricism than Locke had. He argued that concrete experience is "our only guide in reasoning concerning matters of fact." Thus he rejected the possibility of arguing for the truths of faith on the basis either of natural theology or the evidence of miracles. He supported this conclusion on two grounds. First, natural theology requires certain inferences from everyday experience. The argument from design holds that we can infer a single designer from our experience of the world. Though Hume agrees that we have experiences of the world as an artifact, he claims that we cannot make any probable inference from this fact to the quality, power, or number of the artisans. Second, Hume argues that miracles are not only often unreliable as evidence for belief, but in fact are a priori impossible. A miracle by definition is a transgression of a law of nature, and yet by their very nature these laws admit of no exceptions. Thus we cannot even call it a law of nature that has been violated. He concludes that reason and experience fail to establish divine infinity, God's moral attributes, or any specification of the ongoing relationship between the Deity and man. But rather than concluding that his stance towards religious beliefs was one of atheism or even a mere Deism, Hume argued that he was a genuine Theist. He believed that we have a genuine natural sentiment by which we long for heaven. The one who is aware of the inability of reason to affirm these truths in fact is the person who can grasp revealed truth with the greatest avidity.

g. German Idealism

Immanuel Kant was heavily influenced by Descartes's anthropomorphism and Spinoza's and Jean Jacques Rousseau's restriction of the scope of religion to ethical matters. Moreover, he wanted a view that was consistent with Newton's discoveries about the strict natural laws that govern the empirical world. To accomplish this, he steered the scope of reason away from metaphysical, natural, and religious speculation altogether.

Kant's claim that theoretical reason was unable to grasp truths about God effectively continued the contraction of the authority of scientia in matters of faith that had been occurring since the late medieval period. He rejected, then, the timeless and spaceless God of revelation characteristic of the Augustinian tradition as beyond human ken. This is most evident in his critique of the cosmological proof for the existence of God in The Critique of Pure Reason. This move left Kant immune from the threat of unresolvable paradoxes. Nonetheless he did allow the concept of God (as well as the ideas of immortality and the soul) to become not a constitutive but a regulative ideal of reason. God's existence remains a necessary postulate specifically for the moral law. God functions as the source of the summum bonum. Only God can guarantee an ideal conformity of virtue and happiness, which is required to fulfill the principle that "ought implies can." This grounded what Kant called a faith distinct from knowledge or comprehension, but nonetheless rational. Rational faith involves reliance neither upon God's word nor the person of Christ, but only upon the recognition of God as the source of how we subjectively realize our duties. God is the cause of our moral purposes as rational beings in nature. Yet faith is "free belief": it is the permanent principle of the mind to assume as true, on account of the obligation in reference to it, that which is necessary to presuppose as condition of the possibility of the highest moral purpose. Like Spinoza, Kant makes all theology moral theology.

Since faith transcends the world of experience, it is neither doubtful nor merely probable. Thus Kant's view of faith is complex: it has no theoretical grounds, yet it has a rational basis that provides more or less stable conviction for believers. He provided a religion grounded without revelation or grace. It ushered in a new immanentism in rational views of belief.

G.W.F. Hegel, at the peak of German Idealism, took up Kant's immanentism but moved it in a more radical direction. He claimed that in Kant, "philosophy has made itself the handmaid of a faith once more," though one not externally imposed but autonomously constituted. Hegel approved of the way Kant helped to modify the Enlightenment's dogmatic emphasis on the empirical world, particularly as evidenced in the way Locke turned philosophy into empirical psychology. But though Kant held to an "idealism of the finite," Hegel thought that Kant did not extend his idealism far enough. Kant's regulative view of reason was doomed to regard faith and knowledge as irrevocably opposed. Hegel argued that a further development of idealism shows how faith and knowledge are related and synthesized in the Absolute.

Hegel reinterpreted the traditional proofs for God's existence, rejected by Kant, as authentic expressions of the need of finite spirit to elevate itself to oneness with God. In religion this attempt to identify with God is accomplished through feeling. Feelings are, however, subject to conflict and opposition. But they are not merely subjective. The content of God enters feeling such that the feeling derives its determination from this content. Thus faith, implanted in one's heart, can be defended by the testimony of the indwelling spirit of truth.

Hegel's thoroughgoing rationalism ultimately yields a form of panentheism in which all finite beings, though distinct from natural necessity, have no existence independent from it. "There is only one Being… and things by their very nature form part of it." God is the being in whom spirit and nature are united. Thus faith is merely an expression of a finitude comprehensible only from the rational perspective of the infinite. Faith is merely a moment in our transition to absolute knowledge.

6. The Nineteenth Century

Physics and astronomy were the primary scientific concerns for theologians in the seventeenth and eighteenth centuries. But in the nineteenth and twentieth centuries the sciences of geology, sociology, psychology, and biology became more pronounced.

Kant's understanding of God as a postulate of practical reason - and his dismissal of metaphysical and empirical support for religion -- soon led to the idea that God could be a mere projection of practical feeling or psychological impulse. Such an idea echoed Hobbes's claim that religion arises from fear and superstition. Sigmund Freud claimed, for example, that religious beliefs were the result of the projection of a protective father figure onto our life situations. Although such claims about projection seem immune from falsification, the Freudian could count such an attempt to falsify itself simply as rationalization: a masking of a deeper unconscious drive.

The nineteenth-century biological development most significant for theology was Charles Darwin's theory of natural selection. It explained all human development simply on the basis of the progressive adaptation of organisms to their physical environment. No reference to a mind or rational will was required to explain any human endeavor. Darwin himself once had believed in God and the immortality of the soul. But he later found that these beliefs could not count as evidence for the existence of God. He ended up an agnostic. On the one hand he felt compelled to affirm a First Cause of such an immense and wonderful universe and to reject blind chance or necessity, but on the other hand he remained skeptical of the capacities of humans "developed from a mind as low as that possessed by the lowest animals." Such naturalistic views made it difficult to support any argument for God's existence, particularly a design argument.

Not all nineteenth-century scientific thinking, however, yielded skeptical conclusions. Émile Durkheim, in his sociological study The Elementary Forms of Religious Life, took the scientific critiques of religion seriously, but gave them a much different interpretation. He concluded that the cultic practices of religion have the non-illusory quality of producing measurable good consequences in their adherents. Moreover, he theorized that the fundamental categories of thought, and even of science, have religious origins. Almost all the great social institutions were born of religion. He was led to claim that "the idea of society is the soul of religion": society derived from religious forces.

In the context of these various scientific developments, philosophical arguments about faith and reason developed in several remarkable directions in the nineteenth century.

a. Romanticism

Friedrich Schleiermacher was a liberal theologian who was quite interested in problems of biblical interpretation. He claimed that religion constituted its own sphere of experience, unrelated to scientific knowledge. Thus religious meaning is independent of scientific fact. His Romantic fideism would have a profound influence on Kierkegaard.

b. Socialism

Karl Marx is well known as an atheist who had strong criticisms of all religious practice. Much of his critique of religion had been derived from Ludwig Feuerbach, who claimed that God is merely a psychological projection meant to compensate for the suffering people feel. Rejecting wholesale the validity of such wishful thinking, Marx claimed not only that all sufferings are the result of economic class struggle but that they could be alleviated by means of a Communist revolution that would eliminate economic classes altogether. Moreover, Marx claimed that religion was a fundamental obstacle to such a revolution, since it was an "opiate" that kept the masses quiescent. Religious beliefs thus arise from a cognitive malfunction: they emerge from a "perverted world consciousness." Only a classless communist society, which Marx thought would emerge when capitalism met its necessary demise, would eliminate religion and furnish true human emancipation.

c. Existentialism

Søren Kierkegaard, arguably the father of existentialism, was a profound religious thinker. He came up with an unequivocal view of faith and reason much like Tertullian's strong incompatibilism. If Kant argued for religion within the limits of reason alone, Kierkegaard called for reason within the limits of religion alone. Faith requires a leap. It demands risk. All arguments that reason derives for a proof of God are in fact viciously circular: one can only reason about the existence of an object that one already assumes to exist. Hegel tried to claim that faith could be elevated to the status of objective certainty. Seeking such certainty, moreover, Kierkegaard considered a trap: what is needed is a radical trust. The radical trust of faith is the highest virtue one can reach.

Kierkegaard claimed that all essential knowledge intrinsically relates to an existing individual. In Either/Or, he outlined three general forms of life individuals can adopt: the aesthetic, ethical, and ethico-religious. The aesthetic is the life that seeks pleasure. The ethical is that which stresses the fulfillment of duties. Neither of these attains to the true individuality of human existence. But in the ethico-religious sphere, truth emerges in the authenticity of the relationship between a person and the object of his attention. With authenticity, the importance is on the "how," not the "what," of knowledge. It attains to a subjective truth, in which the sincerity and intensity of the commitment is key. This authenticity is equivalent to faith understood as "an objective uncertainty held fast in an appropriation-process of the most passionate inwardness." The coexistence of this "objective uncertainty" with "passionate inwardness" is strikingly paradoxical. Kierkegaard makes a similarly paradoxical claim in holding that "nothing historical can become infinitely certain for me except the fact of my own existence (which again cannot become infinitely certain for any other individual, who has infinite certainty only of his own existence) and this is not something historical." Thus faith can never be a matter of objective certainty; it involves no reckoning of probabilities, it is not an intellectual acceptance of a doctrine at all. Faith involves a submission of the intellect. It is not only hostile to but also completely beyond the grasp of reason.

Though he never read Kierkegaard, Friedrich Nietzsche came up with remarkable parallels to his thought. Both stressed the centrality of the individual, a certain disdain for public life, and a hatred of personal weakness and anonymity. They also both attacked certain hypocrisies in Christendom and the overstated praise for reason in Kant and Hegel. But Nietzsche wanted no part of Kierkegaard's new Christian individual, and instead defended the aesthetic life disdained by Kierkegaard against both morality and Christianity. So he critiqued religion not from Kierkegaard's epistemological perspective, but from a highly original moral perspective.

Nietzsche claimed that religion breeds hostility to life, understood broadly as will to power. Religion produces two types of character: a weak servile character that is at the same time strongly resentful towards those in power, and an Übermensch, or superman, who creates his own values. In The Joyful Wisdom Nietzsche proclaims that God as a protector of the weak, though once alive, is now dead, and that we have rightly killed him. Now, instead, he claims that we instead need to grasp the will to power that is part of all things and guides them to their full development completely within the natural world. For humans Nietzsche casts the will to power as a force of artistic and creative energy.

d. Catholic Apologists

Roman Catholics traditionally claimed that the task of reason was to make faith intelligible. In the later part of the nineteenth century, John Cardinal Newman worked to defend the power of reason against those intellectuals of his day who challenged its efficacy in matters of faith. Though maintaining the importance of reason in matters of faith, he restricted its ability to arrive at absolute certainties.

In his Grammar of Assent, Newman argued that one assents to God on the basis of one's experience and principles. And one can do this by means of a kind of rational demonstration. And yet this demonstration is not actually reproducible by others; each of us has a unique domain of experience and expertise. Some are simply given the capacity and opportunities to make this assent to what is demonstrated; others are not. Drawing from Aristotle's Nicomachean Ethics, Newman argues that "a special preparation of mind is required for each separate department of inquiry and discussion." He stressed the continuity between religious belief and other kinds of belief that involve complex sets of phenomena. He claims that Locke, for example, overlooked how human nature actually works, imposing instead his own idea of how the mind is to act on the basis of deduction from evidence. Had Locke looked more closely at experience, he would have noticed that much of our reasoning is tacit and informal. It cannot usually be reconstructed from a set of premises. Rather it is the accumulation of probabilities, independent of each other, arising out of the circumstances of the particular case. No single consideration usually suffices to generate the required conclusion, but taken together, the considerations may converge upon it. This is usually what is called a moral proof for belief in a proposition. In fact, we are justified in holding such beliefs even after we have forgotten what the warrant was. This probabilistic approach to religious assent continued in the later thinking of Basil Mitchell.

e. Pragmatism

William James followed in the pragmatist tradition inaugurated by Charles Sanders Peirce. Pragmatists held that all beliefs must be tested, and those that failed to garner sufficient practical value ought to be discarded.

In his Will to Believe, James was a strong critic of W.K. Clifford's uncompromising empiricism. Clifford, like Hume, had argued that acting on beliefs or convictions alone, unsupported by evidence, was pure folly. He likened such acting to that of an irresponsible shipowner who allows an untrustworthy ship to be ready to set sail, merely thinking it safe, and then gives "benevolent wishes" for those who would set sail in it. Clifford concluded that we have a duty to act only on well-founded beliefs. If we have no grounds for belief, we must suspend judgment. This provided the basis for an ethics of belief quite different from Newman's. Clifford's evidentialism inspired subsequent philosophers such as Bertrand Russell and Michael Scriven.

James argued, pace Clifford, that life would be severely impoverished if we acted only on completely well-founded beliefs. Like Newman, James held that belief admits of a wide spectrum of commitment: from tentative to firm. The feelings that attach to a belief are significant. He defended the need we have, at times, to allow our "passional tendencies" to influence our judgments. Thus, like Pascal, he took up a voluntarist argument for religious belief, though one not dependent solely upon a wager. There are times, admittedly few, when we must act on our beliefs passionately held but without sufficient supporting evidence. These rare situations must be both momentous, once-in-a-lifetime opportunities, and forced, such that the situation offers the agent only two options: to act or not to act on the belief. Religious beliefs often take on both of these characteristics. Pascal had realized the forced aspect of Christian belief regarding salvation: God would not save the disbeliever. As a result, James claimed that a religious belief could be a genuine hypothesis for a person to adopt.

James does, however, also give some evidential support for this choice to believe. We have faith in many things in life -- in molecules, the conservation of energy, democracy, and so forth -- that are based on evidence of their usefulness for us. But even in these cases "Our faith is faith in some one else's faith." Our mental life effectively comprises a constant interplay between volitions and beliefs. Nonetheless, James believed that while philosophers like Descartes and Clifford, not wanting ever to be dupes, focused primarily on the need to avoid error, even to the point of letting truth take its chance, he as an empiricist must hold that the pursuit of truth is paramount and the avoidance of error secondary. His position entailed that dupery in the face of hope is better than dupery in the face of fear.

In "The Sentiment of Rationality" James concludes that faith is "belief in something concerning which doubt is still theoretically possible; and as the test of belief is willingness to act, one may say that faith is the readiness to act in a cause the prosperous issue of which is not certified to us in advance." So, faith is not only compatible with doubt, but it requires its possibility. Faith is oriented towards action: it is a kind of "working hypothesis" needed for practical life.

7. The Twentieth Century

Darwin's scientific thesis of natural selection and Freud's projective views of God continued to have a profound impact on many aspects of the philosophy of religion in the twentieth century. In fact, the interplay between faith and reason began to be cast, in many cases, simply as the conflict between science and religion.

Not all scientific discoveries were used to invoke greater skepticism about the validity of religious claims, however. For example, in the late twentieth century some physicists endorsed what came to be called the anthropic principle. The principle derives from the claim of some physicists that a number of factors in the early universe had to coordinate in a highly statistically improbable way to produce a universe capable of sustaining advanced life forms. Among the factors are the mass of the universe and the strengths of the four basic forces (electromagnetism, gravitation, and the strong and weak nuclear forces). It is difficult to explain this fine tuning. Many who adhere to the anthropic principle, such as Holmes Rolston, John Leslie, and Stephen Hawking, argue that it demands some kind of extra-natural explanation. Some think it suggests possibilities for a new design argument for God's existence. However, one can hold the anthropic principle and still deny that it has religious implications. It is possible to argue that it indicates not a single creator creating a single universe, but indeed many universes, either contemporaneous with our own or in succession to it.

The twentieth century witnessed numerous attempts to reconcile religious belief with new strands of philosophical thinking and with new theories in science.

a. Logical Positivism and Its Critics

Many philosophers of religion in the twentieth century took up a new appreciation for the scope and power of religious language. This was prompted to a large extent by the emphasis on conceptual clarity that dominated much Western philosophy, particularly early in the century.

This emphasis on conceptual clarity was evidenced especially in logical positivism. A.J. Ayer and Antony Flew, for example, argued that all metaphysical language fails to meet a standard of empirical verifiability and is thus meaningless. Metaphysical claims are not in principle falsifiable. As such, their claims are neither true nor false. They make no verifiable reference to the world. Religious language shares these characteristics with metaphysical language. Flew emphasized that religious believers generally cannot even state the conditions under which they would give up their faith claims. Since their claims then are unfalsifiable, they are not objects for rational determination.

One response by compatibilists to these arguments of logical positivists was to claim that religious beliefs, though meaningless in the verificational sense, are nonetheless important in providing the believer with moral motivations and self-understanding. This is an anti-realist understanding of faith. An example of this approach is found in R.M. Hare. Responding to Flew, he admitted that religious faith consists of a set of unfalsifiable assumptions, which he termed "bliks." But Hare argued that our practical dealings with the everyday world involve numerous such "bliks." Though some of these principles are faulty, we cannot but have some in order to live in the world.

Basil Mitchell responded to Flew's claim that religious beliefs cannot be falsified. Mitchell argued that although rational and scientific considerations can and ought at times to prompt revisions of one's religious belief, no one can give a general determination of exactly at what point a set of evidence ought to count decisively against a faith claim. It is up to each believer to decide when this occurs. To underscore this claim, Mitchell claimed that the rationality of religious beliefs ought to be determined not foundationally, as deductions from rational first principles, but collectively from the gathering of various types of evidence into a pattern. Nonetheless, he realized that this accumulation of evidence, as the basis for a new kind of natural theology, might not be strong enough to counter the skeptic. In the spirit of Newman, Mitchell concluded by defending a highly refined cumulative probabilism in religious belief.

Another reaction against logical positivism stemmed from Ludwig Wittgenstein. In his "Lectures on Religious Belief," he argued that there is something unique about the linguistic framework of religious believers. Their language makes little sense to outsiders. Thus one has to share in their form of life in order to understand the way the various concepts function in their language games. The various language games form a kind of "family resemblance." Wittgenstein concluded that those who demand a nonperspectival impartial way of assessing the truth value of a religious claim are asking for something impossible. From Wittgenstein's perspective, science and religion are just two different types of language games. This demand to take on an internal perspective in order to assess religious beliefs commits Wittgenstein to a form of incompatibilism between faith and reason. Interpreters of Wittgenstein, like Norman Malcolm, claimed that although this entails that religious beliefs are essentially groundless, so are countless other everyday beliefs, such as in the permanence of our objects of perception, in the uniformity of nature, and even in our knowledge of our own intentions.

Wittgenstein, like Kierkegaard, claimed that proofs for God's existence have little to do with actual belief in God. He did think that life itself could "educate" us about God's existence. In Culture and Value he claims that sufferings can have a great impact on one's beliefs. "These neither show us God in the way a sense impression shows us an object, nor do they give rise to conjectures about him. Experiences, thoughts--life can force this concept on us." D.Z. Phillips also holds the view that religion has its own unique criteria for acceptable belief.

John Hick, in Faith and Knowledge, modifies the Wittgensteinian idea of forms of life to analyze faith claims in a novel manner. Hick claimed that this could shed light upon the epistemological (fides) analysis of faith. From such an analysis follows the non-epistemological thinking (fiducia) that guides actual practice.

Taking up the epistemological analysis, Hick first criticizes the voluntarisms of Pascal and James as "remote from the state of mind of such men as the great prophets." He criticizes James in particular for reducing truth to utility. Hick argues instead for the importance of rational certainty in faith. He posits that there are as many types of grounds for rational certainty as there are kinds of objects of knowledge. He claims that religious beliefs share several crucial features with any empirical claim: they are propositional; they are objects of assent; an agent can have dispositions to act upon them; and we feel convictions for them when they are challenged. Nonetheless, Hick realizes that there are important ways in which sense beliefs and religious beliefs are distinct: sense perception is coercive, while religious perception is not; sense perception is universal, while religious is not; and sense perception is highly coherent within space and time, while religious awareness among different individuals is not. It may in fact be rational for a person who has not had experiences that compel belief to withhold belief in God.

From these similarities and differences between faith claims and claims of reason, Hick concludes that religious faith is the noninferential and unprovable basic interpretation either of a moral or religious "situational significance" in human experience. Faith is not the result of logical reasoning, but rather a profession that God "as a living being" has entered into the believer's experience. This act of faith situates itself in the person's material and social environment. Religious faith interprets reality in terms of the divine presence within the believer's human experience. Although the person of faith may be unable to prove or explain this divine presence, his or her religious belief still acquires the status of knowledge similar to that of scientific and moral claims. Thus even if one could prove God's existence, this fact alone would be a form of knowledge neither necessary nor sufficient for one's faith. It would at best only force a notional assent. Believers live not by confirmed hypotheses, but by an intense, coercive, indubitable experience of the divine.

Sallie McFague, in Models of God, argues that religious thinking requires a rethinking of the ways in which religious language employs metaphor. Religious language is for the most part neither propositional nor assertoric. Rather, it functions not to render strict definitions, but to give accounts. To say, for example, "God is mother," is neither to define God as a mother nor to assert an identity between them, but rather to suggest that we consider what we do not know how to talk about -- relating to God -- through the metaphor of a mother. Moreover, no single metaphor can function as the sole way of expressing any aspect of a religious belief.

b. Philosophical Theology

Many Protestant and Roman Catholic theologians in the twentieth century responded to the criticisms of religious belief, leveled by atheistic existentialists, naturalists, and linguistic positivists, by forging a new understanding of Christian revelation.

Karl Barth, a Reformed Protestant, provided a startlingly new model of the relation between faith and reason. He rejected Schleiermacher's view that the actualization of one's religious motivation leads to some sort of established union between man and God. Barth argued instead that revelation is aimed at a believer who must receive it before it is a revelation. This means that one cannot understand a revelation without already, in a sense, believing it. God's revelation of Himself, His very communication of that self, is not distinct from Himself. "In God's revelation God's Word is identical with God Himself" (in Church Dogmatics ii, I). Moreover, Barth claimed that God's revelation has its reality and truth wholly and in every respect, both ontically and noetically, within itself. Revelation cannot be made true by anything else. The fullness of the "original self-existent being of God's Word" reposes and lives in revelation. This renders the belief in an important way immune from both critical rational scrutiny and the reach of arguments from analogy.

Barth held, however, that relative to the believer, God remains "totally other" (totaliter aliter). Our selfhood stands in contradiction to the divine nature. Religion is, in fact, "unbelief": our attempts to know God from our own standpoint are wholly and entirely futile. This was a consistent conclusion of his dialectical method: the simultaneous affirmation and negation of a given theological point. Barth was thus an incompatibilist who held that the ground of faith lies beyond reason. Yet he urged that a believer is nonetheless always to seek knowledge and that religious beliefs have marked consequences for daily life.

Karl Rahner, arguably the most influential Catholic theologian of the twentieth century, was profoundly influenced by Barth's dialectical method. But Rahner argued that God's mystical self-revelation of Himself to us through an act of grace is not predestined for a few but extends to all persons: it constitutes the "supernatural existential" that grounds all intelligibility and action. It lies beyond proof or demonstration. Thus all persons, living in this prior and often unthematized state of God's gift, are "anonymous Christians." All humans can respond to God's self-communication in history. Rahner held thus that previous religions embodied various forms of knowledge of God and thus were lawful religions. But now God has revealed his fullness to humans through the Christian Incarnation and word. This explicit self-realization is the culmination of the history of the previously anonymous Christianity. Christianity now understands itself as an absolute religion intended for all. This claim is itself basic to its self-understanding.

Rahner's claim about the gratuitous gifts of grace in all humans reaches beyond a natural theology. Nonetheless one form of evidence to which he appeals for its rational justification is the stipulation that humans, social by nature, cannot achieve a relationship to God "in an absolutely private interior reality." The individual must encounter the natural divine law, not in his role as a "private metaphysician" but according to God's will in a religious and social context. Rahner thus emphasized the importance of culture as a medium in which religious faith becomes understood. He thus forged a new kind of compatibilism between faith and rationality.

c. Neo-Existentialism

Paul Tillich, a German Protestant theologian, developed a highly original form of Christian apologetics. In his Systematic Theology, he laid out an original method, called correlation, that explains the contents of the Christian faith through existential questions and theological answers in mutual interdependence. Existential questions arise from our experiences of transitoriness, finitude, and the threat of nonbeing. In this context, faith is what emerges as our thinking about our "ultimate concern." Only those who have had these kinds of experiences can raise the questions that open them to understand the meaning of the Christian message. Secular culture provides numerous media, such as poetry, drama, and novels, in which these questions are engendered. In turn, the Christian message provides unique answers to these questions that emerge from our human existence. Tillich realized that such an existentialist method -- with its high degree of correlation between faith and everyday experience, and thus between the human and the divine -- would evoke protest from thinkers like Barth.

Steven Cahn approaches Christian existentialism from a less sociological and more psychological angle than Tillich. Cahn agrees with Kierkegaard's claim that most believers in fact care little about proofs for the existence of God. Neither naturalist nor supernaturalist religion depends upon philosophical proofs for God's existence. It is impossible to prove definitively the testimony of another's supposedly self-validating experience. One is always justified in entertaining either philosophical doubts concerning the logical possibility of such an experience or practical doubts as to whether the person has undergone it. Moreover, these proofs, even if true, would furnish the believer with no moral code. Cahn concludes that one must undergo a self-validating personal experience in which one senses the presence of God. All moral imperatives derive from learning the will of God. One may, however, join others in a communal effort to forge a moral code.

d. Neo-Darwinism

The Darwinistic thinking of the nineteenth century continued to have a strong impact on the philosophy of religion. Richard Dawkins, in his Blind Watchmaker, uses the theory of natural selection to construct an argument against the cogency of religious faith. He argues that the theory of evolution by gradual but cumulative natural selection is the only theory that is in principle capable of explaining the existence of organized complexity in the world. He admits that this organized complexity is highly improbable, yet the best explanation for it is still a Darwinian worldview. Dawkins even claims that Darwin effectively solved the mystery of our own existence. Since religions remain firm in their conviction that God guides all biological and human development, Dawkins concludes that religion and science are in fact doomed rivals. They make incompatible claims. He resolves the conflict in favor of science.

e. Contemporary Reactions Against Naturalism and Neo-Darwinism

Contemporary philosophers of religion respond to the criticisms of naturalists, like Dawkins, from several angles.

Alvin Plantinga thinks that natural selection demonstrates only the function of species survival, not the production of true beliefs in individuals. Yet he rejects traditional Lockean evidentialism, the view that a belief needs adequate evidence as a criterion for its justification. But he refuses to furnish a fideist or existentialist condition for the truth of religious beliefs. Rather he claims that religious beliefs are justified without reasons and are, as such, "properly basic." These he sets in contrast to the claims of natural theology to form the basis of his "Reformed epistemology." Other Reformed epistemologists are W.P. Alston and Nicholas Wolterstorff.

Plantinga builds his Reformed epistemology by means of several criticisms of evidentialism. First, the standards of evidence in evidentialism are usually set too high. Most of our reliable everyday beliefs are not subject to such strict standards. Second, the set of arguments that evidentialists attack is traditionally very narrow. Plantinga suggests that they tend to overlook much of what is internally available to the believer: important beliefs concerning beauty and physical attributes of creatures, play and enjoyment, morality, and the meaning of life. Third, those who employ these epistemological criticisms often fail to realize that the criticisms themselves rest upon auxiliary assumptions that are not themselves epistemological, but rather theological, metaphysical, or ontological. Finally, and most importantly, not all beliefs are subject to such evidence. Beliefs in memories or other minds, for example, generally appeal to something properly basic beyond the reach of evidence. What is basic for a religious belief can be, for example, a profound personal religious experience. In short, being self-evident, incorrigible, or evident to the senses is not a necessary condition of proper basicality. We argue to what is basic from below rather than from above. These claims are tested by a relevant set of "internal markers." Plantinga does admit that in fact no widespread acceptance of the markers can be assumed. He concludes, though, that religious believers cannot be accused of shirking some fundamental epistemic duty by relying upon this basic form of evidence.

Epistemological views such as Plantinga's entail that there is an important distinction between determining whether or not a religious belief is true (de facto) and whether or not one ought to hold or accept it (de jure). On de jure grounds, for example, one can suggest that beliefs are irrational because they are produced either by an errant process or by a proper process aimed at the wrong end. Theism has been criticized on both of these grounds. But since Christianity purports to be true, the de jure considerations must reduce ultimately to de facto considerations.

J.J. Haldane criticizes the scientific critiques of religion on the grounds that they themselves make two unacknowledged assumptions about reality: the existence of regular patterns of interaction, and the reality of stable intelligences in humans. These assumptions themselves cannot be proven by scientific inquiry. Thus it seems odd to oppose scientific and religious ways of thinking about reality as rivals. Science itself is faith-like in resting upon these assumptions; theology carries forward a scientific impulse in asking how the order of the world is possible. But what do we make of the fact that scientific models often explain the world better than religious claims? What troubles Haldane is that the explanatory reductionism the physical sciences employ is often thought to be entailed by the ontological reduction they assume. For example, the fact that one can give a complete description of human action and development on a biological level alone is often thought to mean that all action and development can be explained according to biological laws. Haldane rejects this thesis, arguing that certain mental events might be ontologically reducible to physical events, but talk of physical events cannot be equally substituted for mental events in the order of explanation. Such argumentation reflects the general direction of the anomalous monism proposed by Donald Davidson. Haldane concludes that language can be a unique source of explanatory potential for all human activity.

Like Haldane, Nancey Murphy also argues for a new form of compatibilism between religion and science. In Science and Theology she argues that the differences between scientific and theological methodologies are only of degree, not kind. She admits that scientific methodology has fundamentally changed the way we think about the world. Consequently, theology in the modern period has been preoccupied with the question of theological method. But she thinks that theological method can develop to meet the same critical standards as scientific method.

Scientific thinking in the twentieth century in particular has been developing away from foundationalism: the derivation of theories from indubitable first principles. Willard van Orman Quine and others urged that scientific methodologists give up on foundationalism. He claimed that knowledge is like a web or net of beliefs: some beliefs are simply more apt to be adopted or rejected in certain situations than others are. Murphy sees that theology, too, is developing away from the foundationalism that literal interpretations of Scripture used to provide. Now it tends to emphasize the importance of religious experience and the individual interpretation of beliefs. But two problems await theology's move away from foundationalism: subjectivism and circularity. The subjectivism emerges from the believer's inability to make the leap from his or her private inner experience to the real world. The circularity emerges from the lack of any kind of external check on interpretation. Alasdair MacIntyre is concerned with the latter problem. He claims that evidence for belief requires a veridical experience for each subsequent belief that arises from it. But Murphy finds this approach still close to foundationalism. Instead she develops two non-foundational criteria for the interpretation of a religious belief: that several related but differing experiences give rise to the belief, and that the belief have publicly observable consequences emanating from it.

To illustrate this approach to interpretation of beliefs, Murphy considers Catherine of Siena's claim that a true "verification" of a revelation from God requires that the believer subsequently engage in publicly observable acts of humility and charity. The verification also requires what Murphy calls discernment. Discernment reveals analogous experiences and interpretations in other believers and a certain reliability in the actions done. It functions the same way that a theory of instrumentation does in science. Discernment often takes place within a community of some sort.

But are these beliefs, supported by this indirect verification and communal discernment, still in any sense falsifiable? Murphy notes that religious experience has clashed with authoritative theological doctrine numerous times. But it has also ended up correcting it, for example in the way that Catherine of Siena's writings eventually changed the Roman Catholic tradition in which she was writing.

Murphy claims, however, that until theology takes on the status of a kind of knowledge of a reality independent of the human subject, it is unlikely that theology and science will have a fruitful dialogue. But she thinks that turning from the subjectivization of the liberal turn in theology to discourse about human religiosity will help this dialogue.

A strong critic of the negative impact of scientific naturalism on faith is the Canadian philosopher Charles Taylor. Taylor finds in all naturalisms a kind of "exclusive humanism" that not only puts humans at the center of the universe, but denies them any authentic aspirations to goals or states beyond the world in which they live. In modernity naturalism has led inexorably to secularization. In Sources of the Self, Taylor argues that secularization, inspired by both Luther and Calvin, first resulted in the prioritizing of "ordinary life" of marriage and family over that of contemplative lives in the vowed or clerical state. In later phases it led to the transformation of cultural practices into forms that are neutral with regard to religious affiliation. But secularization is not a prima facie problem for any religious believer, since it does not preclude the possibility of religious faith or practices per se. Moreover, secularization has made possible the development of legal and governmental structures, such as human rights, better fit for pluralistic societies containing persons of a number of different religious faiths. Thus it has made it easier for Christians to accept full rights for atheists or violators of the Christian moral code. Nonetheless, Taylor sees problems that secularism poses for the Christian faith. It can facilitate a marriage between the Christian faith and a particular form of culture.

In contrast to naturalism, Taylor urges the adoption of a unique transcendental point of view. Such a view does not equate a meaningful life with a full or good life. Instead, a transcendental view finds in suffering and death not only something that matters beyond life, but something from which life itself originally draws. Thus natural life is to be subordinated to the "abundant life" that Jesus advocates in his Good Shepherd discourse (John 10:10). This call of the transcendental requires, ultimately, a conversion or a change of identity. This is a transition from self-centeredness, a kind of natural state, to God-centeredness. Unable to find value in suffering and death, those who focus on ordinary life try assiduously to avoid them. The consequences of this resistance to the transcendent, found in this uncritical embrace of ordinary life, are not so much epistemic as moral and spiritual. Ordinary life virtues emphasize benevolence and solidarity. But modern individuals, trying to meet these demands, experience instead a growing sense of anger, futility, and even contempt when confronted with the disappointments of actual human performance. This is ordinary life's "dialectics of reception." A transcendental vision, on the other hand, opens up a future for humans that is not a matter of guarantee, but only faith. It is derived from "standing among others in the stream" of God's unconditional love.

The theological principle by which Taylor buttresses this vision is that "Redemption happens through Incarnation." The incarnational and natural "ordinary" requires always the call of a redemptive "beyond" that is the object of our endeavors inspired by faith and hope.

f. Liberation Theology

Liberation theologians, such as Juan Segundo and Leonardo Boff, have drawn their inspiration from the poverty and injustice suffered by peoples of the Third World, particularly Latin America. Drawing from Marx's distinction between theory and practice, Gustavo Gutiérrez, in A Theology of Liberation, argues that theology is critical reflection on the socio-cultural situation in which belief takes place. Ultimately theology is reactive: it does not produce pastoral practice, but it finds the Spirit either present or absent in current practices. The reflection begins by examining how the faith of a people is expressed through their acts of charity: their life, preaching, and the historical commitment of the Church. The reflection also draws from the totality of human history. In a second moment, the reflection provides resources for new practices. Thus it protects the faith of the people from uncritical practices of fetishism and idolatry. Theology thus plays a prophetic role, interpreting historical events with the intention of revealing and proclaiming their profound meaning.

8. References and Further Reading

  • Alston, William. "History of Philosophy of Religion." The Routledge Encyclopedia of Philosophy. Vol. 8. Ed. E. Craig. New York: Routledge, 1998. Pp. 238-248.
    • This article provides a good basic outline of the problem of faith and reason.
  • Asimov, Isaac. Asimov's Biographical Encyclopedia of Science and Technology. Garden City NY: Doubleday, 1964.
    • Much of the above section of Galileo comes from this text.
  • Copleston, Frederick. Medieval Philosophy. New York: Harper, 1952.
  • Helm, Paul, ed. Faith and Reason. Oxford: Oxford University Press, 1999.
    • This text has an excellent set of readings and good introductions to each section. Some of the above treatment of the introductions to each period are derived from it.
  • McInerny, Ralph. St. Thomas Aquinas. Boston: Twayne, 1977.
  • McGrath, Alister, ed. The Christian Theology Reader. Oxford: Basil Blackwell, 1995.
    • This text provided some of the above material on early Christian philosophers.
  • Meagher, Paul, Thomas O'Brien and Consuelo Aherne, eds. Encyclopedic Dictionary of Religion. 3 Vols. Washington DC: Corpus Publications, 1979.
  • Murphy, Nancey. "Religion and Science." The Routledge Encyclopedia of Philosophy. Vol. 8. Ed. E. Craig. New York: Routledge, 1998. Pp. 230-236.
  • Murphy, Nancey. Theology in the Age of Scientific Reasoning. Ithaca NY: Cornell University Press, 1990.
  • Peterson, Michael, William Hasker, Bruce Reichenback, David Basinger. Philosophy of Religion: Selected Readings. Oxford: Oxford University Press, 2001.
    • This text was helpful for the above treatments of Richard Dawkins and Nancey Murphy.
  • Plantinga, Alvin. "Religion and Epistemology." The Routledge Encyclopedia of Philosophy. Vol. 8. Ed. E. Craig. London/New York: Routledge, 1998. Pp. 209-218.
  • Pojman, Louis, ed. Philosophy of Religion: An Anthology. 2nd ed. Belmont CA.: Wadsworth, 1994.
    • This text provides a good introduction to the philosophy of religion. Some of the above treatments of Kant, Pascal, Plantinga, Cahn, Leibniz, Flew, Hare, Mitchell, Wittgenstein, and Hick are derived from its summaries.
  • Pomerleau, Wayne. Western Philosophies of Religion. New York, Ardsley House, 1998.
    • This text serves as the basis for much of the above summaries of Augustine, Aquinas, Descartes, Locke, Leibniz, Hume, Kant, Hegel, Kierkegaard, James, Wittgenstein, and Hick.
  • Rolston, Holmes III. Science and Religion: A Critical Survey. New York: Random House, 1987.
    • This has a good section on the anthropic principle.
  • Solomon, Robert, ed. Existentialism. New York: The Modern Library, 1974.
  • Taylor, Charles. A Catholic Modernity? Oxford: Oxford University Press, 1999.
  • Taylor, Charles. Sources of the Self. Cambridge MA.: Harvard University Press, 1989.
  • Wolterstoff, Nicholas. "Faith." The Routledge Encyclopedia of Philosophy. Vol. 3. Ed. E. Craig. London: Routledge, 1998. Pp. 538-544.
    • This text formed the basis for much of the above treatment of "Reformed Epistemology.

Author Information

James Swindal
Duquesne University
U. S. A.


Qualia are the subjective or qualitative properties of experiences. What it feels like, experientially, to see a red rose is different from what it feels like to see a yellow rose. Likewise for hearing a musical note played by a piano and hearing the same musical note played by a tuba. The qualia of these experiences are what give each of them its characteristic "feel" and also what distinguish them from one another. Qualia have traditionally been thought to be intrinsic qualities of experience that are directly available to introspection. However, some philosophers offer theories of qualia that deny one or both of those features.

The term "qualia" (singular: quale and pronounced "kwol-ay") was introduced into the philosophical literature in its contemporary sense in 1929 by C. I. Lewis in a discussion of sense-data theory. As Lewis used the term, qualia were properties of sense-data themselves. In contemporary usage, the term has been broadened to refer more generally to properties of experience. Paradigm examples of experiences with qualia are perceptual experiences (including nonveridical perceptual experiences like hallucinations) and bodily sensations (such as pain, hunger, and itching). Emotions (like anger, envy, or fear) and moods (like euphoria, ennui, or anxiety) are also usually taken to have qualitative aspects.

Qualia are often referred to as the phenomenal properties of experience, and experiences that have qualia are referred to as being phenomenally conscious. Phenomenal consciousness is often contrasted with intentionality (that is, the representational aspects of mental states). Some mental states—for example, perceptual experiences—clearly have both phenomenal and intentional aspects. My visual experience of a peach on the kitchen counter represents the peach and also has an experiential feel. Less clear is whether all phenomenal states also have intentional aspects and whether all intentional states also have phenomenal aspects. Is there really something that it is like to have the belief—even the occurrent belief—that there is a peach on the counter? What could be the representational content of the experience of an orgasm? Along these lines, the nature of the relationship between phenomenal consciousness and intentionality has recently generated considerable philosophical discussion. Some philosophers think that phenomenal consciousness reduces to intentional content, while others think that the reductive relationship goes in the other direction. Still other philosophers deny both claims.

From the standpoint of introspection, the existence of qualia seems indisputable. It has, however, proved remarkably difficult to accommodate qualia within a physicalist account of the mind. Many philosophers have argued that qualia cannot be identified with or reduced to anything physical, and that any attempted explanation of the world in solely physicalist terms would leave qualia out. Thus, over the last several decades, qualia have been the source of considerable controversy in philosophy of mind.

Table of Contents

  1. The Hard Problem of Consciousness
  2. Qualia and Functionalism
  3. Qualia and Physicalism
  4. Qualia and Representationalism
  5. Eliminativism about Qualia
  6. Naturalistic Dualism
  7. References and Further Reading

1. The Hard Problem of Consciousness

One of the most fundamental questions about the mind concerns its relationship to the body (and, more specifically, its relationship to the brain). This has become known as the mind-body problem. Although it dates back at least to Plato's Phaedo, the problem was thrust into philosophical prominence by René Descartes. In taking up these issues in his Meditations on First Philosophy, Descartes argued for a dualist view according to which the mind and the body are fundamentally different kinds of things: While the body is a material thing existing in space, the mind is an immaterial thing, one that altogether lacks spatial extension. In contrast to dualists, the materialists claim that everything that exists must be made of matter. Historically, materialism was associated with Thomas Hobbes. Starting in the twentieth century, this position has become known as physicalism, the claim that everything that exists—all things and all properties of things—must fundamentally be physical. Most philosophers today endorse some form of physicalism.

For some aspects of consciousness, it is relatively straightforward to see how they can be accommodated within a physicalist picture. Consider, for example, our abilities to access, report on, and attend to our own mental states. It seems reasonable to assume that as neuroscience progresses and we learn more and more about the brain, we will be able to explain these abilities in terms of neural mechanisms. Aspects of consciousness that can be explained in this way constitute what David Chalmers has referred to as the easy problems of consciousness. The assertion that these problems are easy does not mean that they have already been solved or even that we are close to finding solutions. As Chalmers explicitly notes, we should think of "easy" as a relative term. In most cases, we are still nowhere near having a complete explanation of the relevant phenomena. Rather, what makes the problems easy is that, even though the solutions to these problems probably still require decades or even centuries of difficult empirical investigation, we nonetheless have every reason to believe that we can reach them using the standard methods of cognitive science and neuroscience. (Chalmers 1995, 1996) Solving the problem of attention, for example, simply awaits the empirical identification of a relevant neural mechanism. But what kind of mechanism could account for qualia? Though we strongly suspect that the physical system of the brain gives rise to qualia, we do not have any understanding of how it does so. The problem of accounting for qualia has thus become known, following Chalmers, as the hard problem of consciousness.

The hard problem of consciousness relates quite closely to what Joseph Levine had previously referred to as the explanatory gap. Given the scientific identification of heat with the motion of molecules, there is no further explanation that needs to be given: "our knowledge of chemistry and physics makes intelligible how it is that something like the motion of molecules could play the causal role we associate with heat…. Once we understand how this causal role is carried out there is nothing more we need to understand." (Levine 1983) In contrast, when we are told that pain is to be identified with some neural or functional state, while we have learned quite a bit, there is still something left unexplained. Suppose, for example, that we precisely identify the neural mechanism that accounts for pain—C-fiber firing, let's say. Still, a further question would remain: Why does our experience of pain feel the way that it does? Why does C-fiber firing feel like this, rather than like that, or rather than nothing at all? Identifying pain with C-fiber firing fails to provide us with a complete explanation along the lines of the identification of heat with the motion of molecules.

Some philosophers have claimed that closing the explanatory gap and fully accounting for qualia is not merely hard but rather impossible. This position, often referred to as new mysterianism, is most closely associated with Colin McGinn. According to McGinn, we will in principle never be able to resolve the mystery of what it is about the brain that accounts for qualia. (McGinn 1989) A similar, though slightly weaker, view is held by Thomas Nagel. According to Nagel, we currently do not have the conceptual apparatus necessary to even begin to understand how physicalism might be true. In order to solve the hard problem of consciousness, we would have to undergo a complete overhaul of our entire conceptual apparatus—a conceptual revolution so radical that we cannot even begin to conceive what the resulting concepts would be like. (Nagel 1998) But other philosophers reject the pessimism of the new mysterians as unwarranted or premature. Chalmers, for example, suggests that an explanation of how consciousness relates to the physical, even if it does not reduce to it, may well be enlightening. (See Chalmers 1996, 379)

It is perhaps easiest to see why the hard problem of consciousness is so hard by looking at particular attempts to account for qualia. The following three sections review three different theories of mental states—functionalism, physicalism, and representationalism—and the problems they face in accounting for qualia.

2. Qualia and Functionalism

The contemporary debate about qualia was framed in large part by discussions of functionalism in the late 1960s and early 1970s. Some attention had earlier been paid to qualia in connection with type identity theory, the view that mental state types could be identified with physical state types (for example, the mental state type pain might be identified with the neural state type C-fiber firing). But it was with the emergence of functionalism as a theory of mind that the debate about qualia began to heat up.

The intuition underlying the functionalist view is that the function of a mental state is its defining feature. Mental states are defined in terms of the causal role that they play in the entire system of the mind—that is, in terms of their causal relations to sensory stimuli, behavioral outputs, and other mental states. By defining mental states in this way, functionalism avoids many of the objections aimed at philosophical behaviorism, an early 20th century theory of mental states that defines them simply in terms of their input-output relations. Moreover, because a causal role can be defined independently of its physical realization (that is, because functional states are multiply realizable), functionalism avoids many of the objections aimed at the type identity theory. Rather than define pain in terms of C-fiber firing, functionalism defines pain in terms of the causal role it plays in our mental life: causing avoidance behavior, warning us of danger, etc., in response to certain environmental stimuli.

As plausible as functionalism may seem, however, it has long faced the charge that it is unable to account adequately for qualia. The causal role of a state seems to come apart from its qualitative aspects. To show this, opponents of functionalism have mounted two different kinds of arguments: (1) those aiming to show that two systems might be functionally identical even though only one of them has any qualia at all, and (2) those aiming to show that two systems might be functionally identical even though they have vastly different qualia from one another.

Falling in the first of these two categories, the absent qualia argument tries to establish that a system could instantiate the functional state of, say, pain without having any pain qualia. Ned Block originated this objection to functionalism with the thought experiment of the homunculi-headed robot (Block 1978). Suppose a billion people were recruited to take part in a giant experiment. Each individual is given a very small task to perform—for example, to press a certain button when a certain light comes on. In doing so, each of them plays the causal role of an individual neuron, with the communications between them mirroring the synaptic connections among the neurons. Now suppose that signals from this network of people are appropriately connected to a robot body, so that the signals from the network cause the robot to move, talk, etc. If the network were set up in the right way, then it seems in principle possible that it could be functionally equivalent to a human brain. However, intuitively speaking, it seems very odd to attribute qualia to the robot. Though it might be in a state functionally equivalent to the state you are in when you have a pain in your right toe, it seems implausible to suppose that the robot is feeling pain. In fact, it seems implausible to suppose that the robot could have any phenomenal experience whatsoever. Thus, if the absent qualia objection is right, we can have functional equivalence without qualitative equivalence, so qualia escape functional explanation.

A related objection, falling into the second category, is the inverted qualia argument against functionalism, which arises from considering a possibility originally suggested by John Locke. Suppose that two people, Norma and Abby, are qualitatively inverted with respect to one another. Both of them refer to stop signs, Coke cans, and Elmo as "red," and both refer to sugar snap peas, Heineken bottles, and Kermit the Frog as "green." But Abby's phenomenal experience when she sees a Coke can is like Norma's phenomenal experience when she sees a Heineken bottle. When Norma sees the Coke can, she has a reddish experience; when Abby sees the Coke can, she has a greenish experience. Likewise, when Norma sees the Heineken bottle, she has a greenish experience; but when Abby sees the Heineken bottle, she has a reddish experience. Qualitatively, the two are inverted relative to each other.

Though most people find this scenario conceptually coherent, the functionalist can make no sense of this inversion. Abby and Norma both refer to the Coke can as "red." They both indicate that it is the same color as stop signs and ripe tomatoes. Functionally speaking, there is nothing to differentiate the states that Abby and Norma are in when they see the Coke can. But, by hypothesis, they have different qualitative experiences when they see the Coke can. Thus, it looks as if functional definitions of mental states leave out the qualitative aspects of mental states.

In response to these qualia-related objections, the functionalist might try to argue that we have not really imagined the scenarios that we think we have imagined. For example, can we really imagine what would happen if we had a billion people participating in a network to operate the robot? (In fact, even a billion people would not be enough to simulate the human brain, which is estimated to have 100 billion neurons.) Along these lines, William Lycan (1995, 50-52) argues that our intuition that the robot does not have qualia stems from a misguided focus on each microscopic part of the system rather than on the macroscopic system as a whole. Likewise, the functionalist might offer considerations to show that, contrary to how it first seems, the notion of behaviorally undetectable qualia inversion is not conceptually coherent after all. For example, because saturated yellow is brighter than saturated blue, the inversion between Norma and Abby would be detectable if they were both shown patches of saturated blue and saturated yellow and asked which was brighter. (See Tye 1995, 203-4)

Alternatively, if the functionalist cannot convince us that the absent qualia and inverted qualia scenarios are incoherent, he might instead narrow the scope of the theory, restricting it to mental states that are not qualitative. As John Haugeland argues, we can "segregate" the states that can be functionalized from the states that cannot: "if felt qualities are fundamentally different, so be it; explaining them is somebody else's business." (Haugeland 1978, 222) However, while this kind of segregation might save functionalism as a theory of cognition, it does so only by ignoring the hard problem of consciousness.

3. Qualia and Physicalism

As described above, the absent qualia objection and the inverted qualia objection specifically target functionalism, but they can be generalized to apply to physicalism more broadly. For the inverted qualia argument, the generalization is straightforward. Just as we can conceive of Abby and Norma being in functionally identical states, it does not seem implausible to suppose that their brains might be physically identical to one another. If so, then just as qualia escape functional explanation, they also escape physical explanation.

The generalization is less straightforward with the absent qualia argument. The homunculi-headed robot, though functionally identical to a human being, is not physically identical to a human being. However, in recent work, Chalmers has argued that we can conceive of what he terms "zombies"—beings who are molecule-for-molecule identical with phenomenally conscious beings but who are not themselves phenomenally conscious. In appearance and action, a conscious being and his zombie replica would be indistinguishable, but for the zombie, as Chalmers says, "all is dark inside." (Chalmers 1996, 96) When Zack and Zombie Zack each take a bite of chocolate cake, they each have the same reaction—they smile, exclaim how good it is, lick their lips, and reach for another forkful. But whereas Zack, a phenomenally conscious being, is having a distinctive (and delightful) qualitative experience while tasting the chocolate cake, Zombie Zack is experiencing nothing at all. This suggests that Zack's consciousness is a further fact about him, over and above all the physical facts about him (since all those physical facts are true of Zombie Zack as well). Consciousness, that is, must be nonphysical.

Chalmers' argument has the standard form of a conceivability argument, moving from a claim about conceivability to a claim about metaphysical possibility. Though zombies are probably not physically possible—not possible in a world that has laws of nature like our world—the fact that they are conceivable is taken to show that there is a metaphysically possible world in which they could exist. This form of argument is not entirely uncontroversial (see, for example, Hill and McLaughlin 1999), and there is also considerable debate about whether Chalmers is right that zombies are conceivable (see, for example, Searle 1997). But if Chalmers is right about the conceivability of zombies, and if this conceivability implies their metaphysical possibility, then it would follow that physicalism is false.

An early and influential discussion of the general problem that qualia pose for physicalism can be found in Thomas Nagel's seminal paper, "What is it like to be a Bat?" (Nagel 1974). Although it might be that not all living creatures have phenomenal experiences, we can be pretty confident that bats do—after all, they are mammals who engage in fairly sophisticated behavior. In Nagel's words, there is something that it is like to be a bat. But the physiology of bats is radically different from the physiology of human beings, and the way they interact with the world is radically different from the way that we interact with the world. What we do via vision, they do via echolocation (sonar). We detect objects by sight; bats detect objects by sending out high-frequency signals and detecting the reflections from nearby objects. Because this way of perceiving the world is so different from our own, it seems that their perceptual experiences must be vastly different from our own—so different, in fact, that Nagel argues that it is unimaginable from our perspective. We, who are not bats, cannot know what it is like to be a bat. Qualia are inherently subjective, and as such, Nagel argues that they cannot be accommodated by physicalism: "Every subjective phenomenon is essentially connected with a single point of view, and it seems inevitable that an objective, physical theory will abandon that point of view." (Nagel 1974, 520)

Related worries about physicalism and qualia have been forcefully developed by Frank Jackson in his well-known thought experiment involving Mary, a brilliant color scientist who has spent her entire life in a black-and-white room. (Jackson 1982) Although she has normal color vision, her confinement has prevented her from ever having any color sensations. While in the room, Mary has studied color science through black-and-white textbooks, television, etc. And in that way she has learned the complete physical story about color experience, including all the physical facts about the brain and its visual system. She knows all the physical facts about color. But she has never seen anything in color. Now suppose that Mary is one day released from her room and presented with a ripe tomato. What should we imagine happens? Most people have the very strong intuition that Mary learns something from this perceptual experience. "Aha!" she might say. "Now I finally know what the color red is like."

The Mary case is the centerpiece of Jackson's knowledge argument against physicalism. While in the room, Mary knew all the physical facts about color, including the color red. When she is released from the room, Mary learns something about the color red, namely, what seeing red is like. What Mary learns consists of new, factual information. So there are facts about color in addition to all the physical facts about color (since Mary already knew all the physical facts about color). Thus, the argument goes, physicalism is false.

In the quarter century since Jackson's development of the knowledge argument, a vast literature has developed in response to it. Attempting to save physicalism, some philosophers deny that Mary learns anything at all when she leaves the room. If we really imagine that Mary has learned all the physical facts about color while in the room, then there would be no "Aha!" moment when she is shown a ripe tomato. We are led to think otherwise only because we typically fall short of imagining what we've been asked to imagine—we imagine only that Mary knows an immense amount about colors, that she has mastered all the information contained in our present science of color, which still remains incomplete. As Patricia Churchland has argued, "How can I assess what Mary will know and understand if she knows everything there is to know about the brain? Everything is a lot, and it means, in all likelihood, that Mary has a radically different and deeper understanding of the brain than anything barely conceivable in our wildest flights of fancy." (P.S. Churchland 1986, 332; see also Dennett 1991, 399-400)

Despite these reservations about what happens when Mary leaves the room, most philosophers—even most physicalists—accept Jackson's assessment that Mary learns something from her experience with the ripe tomato. Physicalists who grant this point have typically pursued one of two strategies in response to the knowledge argument: (1) they might grant that Mary gains new knowledge but deny that it is factual knowledge; or (2) they might grant that Mary's knowledge is factual but deny that the fact she learns is a new one; rather, a fact she already knew is presented to her in a new way.

To pursue strategy (1), the physicalist might argue that the knowledge Mary gains when she leaves the room consists in nonfactual knowledge. Along these lines, David Lewis (1988) offers the ability hypothesis: When Mary leaves the room, all that happens is that she gains some new abilities regarding color that she didn't have before. Unlike before, Mary is now able to imagine, recognize, and remember the color red. So she gains know-how, but she doesn't learn any facts. Pursuing strategy (1) in a different way, Earl Conee (1994) offers the acquaintance hypothesis: When Mary leaves the room, all that happens is that she becomes acquainted with the color red. When you meet someone for the first time that you've previously heard or read a lot about, you don't necessarily learn any facts about them; rather, you just become acquainted with them. Conee thus argues that acquaintance knowledge (like ability knowledge) should not be understood in terms of facts. If either the ability hypothesis or the acquaintance hypothesis is right, and Mary does not learn any facts when she leaves the room, then the knowledge argument does not show that the physical facts are incomplete.

To pursue strategy (2), the physicalist might argue that Mary doesn't gain any new knowledge when she leaves the room; rather, she simply comes to apprehend an old fact under a new guise. While in the room, she did not have the conceptual apparatus she needed in order to apprehend certain color facts in a phenomenal way. Having seen color, she has now gained new concepts—phenomenal concepts—and thus is able to re-apprehend the same facts she already knew in a different way. (Loar 1990) Whether there are genuinely phenomenal concepts, and if so, whether they do the work in answering the knowledge argument that the physicalists want them to, has recently been generating a growing literature of its own.

4. Qualia and Representationalism

While functionalism and physicalism are put forth as general theories of mind, representationalism aims specifically to give an account of qualia. According to this view, the qualitative character of our phenomenal mental states depends on the intentional content of such states. Representationalist views divide into two categories depending on exactly how they characterize this dependence. Weak representationalism makes a claim only about supervenience: The qualitative character of our mental states supervenes on the intentional content of those states (that is, if two experiences are alike representationally, then they are alike phenomenally). Strong (or pure) representationalism makes a further claim: The qualitative character of our mental states consists in the intentional content of such states. Strong representationalism thus offers a theory of qualia—it attempts to explain what qualitative character is. This section addresses the strong representationalist theory of qualia; hereafter, the modifier "strong" will be omitted.

Recall the distinction above between the easy problems of consciousness and the hard problem. Accounting for representational content is supposed to be one of the easy problems. It may take us an enormous amount of empirical work to get to the solution, but the standard methods of cognitive science will be able to apply. Thus, if qualia can be reduced to intentionality, then we have turned the hard problem of consciousness into an easy problem. A full and satisfactory account of qualia awaits only a solution to the easy problem of intentionality.

Consider pain qualia. Traditionally, philosophers classified pain experiences as non-intentional. However, the representationalist claims that this is a mistake. When one has a pain in one's leg, the experience represents damage in the leg. Moreover, its phenomenal feel—its painfulness—consists in its doing so. As Michael Tye argues, "[T]he phenomenal character of my pain intuitively is something that is given to me via introspection of what I experience in having the pain. But what I experience is what my experience represents. So, phenomenal character is representational." (Tye 1990, 338)

Given that the representationalist typically does not want to claim that all intentional content is qualitative, he must explain what is special about the intentional content in which phenomenal character is supposed to consist. My belief that Thomas the Tank Engine is blue and my mental image of Thomas the Tank Engine have similar intentional content; they both represent him as blue. So, what about the intentional content of the latter gives it its distinctive phenomenology? Here Tye has a particularly well-developed answer. He suggests that phenomenal content is a species of nonconceptual intentional content, in particular, nonconceptual intentional content that is poised and abstract. (Tye 1995) Because we can experience many things for which we lack concepts—for example, a proud parent might visually experience his young child's drawing without possessing a concept for the particular shape the drawing depicts—it is important that phenomenal content be restricted to nonconceptual content. The requirement that the contents be poised means that they "stand ready and in position to make a direct impact on the belief/desire system." (Tye 1995, 138) The requirement that the contents be abstract means that no particular concrete object is a part of them.

In support of their theory, representationalists often invoke what we might call the transparency thesis. According to this thesis, experience is alleged to be transparent in the sense that we "see" right through it to the object of that experience, analogously to the way that we see through a pane of glass to whatever is on the other side of it. Gilbert Harman introduced such considerations into the contemporary debate about qualia in a now-famous passage: "When Eloise sees a tree before her, the colors she experiences are all experienced as features of the tree and its surroundings. None of them are experienced as intrinsic features of her experience. Nor does she experience any features of anything as intrinsic features of her experiences." (Harman 1990, 667) As Harman went on to argue, the same is true for all of us: When we look at a tree and then introspect our visual experience, all we can find to attend to are features of the presented tree. Our experience is thus transparent; when we attend to it, we can do so only by attending to what the experience represents. Representationalists contend that their theory offers the best and simplest possible explanation of this phenomenon. The best explanation of the fact that we cannot introspectively find any intrinsic features of our experience is that there are none to find; the phenomenal character of experience is wholly constituted by the representational content of the experience. (see especially Tye 1995, 2000)

Whether experience is really transparent in the way that the representationalists suppose has lately been the subject of some dispute, and there has also been considerable discussion about the relationship between experiential transparency and representationalism (See, for example, Kind 2003, Siewert 2004). Most problematic for the representationalists, however, has been the fact that their view falls victim to several persistent and compelling counterexamples. Many phenomenal states simply do not seem to be doing any representing—or, more cautiously, it seems that their phenomenal content far outruns their representational content. Ned Block has argued this point using the example of the orgasm: "Orgasm is phenomenally impressive and there is nothing very impressive about the representational content that there is an orgasm." (Block 2003, 543) He also discusses phosphene experiences, the color sensations created by pressure on the eyeball when one's eyelids are closed. Phosphene experiences do not seem to be representing anything; we don't take the experience to suggest that there are colored moving expanses out there somewhere.

Consider also the experience of seeing something flying overhead and hearing something flying overhead. While these two experiences have quite different phenomenal characters, their representational contents are plausibly the same: there's something flying overhead. (The most obvious way of differentiating them—by talking of the "way" of representing—brings in something nonrepresentational.) If this is right, then phenomenal character does not supervene on representational character. In response to objections of this sort, intramodal representationalists restrict their view so that it applies only within a given sensory modality. Unlike intermodal representationalists, who claim that all phenomenal differences, even differences between sensory modalities, can be explained in terms of representational content, intramodal representationalists think that we must offer some additional explanation to account for what makes a phenomenal experience auditory rather than visual, or visual rather than tactile. Typically, this additional explanation is provided in functionalist terms. (See Lycan 1996, esp. 134-35)

Along with these sorts of counterexamples, representationalism also falls victim to a version of the inverted qualia argument: the case of Inverted Earth (Block 1990). On Inverted Earth, the colors of objects are inverted relative to Earth. Ripe tomatoes are green; unripe tomatoes are red. Big Bird is blue; the Cookie Monster is yellow. Other than this color inversion, everything else on Inverted Earth is exactly like Earth. Now imagine that, without your knowledge, you are fitted with color-inverting lenses and transported to Inverted Earth. Since the lens-inversion cancels out the inversion of colors of Inverted Earth, you are unable to detect that you're in a different environment. When you look at the sky on Inverted Earth, you have a blue experience even though the sky there is yellow; when you look at the green ripe tomatoes, you have a red experience. While originally on Earth, your red experience while looking at ripe tomatoes represented red. But according to Block, after enough time passes and you have become embedded in the linguistic and physical environment of Inverted Earth, your reddish experience while looking at ripe tomatoes represents green (since that is the color of the ripe tomatoes on Inverted Earth). If Block's description of the Inverted Earth case is correct, then two experiences having identical qualitative character can differ in their intentional contents; thus, qualia do not supervene on intentional content and representationalism must be false.

In response to the Inverted Earth scenario, representationalists often adopt a teleological account of intentionality according to which the intentional contents of an individual's qualitative states are determined by the evolutionary history of its species. This allows them to reject Block's assertion that your intentional contents switch to match the Inverted Earthlings' intentional contents. Humans have evolved such that red experiences represent red things. Thus, no matter how long you spend on Inverted Earth, the intentional contents of your reddish experiences will never switch to match the intentional contents of the Inverted Earthlings.

A completely different source of worry about representationalism has been raised by John Searle. Searle agrees with the representationalist that there is a close connection between phenomenal consciousness and intentionality, but he thinks that the representationalist gets the explanatory connection backwards. Rather than explain consciousness in terms of intentionality, Searle claims that we need to explain intentionality in terms of consciousness: "There is a conceptual connection between consciousness and intentionality that has the consequence that a complete theory of intentionality requires an account of consciousness." (Searle 1992, 132) Recent work by George Graham, Terry Horgan, and John Tienson argues along similar lines. On their view, "the most fundamental, nonderivative sort of intentionality is fully constituted by phenomenology." (Graham and Horgan 2008, 92; see also Horgan and Tienson 2002)

5. Eliminativism about Qualia

Rather than trying to find some way to fit qualia into a physicalist theory of mind, some philosophers have taken an entirely different attitude towards qualia. They deny that qualia exist. This position is known as eliminativism about qualia, and it commonly constitutes a part of a larger eliminativist project about mental states in general. For example, Paul and Patricia Churchland have argued (both together and individually) that as we gain more and more neuroscientific understanding of our mental lives, we will come to see that our current mental state concepts—belief, pain, sensation, qualia, etc.—all need to be discarded.

The Churchlands offer numerous useful analogies to help make this point. To consider just one of their examples: Ptolemaic theory placed the Earth at the center of the universe, around which a giant celestial sphere revolved. This created all sorts of difficult problems in need of solutions, like determining the cause of the sphere's rotation. When Newtonian theory displaced Ptolemaic theory, the notion of the celestial sphere was completely discarded. It wasn't that Ptolemaic theorists had an inadequate account of the celestial sphere; rather, what was discovered was that there was no celestial sphere. Thus, the problem of what causes the sphere's movement turned out to be a pseudo-problem. Similarly, the Churchlands predict that as our neuroscientific knowledge increases, we will come to see that the problem of qualia is a pseudo-problem, because we will come to see that there are no qualia—at least not as presently understood. Just as the celestial sphere did not turn out to be identifiable with or reducible to some element of Newtonian theory, qualia will not turn out to be identifiable with or reducible to some element of future neuroscientific theory. Rather, the concept will have to be eliminated entirely. (P.S. Churchland 1986, 292-293; P.M. Churchland 1984, 43-45)

Insofar as eliminative materialism merely makes a prediction about what will happen once we increase our neuroscientific knowledge, it is hard to evaluate. However, Daniel Dennett offers related arguments for eliminativism designed to show that our notion of qualia is so internally inconsistent that we are hopelessly misguided in trying to retain it. According to Dennett, there are no properties that meet the standard conception of qualia (that is, properties of experience that are intrinsic, ineffable, directly and/or immediately introspectible, and private). He reaches this conclusion by considering numerous thought experiments designed to tease out the alleged confusions inherent in our concept of qualia. For example, consider two coffee drinkers, Chase and Sanborn. Both discover one day that they no longer like the Maxwell House coffee they've long enjoyed. Chase claims: "Even though the coffee still tastes the same to me, I now no longer like that taste." In contrast, Sanborn claims: "The coffee now tastes different to me, and I don't like the new taste." But, asks Dennett, how do they know this? Perhaps Chase's taste receptors have changed so gradually that he hasn't noticed a change in taste; that is, perhaps he's really in the situation that Sanborn purports to be in. Or perhaps Sanborn's standards have changed so gradually that he hasn't noticed that he now employs different criteria in evaluating the coffee; that is, perhaps he's really in the situation that Chase purports to be in. There seems to be no first-personal way for Chase and Sanborn to settle the matter, calling into question the idea that they have any kind of direct or special access to private properties of their experience. We might try to devise some behavioral tests to detect the difference, but if we could do so, that would suggest that qualia could be defined relationally, in reference to behavior, and this would call into question the idea that they are intrinsic.
Thus, concludes Dennett, our conception of qualia is so confused that it would be "tactically obtuse" to try to salvage the notion; rather, we should just admit that "there simply are no qualia at all." (Dennett 1988)

6. Naturalistic Dualism

There is at least one further option available to philosophers when confronting the hard problem of consciousness. Without denying the reality of qualia, one might simply accept that they resist reduction in physical, functional, or representational terms and embrace some form of dualism. This is David Chalmers' own approach to the hard problem. Because he believes that we can account for phenomenal consciousness within a solely natural framework, he adopts what he refers to as naturalistic dualism.

Descartes' dualism was a version of substance dualism. According to Descartes, the mind is an immaterial substance existing independently of the body. In contrast, Chalmers' dualism is a version of property dualism. This view does not posit the existence of any nonphysical or immaterial substances, but instead posits the existence of properties—qualia—that are ontologically independent of any physical properties. Though these properties are not entailed by physicalism (that is, though they do not logically supervene on physical properties) they may nonetheless somehow arise from them. As Chalmers describes his view: "[C]onsciousness arises from a physical substrate in virtue of certain contingent laws of nature, which are not themselves implied by physical laws." (Chalmers 1996, 125)

Physics postulates a number of fundamental features of the world: mass, spin, charge, etc. Naturalistic dualism adds nonphysical phenomenal properties to this list. Correspondingly, it holds that alongside the fundamental physical laws we must posit further fundamental laws governing the behavior of these phenomenal features. We don't presently understand exactly what these new laws and the completed theory containing them will look like, and Chalmers admits that developing such a theory will not be easy, but in principle it should be possible to do so.

This commitment to lawfulness is what allows Chalmers to remain within a naturalistic framework, even as he abandons the physicalistic framework. On his view, "the world still consists in a network of fundamental properties related by basic laws, and everything is to be ultimately explained in those terms. All that has happened is that the inventory of properties and laws has been expanded [beyond the physical properties and laws]." (Chalmers 1996, 127-8) In a similar spirit, Gregg Rosenberg has recently offered a view he calls liberal naturalism. Though liberal naturalism holds that the fundamental properties of the world "are mutually related in a coherent and natural way by a single set of fundamental laws," it denies that these properties and laws can all be completely captured in physical terms. (Rosenberg 2004, 9)

In giving up physicalism, naturalists argue that we can retain almost everything that's important about our current scientific worldview. But the adoption of nonphysicalistic naturalism typically leads in two directions that many have thought problematic. First, it seems to imply panpsychism, the view that everything in the universe has consciousness. Once you accept the existence of nonphysical features of the world that are fundamental, it is hard to find a principled way of limiting exactly where those fundamental features are found. As Chalmers admits, "if experience is truly a fundamental property, it seems natural for it to be widespread." (Chalmers 1996, 297; see also Nagel 1979) Second, it seems to commit one to epiphenomenalism, the view that qualia lack any causal power whatsoever. Intuitively, we believe that the qualitative character of pain—the fact that it hurts—causes us to react the way that we do when we feel pain. But if qualia are epiphenomenal, then the painfulness of pain is causally inert.

In addressing the first of these two worries, Chalmers denies that naturalistic dualism entails panpsychism. Though he recognizes that it provides a particularly elegant way of working out the details of the view that experience supervenes naturally on the physical, he believes that there remains the possibility that those details could be worked out another way. Benjamin Libet, for example, offers a theory that sees consciousness as fundamental without endorsing panpsychism (Libet 1996). In contrast to Chalmers and Libet, Rosenberg concedes that nonreductive naturalism will most likely require us to adopt at least a weak form of panpsychism, and he offers arguments to show why this consequence should not be seen as threatening.

Even if naturalism leads only to a mild form of panpsychism, however, most contemporary philosophers would find this extremely problematic. How could blades of grass, or rocks, or atoms be conscious? Panpsychism is almost universally regarded with skepticism, if not outright scorn. Colin McGinn, for example, has claimed that panpsychism is "metaphysically and scientifically outrageous." (McGinn 1996, 34) Similarly, in reaction to Chalmers' panpsychist musings, John Searle calls panpsychism "absurd" and claims that there is "not the slightest reason" to adopt it. (Searle 1997, 161)

The worries about epiphenomenalism are no less troublesome for the naturalist than are the worries about panpsychism. Intuitively speaking, qualia are important aspects of our mental lives. The itchiness of an itch makes us scratch, the delicious taste of chocolate leads us to reach for another piece, the wrenching feeling of grief erupts in a flood of tears. But if qualia are physically irreducible, then it seems they must be left out of the causal explanations of our actions. We typically assume that the physical world is causally closed; all physical events, including bodily movements, can be given complete causal explanations in wholly physical terms. Unless we reject causal closure, and assuming we do not want to embrace the possibility of causal overdetermination, qualia have no role to play in the causal story of our actions.

We can easily see why naturalism leads to epiphenomenalism by reconsidering the zombie world. By hypothesis, your zombie twin is behaviorally indistinguishable from you despite having no qualia. His actions can be causally explained entirely by the physical workings of his brain. But he's a molecule-for-molecule duplicate of you, so the physical workings of your brain can provide a complete causal explanation of your actions. Your qualia play no role in causing the actions that you perform.

Chalmers addresses the threat of epiphenomenalism in two ways. First, he suggests that our inadequate understanding of the nature of causation may here be leading us astray: "it is possible that when causation is better understood we will be in a position to understand a subtle way in which consciousness may be relevant." (Chalmers 1996, 150) Second, he tries to show that epiphenomenalism may not be as unpalatable as many have thought. In particular, he argues that we have no reason to reject epiphenomenalism except that it seems counterintuitive; there are no effective arguments against it. (See also Jackson 1982.) Moreover, given the fatal flaws that threaten the competing alternatives to naturalistic dualism, it may turn out that accepting some degree of counterintuitiveness is the small price we have to pay in order to develop a coherent and unmysterious view of consciousness and its place in nature.

7. References and Further Reading

  • Block, N. 2007. Consciousness, Function, and Representation. Cambridge, Mass.: The MIT Press.
    • A very useful collection bringing together Block's impressive body of work in philosophy of mind on issues relating to functionalism, qualia, and consciousness.
  • Block, N. 2003. "Mental Paint." In Martin Hahn and Bjorn Ramberg, eds., Reflections and Replies: Essays on the Philosophy of Tyler Burge, 165-200. Cambridge, Mass.: The MIT Press. Reprinted in Block 2007, 533-563; page references are to the reprinted version.
    • A helpful characterization of the issues surrounding representationalism (which Block calls representationism) and a defense of a qualia realist view he calls phenomenism.
  • Block, N. 1994. "Qualia." In Samuel Guttenplan, ed., A Companion to the Philosophy of Mind, 514-520. Oxford: Blackwell Publishers. Reprinted in Block 2007, 501-510.
  • Block, N. 1990. "Inverted Earth." In James Tomberlin, ed., Philosophical Perspectives 4, Action Theory and Philosophy of Mind, 53-79. Atascadero, Calif.: Ridgeview. Reprinted in Block 2007, 511-532.
    • A reply to Harman's "The Intrinsic Quality of Experience." This paper introduces the much-discussed Inverted Earth thought experiment, a version of the inverted qualia argument targeting representationalism.
  • Block, N. 1978. "Troubles with Functionalism." In C.W. Savage, ed., Perception and Cognition: Issues in the Foundations of Psychology, 261-326. Reprinted with revision and abridgement in Block 2007, 63-101.
    • An influential work that develops in detail the absent qualia objection to functionalism.
  • Block, N., Flanagan, O., and Guzeldere, G., eds. 1997. The Nature of Consciousness. Cambridge, Mass.: The MIT Press.
    • An anthology collecting much of the classic work on consciousness.
  • Byrne, A. 2001. "Intentionalism Defended." Philosophical Review 110: 199-240.
    • A very useful overview of the issues surrounding representationalism.
  • Chalmers, D. 1996. The Conscious Mind: In Search of a Fundamental Theory. Oxford: Oxford University Press.
    • One of the most important books in philosophy of mind over the last twenty years; introduces and discusses in detail the hard problem of consciousness. Although the book is technical in parts, the most technical sections are indicated by asterisk and can be skipped without losing the overall argument.
  • Chalmers, D. 1995. "Facing Up to the Problem of Consciousness." Journal of Consciousness Studies 2: 200-219.
  • Churchland, P.M. 1984. Matter and Consciousness: A Contemporary Introduction to the Philosophy of Mind. Cambridge, Mass.: The MIT Press.
    • An accessible introductory text to the philosophy of mind, though Churchland's own eliminativist leanings shade his treatment of the issues discussed.
  • Churchland, P.S. 1986. Neurophilosophy: Toward a Unified Science of the Mind-Brain. Cambridge, Mass.: The MIT Press.
  • Churchland, P.M. and Churchland, P.S. 1981. "Functionalism, Qualia, and Intentionality." Philosophical Topics 12: 121-145.
  • Conee, E. 1994. "Phenomenal Knowledge." Australasian Journal of Philosophy 72: 136-150. Reprinted in Ludlow et al., 2004.
    • A classic presentation of Conee's "acquaintance hypothesis" in response to Jackson's knowledge argument.
  • Dennett, D. 1991. Consciousness Explained. Boston: Little, Brown and Company.
  • Dennett, D. 1988. "Quining Qualia." In A. Marcel and E. Bisiach, eds., Consciousness in Contemporary Science, 43-77. Oxford: Oxford University Press.
    • Argues, in Dennett's characteristically jocular style, for eliminativism about qualia.
  • Dretske, F. 1995. Naturalizing the Mind. Cambridge, Mass.: The MIT Press.
    • A sustained argument for representationalism, with sustained discussion of how representation works.
  • Graham, G. and Horgan, T. 2008. "Qualia Realism's Contents and Discontents." In Edmond Wright, ed., The Case for Qualia, 89-107. Cambridge, Mass.: The MIT Press.
  • Harman, G. 1990. "The Intrinsic Quality of Experience." In James Tomberlin, ed., Philosophical Perspectives 4, Action Theory and Philosophy of Mind, 31-52. Atascadero, Calif.: Ridgeview. Reprinted in Block et al., 1997, 663-675; page references are to the reprinted version.
    • Introduces considerations of the transparency of experience into contemporary discussions of qualia.
  • Haugeland, J. 1978. "The Nature and Plausibility of Cognitivism." Behavioral and Brain Sciences 2: 215-260.
  • Hill, C. and McLaughlin, B. 1999. "There are Fewer Things in Reality than are Dreamt of in Chalmers' Philosophy." Philosophy and Phenomenological Research 59: 445-454.
  • Horgan, T. and Tienson, J. 2002. "The Intentionality of Phenomenology and the Phenomenology of Intentionality." In David Chalmers, ed., Philosophy of Mind: Classical and Contemporary Readings, 520-533. Oxford: Oxford University Press.
  • Jackson, F. 1982. "Epiphenomenal Qualia." Philosophical Quarterly 32: 127-136. Reprinted in Ludlow et al., 2004.
    • Jackson's classic paper first laying out the Mary case and the knowledge argument against physicalism.
  • Keeley, B. 2009. "The Early History of the Quale and Its Relation to the Senses." In John Symons and Paco Calvo, eds., Routledge Companion to the Philosophy of Psychology. New York: Routledge Press.
    • Reviews the history of the use of the term "qualia," both before and after C.I. Lewis introduced it into the philosophical literature in roughly its contemporary sense.
  • Kind, A. 2003. "What's So Transparent About Transparency?" Philosophical Studies 115: 225-244.
  • Levine, J. 1983. "Materialism and Qualia: The Explanatory Gap." Pacific Philosophical Quarterly 64: 354-361.
  • Lewis, C.I. 1929. Mind and the World Order. New York: Charles Scribner's Sons.
    • Introduces the term "qualia" in its contemporary sense (introspectible, monadic, subjective properties), though Lewis uses it in the context of sense data.
  • Lewis, D. 1988. "What Experience Teaches." In J. Copley-Coltheart, ed., Proceedings of the Russellian Society 13: 29-57. Reprinted in Ludlow et al., 2004.
    • An influential presentation of the "ability hypothesis" in response to Jackson's knowledge argument.
  • Libet, B. 1996. "Solutions to the Hard Problem of Consciousness." Journal of Consciousness Studies 3: 33-35.
  • Loar, B. 1990. "Phenomenal States." In James Tomberlin, ed., Philosophical Perspectives 4, Action Theory and Philosophy of Mind, 81-108. Atascadero, Calif.: Ridgeview. Revised version reprinted in Ludlow et al., 2004.
  • Ludlow, P., Nagasawa, Y., and Stoljar, D. 2004. There's Something About Mary: Essays on Phenomenal Consciousness and Frank Jackson's Knowledge Argument. Cambridge, Mass.: The MIT Press.
    • An anthology that collects Jackson's original two papers laying out the knowledge argument along with many important papers in response. Also contains Jackson's recent surprising recantation of the original argument, published here for the first time. Jackson now believes that the representationalist view helps us to see how the argument goes wrong.
  • Lycan, W.G. 1996. Consciousness and Experience. Cambridge, Mass.: The MIT Press.
    • A development of Lycan's intramodal representationalism.
  • Lycan, W. 1995. Consciousness. Cambridge, Mass.: The MIT Press.
  • McGinn, C. 1989. "Can We Solve the Mind-Body Problem?" Mind 98: 349-366. Reprinted in Block et al., 1997, 529-542.
    • Defends new mysterianism, that is, the view that the problem of consciousness cannot in principle be solved.
  • McGinn, C. 1996. The Character of Mind (Second Edition). Oxford: Oxford University Press.
  • Nagel, T. 1998. "Conceiving the Impossible and the Mind-Body Problem." Philosophy 73: 337-352.
  • Nagel, T. 1979. "Panpsychism." In Mortal Questions. Cambridge University Press.
  • Nagel, T. 1974. "What is it Like to be a Bat?" Philosophical Review 83: 435-450. Reprinted in Block et al., 1997, 519-527; page references are to the reprinted version.
    • A classic paper arguing that physicalism cannot accommodate the subjective aspects of experience—much-cited and well worth reading.
  • Rosenberg, G. 2004. A Place for Consciousness. Oxford: Oxford University Press.
  • Searle, John. 1997. The Mystery of Consciousness. New York: New York Review of Books.
    • A collection of Searle's essays from The New York Review of Books.
  • Searle, J. 1992. The Rediscovery of the Mind. Cambridge, Mass.: The MIT Press.
    • Argues for a conceptual connection between consciousness and intentionality.
  • Shoemaker, S. 1975. "Functionalism and Qualia." Philosophical Studies 27: 291-315.
    • An interesting argument attempting to show that functionalism can handle inverted qualia. Shoemaker's own view about qualia is somewhat idiosyncratic in that he denies they are directly introspectible.
  • Siewert, C. 2004. "Is Experience Transparent?" Philosophical Studies, 117: 15-41.
  • Tye, M. 2000. Consciousness, Color, and Content. Cambridge, Mass.: The MIT Press.
    • Further development of the representationalist view, including responses to common criticisms of the view.
  • Tye, M. 1995. Ten Problems of Consciousness. Cambridge, Mass.: The MIT Press.
    • Develops a strong representationalist view in an attempt to unravel several puzzling aspects of consciousness (its subjectivity, transparency, etc.).
  • Tye, M. 1995. "A Representational Theory of Pains and their Phenomenal Character." In James Tomberlin, ed., Philosophical Perspectives 9. Atascadero, Calif.: Ridgeview.
    • An early statement of representationalism, here limited specifically to pain.

Author Information

Amy Kind
Claremont McKenna College
U. S. A.

Russell's Metaphysics

Metaphysics is not a school or tradition but rather a sub-discipline within philosophy, as are ethics, logic and epistemology. Like many philosophical terms, “metaphysics” can be understood in a variety of ways, so any discussion of Bertrand Russell’s metaphysics must select from among the various possible ways of understanding the notion, for example, as the study of being qua being, the study of the first principles or grounds of being, the study of God, and so forth. The primary sense of “metaphysics” examined here in connection to Russell is the study of the ultimate nature and constituents of reality.

Since what we know, if anything, is assumed to be real, doctrines in metaphysics typically dovetail with doctrines in epistemology. But in this article, discussion of Russell’s epistemology is kept to a minimum in order to better canvass his metaphysics, beginning with his earliest adult views in 1897 and ending shortly before his death in 1970. Russell revises his conception of the nature of reality in both large and small ways throughout his career. Still, there are positions that he never abandons; particularly, the belief that reality is knowable, that it is many, that there are entities – universals – that do not exist in space and time, and that there are truths that cannot be known by direct experience or inference but are known a priori.

The word “metaphysics” sometimes is used to describe questions or doctrines that are a priori, that is, that purport to concern what transcends experience, and particularly sense-experience. Thus, a system may be called metaphysical if it contains doctrines, such as claims about the nature of the good or the nature of human reason, whose truth is supposed to be known independently of (sense) experience. Such claims have characterized philosophy from its beginnings, as has the belief that they are meaningful and valuable. However, from the modern period on, and especially in Russell’s own lifetime, various schools of philosophy began to deny the legitimacy and desirability of a priori metaphysical theorizing. In fact, Russell’s life begins in a period sympathetic to this traditional philosophical project, and ends in a period which is not. Concerning these “meta-metaphysical” issues (that is, doctrines not in metaphysics but about it and its feasibility), Russell remained emphatically a metaphysician throughout his life. Indeed, in his later work, it is this strand more than doctrines about the nature of reality per se that justifies his being considered as one of the last, great metaphysicians.

Table of Contents

  1. The 1890s: Idealism
    1. Neo-Hegelianism
    2. F. H. Bradley and Internal Relations
    3. Neo-Kantianism and A Priori Knowledge
    4. Russell’s Turn from Idealism to Realism
      1. His Rejection of Psychologism
      2. His Rejection of Internal Relations
  2. 1901-1904: Platonist Realism
    1. What has Being
    2. Propositions as Objects
    3. Analysis and Classes
    4. Concepts’ Dual Role in Propositions
    5. Meaning versus Denoting
    6. The Relation of Logic to Epistemology and Psychology
  3. 1905-1912: Logical Realism
    1. Acquaintance and Descriptive Psychology
    2. Eliminating Classes as Objects
      1. “On Denoting” (1905)
      2. Impact on Analysis
    3. Eliminating Propositions as Objects
    4. Facts versus Complexes
    5. Universals and Particulars
    6. Logic as the Study of Forms of Complexes and Facts
    7. Sense Data and the Problem of Matter
  4. 1913-1918: Occam’s Razor and Logical Atomism
    1. The Nature of Logic
    2. The Nature of Matter
    3. Logical Atomism
      1. The Atoms of Experience and the Misleading Nature of Language
      2. The Forms of Facts and Theory of Truth
      3. Belief as a New Form of Fact
      4. Neutral Monism
  5. 1919-1927: Neutral Monism, Science, and Language
    1. Mind, Matter, and Meaning
    2. Private versus Public Data
    3. Language, Facts, and Psychology
    4. Universals
    5. The Syntactical View
  6. 1930-1970: Anti-positivist Naturalism
    1. Logical Truths
    2. Empirical Truths
    3. A Priori Principles
    4. Universals
    5. The Study of Language
  7. References and Further Reading
    1. Primary Sources
      1. Monographs
      2. Collections of Essays
      3. Articles
      4. The Collected Papers of Bertrand Russell
      5. Autobiographies and Letters
    2. Secondary Sources
      1. General Surveys
      2. History of Analytic Philosophy
      3. Logic and Metaphysics
      4. Meaning and Metaphysics
      5. Beliefs and Facts
      6. Constructions
      7. Logical Atomism
      8. Naturalism and Psychology
      9. Biographies

1. The 1890s: Idealism

Russell’s earliest work in metaphysics is marked by the sympathies of his teachers and his era for a particular tradition known as idealism. Idealism is broadly understood as the contention that ultimate reality is immaterial or dependent on mind, so that matter is in some sense derivative, emergent, and at best conditionally real. Idealism flourished in Britain in the last third of the nineteenth century and first two decades of the twentieth. British idealists such as Bernard Bosanquet, T.H. Green, Harold Joachim, J.M.E. McTaggart and F.H. Bradley – some of whom were Russell’s teachers – were most influenced by Hegel’s form of absolute idealism, though influences of Immanuel Kant’s transcendental idealism can also be found in their work. This section will explore British Idealism’s influence on the young Bertrand Russell.

a. Neo-Hegelianism

Until 1898, Russell’s work on a variety of subjects (like geometry or space and time) is marked by the presumption that any area of study contains contradictions that move the mind into other, related areas that enrich and complete it. This is similar to Hegel’s dialectical framework. However, in Hegel’s work this so-called “dialectic” is a central part of his metaphysical worldview, characterizing the movement of “absolute spirit” as it unfolds into history. Russell is relatively uninfluenced by Hegel’s broader theory, and adopts merely the general dialectical approach. He argues, for example, that the sciences are incomplete and contain contradictions, and that one passes over into the other, as number into geometry and geometry into physics. The goal of a system of the sciences, he thinks, is to reveal the basic postulates of each science, their relations to each other, and to eliminate all inconsistencies but those that are integral to the science as such. (“Note on the Logic of the Sciences,” Papers 2) In this way, Russell’s early work is dialectical and holistic rather than monistic. On this point, Russell’s thinking was probably influenced by his tutors John McTaggart and James Ward, who were both British idealists unsympathetic to Bradley’s monism.

b. F. H. Bradley and Internal Relations

Bradley, most famous for his book Appearance and Reality, defines what is ultimately real as what is wholly unconditioned or independent. Put another way, on Bradley’s view what is real must be complete and self-sufficient. Bradley also thinks that the relations a thing stands in, such as being to the left of something else, are internal to it, that is, grounded in its intrinsic properties, and therefore inseparable from those properties. It follows from these two views that the subjects of relations, considered in themselves, are incomplete and dependent, and therefore ultimately unreal. For instance, if my bookcase is to the left of my desk, and if the relation being to the left of is internal to my bookcase, then being to the left of my desk contributes to the identity or being of my bookcase just as being six feet tall and being brown do. Consequently, it is not unconditioned or independent, since its identity is bound up with my desk’s. Since the truly real is independent, it follows that my bookcase is not truly real. This sort of argument can be given for every object that we could conceivably encounter in experience: everything stands in some relation or other to something else, thus everything is partially dependent on something else for its identity; but since it is dependent, it is not truly real.

The only thing truly real, Bradley thinks, is the whole network of interrelated objects that constitutes what we might call “the whole world.” Thus he embraces a species of monism: the doctrine that, despite appearances to the contrary, no plurality of substances exists and that only one thing exists: the whole. What prevents us from apprehending this, he believes, is our tendency to confuse the limited reality of things in our experience (and the truths based on that limited perspective) with the unconditioned reality of the whole, the Absolute or One. Hence, Bradley is unsympathetic to the activity of analysis, for by breaking wholes into parts it disguises rather than reveals the nature of reality.

The early Russell, who was familiar with Bradley’s work through his teachers at Cambridge, was only partly sympathetic to his views. Russell accepts the doctrine that relations are internal but, unlike Bradley, he does not deny that there is a plurality of things or subjects. Thus Russell’s holism, for example, his view of the interconnectedness of the sciences, does not require the denial of plurality or the rejection of analysis as a falsification of reality, both of which doctrines he opposes even in this early period.

c. Neo-Kantianism and A Priori Knowledge

Russell’s early views are also influenced by Kant. Kant argued that the mind imposes categories (like being in space and time) that shape what we experience. Since Kant defines a priori propositions as those we know to be true independently of (logically prior to) experience, and a posteriori propositions as those whose truth we know only through experience, it follows that propositions about these categories are a priori, since the conditions of any possible experience must be independent of experience. Thus for Kant, geometry contains a priori propositions about categories of space that condition our experience of things as spatial.

Russell largely agrees with Kant in his 1897 Foundations of Geometry, which is based on his dissertation. Other indications of a Kantian approach can be seen, for example, in his 1897 claim that what is essential to matter is schematization under the form of space (“On Matter,” Papers 2).

d. Russell’s Turn from Idealism to Realism

There are several points on which Russell’s views eventually turn against idealism and towards realism. The transition is not sudden but gradual, growing out of discomfort with what he comes to see as an undue psychologism in his work, and out of an increasing awareness of the importance of asymmetrical (ordering) relations in mathematics. The first issue concerns knowledge and opposes neo-Kantianism; the second concerns the nature of relations and the validity of analysis and opposes neo-Hegelianism and monism. The former lends itself to realism and mind/matter dualism, that is, to a view of matter as independent of minds, which apprehend it without shaping it. The latter lends itself to a view of the radical plurality of what exists. Both contribute to a marked preference for analysis over synthesis, as the mind’s way of apprehending the basic constituents of reality. By the time these developments are complete, Russell’s work no longer refers to the dialectic of thought or to the form of space or to other marks of his early infatuation with idealism. Yet throughout Russell’s life there remains a desire to give a complete account of the sciences, a kind of vestige of his earlier views.

i. His Rejection of Psychologism

When Russell begins to question idealism, he does so in part because of the idealist perspective on the status of truths of mathematics. In his first completely anti-idealist work, The Principles of Mathematics (1903), Russell does not reject Kant’s general conception of the distinction between a priori and a posteriori knowledge, but he rejects Kant’s idealism, that is, Kant’s doctrine that the nature of thought determines what is a priori. On Russell’s view, human nature could change, and those truths would then be destroyed, which he thinks is absurd. Moreover, Russell objects that the Kantian notion of a priori truth is conditional, that is, that Kant must hold that 2 + 2 equals 4 only on condition that the mind always thinks it so (Principles, p. 40.) On Russell’s view, in contrast, mathematical and logical truths must be true unconditionally; thus 2 + 2 equals 4 even if there are no intelligences or minds. Thus Russell’s attack on Kant’s notion of the a priori focuses on what he sees as Kant’s psychologism, that is, his tendency to confuse what is objectively true even if no one thinks it, with what we are so psychologically constructed as to have to think. In general, Russell begins to sharply distinguish questions of logic, conceived as closely related to metaphysics, from questions of knowledge and psychology. Thus in his 1904 paper “Meinong’s Theory of Complexes and Assumptions” (Essays in Analysis, pp. 21-22), he writes, “The theory of knowledge is often regarded as identical with logic. This view results from confounding psychical states with their objects; for, when it is admitted that the proposition known is not identical with the knowledge of it, it becomes plain that the question as to the nature of propositions is distinct from all questions of knowledge…. The theory of knowledge is in fact distinct from psychology, but is more complex: for it involves not only what psychology has to say about belief, but also the distinction of truth and falsehood, since knowledge is only belief in what is true.”

ii. His Rejection of Internal Relations

In his early defense of pluralism, external relations (relations which cannot be reduced to properties) play an important role. The monist asserts that all relations within a complex or whole are less real than that whole, so that analysis of a whole into its parts is a misrepresentation or falsification of reality, which is one. It is consonant with this view, Russell argues, to try to reduce propositions that express relations to propositions asserting a property of something, that is, some subject-term (Principles, p. 221.) The monist therefore denies or ignores the existence of relations. But some relations must be irreducible to properties of terms, in particular the transitive and asymmetrical relations that order series, as the quality of imposing order among terms is lost if the relation is reduced to a property of a term. In rejecting monism, Russell argues that at least some relations are irreducible to properties of terms, hence they are external to those terms (Principles, p. 224); and on the basis of this doctrine of external relations, he describes reality as not one but many, that is, composed of diverse entities, bound but not dissolved into wholes by external relations. Since monism tends to reduce relations to properties, and to take these as intrinsic to substances (and ultimately to only one substance), Russell’s emphasis on external relations is explicitly anti-monistic.

2. 1901-1904: Platonist Realism

When Russell rebelled against idealism (with his friend G.E. Moore), he adopted metaphysical doctrines that were realist and dualist as well as Platonist and pluralist. As noted above, his realism and dualism entail that there is an external reality distinct from the inner mental reality of ideas and perceptions, repudiating both the idealist belief that ultimate reality consists of ideas and the materialist view that everything is matter, and his pluralism consists in assuming there are many entities bound by external relations. Equally important, however, is his Platonism.

a. What has Being

Russell’s Platonism involves a belief that there are mind-independent entities that need not exist to be real, that is, to subsist and have being. Entities, or what has being (and may or may not exist), are called terms, and terms include anything that can be thought. In Principles of Mathematics (1903) he therefore writes, “Whatever may be an object of thought,…, or can be counted as one, I call a term. …I shall use it as synonymous with the words unit, individual, and entity. … [E]very term has being, that is, is in some sense. A man, a moment, a number, a class, a relation, a chimera, or anything else that can be mentioned, is sure to be a term….” (Principles, p. 43) Russell links his metaphysical Platonism to a theory of meaning as well as a theory of knowledge. Thus, all words that possess meaning do so by denoting complex or simple, abstract or concrete objects, which we apprehend by a kind of knowledge called acquaintance.

b. Propositions as Objects

Since for Russell words mean objects (terms), and since sentences are built up out of several words, it follows that what a sentence means, a proposition, is also an entity – a unity of those entities meant by the words in the sentence, namely, things (particulars, or those entities denoted by names) and concepts (entities denoted by words other than names). Propositions are thus complex objects that either exist and are true or subsist and are false. So, both true and false propositions have being (Principles, p. 35). A proposition is about the things it contains; for example, the proposition meant by the sentence “the cat is on the mat” is composed of and is about the cat, the mat, and the concept on. As Russell writes to Gottlob Frege in 1904: ‘I believe that in spite of all its snowfields Mont Blanc itself is a component part of what is actually asserted in the proposition “Mont Blanc is more than 4,000 meters high.” We do not assert the thought, for that is a private psychological matter; we assert the object of the thought, and this is, to my mind, a certain complex (an objective proposition, one might say) in which Mont Blanc is itself a component part.’ (From Frege to Gödel, pp. 124-125)

This Platonist view of propositions as objects bears, furthermore, on Russell’s conception of logical propositions. In terms of the degree of abstractness in the entities making them up, the propositions of logic and those of a particular science sit at different points on a spectrum, with logical propositions representing the point of maximum generality and abstraction (Principles, p. 7). Thus, logical propositions are not different in kind from propositions of other sciences, and by a process of analysis we can come to their basic constituents, the objects (constants) of logic.

c. Analysis and Classes

Russell sometimes compares philosophical analysis to a kind of mental chemistry, since, as in chemical analysis, it involves resolving complexes into their simpler elements (Principles, p. xv). But in philosophical analyses, the process of decomposing a complex is entirely intellectual, a matter of seeing with the mind’s eye the simples involved in some complex concept. To have reached the end of such an intellectual analysis is to have reached the simple entities that cannot be further analyzed but must be immediately perceived. Reaching the end of an analysis – that is, arriving at the mental perception of a simple entity, a concept – then provides the means for definition, in the philosophical sense, since the meaning of the term being analyzed is defined in terms of the simple entities grasped at the end of the process of analysis. Yet in this period Russell is confronted with several logical and metaphysical problems. We see from his admission in the Principles that he has been unable to grasp the concept class, which, he sees, leads to contradictions, for example, to Russell’s paradox (Principles, pp. xv-xvi).

Russell’s extreme Platonist realism involves him in several difficulties besides the fact that class appears to be a paradoxical (unthinkable) entity or concept. These additional concerns, which he sees even in the Principles, along with his difficulty handling the notion of a class and the paradoxes surrounding it, help determine the course of his later metaphysical (and logical) doctrines.

d. Concepts’ Dual Role in Propositions

One difficulty concerns the status of concepts within the entity called a proposition, and this arises from his doctrine that any quality or absence of quality presupposes being. On Russell’s view the difference between a concept occurring as such and occurring as a subject term in a proposition is merely a matter of their external relations and not an intrinsic or essential difference in entities (Principles, p. 46). Hence a concept can occur either predicatively or as a subject term. He therefore views with suspicion Frege’s doctrine that concepts are essentially predicative and cannot occur as objects, that is, as the subject terms of a proposition (Principles, Appendix A). As Frege acknowledges, to say that concepts cannot occur as objects is a doctrine that defies exact expression, for we cannot say “a concept is not an object” without seemingly treating a concept as an object, since it appears to be the referent of the subject term in our sentence. Frege shows little distress over this problem of inexpressibility, but for Russell such a state of affairs is self-contradictory and paradoxical, since the concept is an object in any sentence that says it is not. Yet, as he discovers, to allow concepts a dual role opens the way to other contradictions (such as Russell’s paradox), since it makes it possible for a predicate to be predicated of itself. Faced with paradoxes on either side, Russell chooses to risk the paradox he initially sees as arising from Frege’s distinction between concepts and objects in order to avoid more serious logical paradoxes arising from his own assumption of concepts’ dual role. (See Principles, Chapter X and Appendix B.) This issue contributes to his emerging attempt to eliminate problematic concepts and propositions from the domain of what has being. In doing so he implicitly draws away from his original belief that what is thinkable has being, as it is not clear how he can say that items he earlier entertained are unthinkable.

e. Meaning versus Denoting

Another difficulty with Russell’s Platonist realism concerns the way concepts are said to contribute to the meaning of propositions in which they occur. As noted earlier, propositions are supposed to contain what they are about, but the situation is more complex when these constituent entities include denoting concepts, either indefinite ones like a man or definite ones like the last man. The word “human” denotes an extra-mental concept human, but the concept human denotes the set of humans: Adam, Benjamin, Cain, and so on. As a result, denoting concepts have a peculiar role in objective propositions: when a denoting phrase occurs in a sentence, a denoting concept occurs in the corresponding proposition, but the proposition is not about the denoting concept but about the entities falling under the concept. Thus the proposition corresponding to the sentence “all humans are mortal” contains the concept human but is not about the concept per se – it is not attributing mortality to a concept – but is about individual humans. As a result, it is difficult to see how we can ever talk about the concept itself (as in the sentence “human is a concept”), for when we attempt to do so what we denote is not what we mean. In unpublished work from the period immediately following the publication of Principles (for example, “On Fundamentals,” Papers 4) Russell struggles to explain the connection between meaning and denoting, which he insists is a logical and not a merely psychological or linguistic connection.

f. The Relation of Logic to Epistemology and Psychology

In his early work, Russell treats logical questions quite like metaphysical ones and as distinct from epistemological and psychological issues bearing on how we know. As we saw (in section 1.d.i above), in his 1904 “Meinong’s Theory of Complexes and Assumptions” (Papers 4), Russell objects to what he sees as the idealist tendency to equate epistemology (that is, theory of knowledge) with logic, the study of propositions, by wrongly identifying states of knowing with the objects of those states (for example, judging with what is judged, the proposition). We must, he says, clearly distinguish a proposition from our knowledge of a proposition, and in this way it becomes clear that the study of the nature of a proposition, which falls within logic, in no sense involves the study of knowledge. Epistemology is also distinct from and more inclusive than psychology, for in studying knowledge we need to look at psychological phenomena like belief, but since “knowledge” refers not merely to belief but to true belief, the study of knowledge involves investigation into the distinction between true and false and in that way goes farther than psychology.

3. 1905-1912: Logical Realism

Even as these problems are emerging, Russell is becoming acquainted with Alexius Meinong’s psychologically oriented philosophical concerns. At the same time, he is adopting an eliminative approach towards classes and other putative entities by means of a logical analysis of sentences containing words that appear to refer to such entities. These forces together shape much of his metaphysics in this early period. By 1912, these changes have resulted in a metaphysic preoccupied with the nature and forms of facts and complexes.

a. Acquaintance and Descriptive Psychology

Russell becomes aware of the work of Alexius Meinong, an Austrian philosopher who studied with Franz Brentano and founded a school of experimental psychology. Meinong’s most famous work, Über Gegenstandstheorie (1904), or Theory of Objects, develops the concept of intentionality, that is, the idea that consciousness is always of objects, arguing, further, that non-existent as well as existent objects lay claim to a kind of being – a view to which Russell is already sympathetic. Russell’s 1904 essay “Meinong’s Theory of Complexes and Assumptions” (Papers 4) illustrates his growing fascination with descriptive psychology, which brings questions concerning the nature of cognition to the foreground. After 1904, Russell’s doctrine of the constituents of propositions is increasingly allied to epistemological and psychological investigations. For example, he begins to specify acquaintance with various kinds of objects – sensed objects, abstract objects, introspected ones, logical ones, and so forth. Out of this discourse comes the more familiar terminology of universals and particulars, absent from his Principles.

b. Eliminating Classes as Objects

Classes, as Russell discovers, give rise to contradictions, and their presence among the basic entities assumed by his logical system therefore impedes the goal, sketched in the Principles, of showing mathematics to be a branch of logic. The general idea of eliminating classes predates the discovery of the techniques enabling him to do so, and it is not until 1905, in “On Denoting,” that Russell discovers how to analyze sentences containing denoting phrases so as to avoid commitment to the existence of corresponding entities. It is this general technique that he then employs to show that classes need not be assumed to exist, since sentences appearing to refer to classes can be rewritten in terms of properties.

i. “On Denoting” (1905)

For Russell in 1903, the meaning of a word is an entity, and the meaning of a sentence is therefore a complex entity (the proposition) composed of the entities that are the meanings of the words in the sentence. (See Principles, Chapter IV.) The words and phrases appearing in a sentence (like the words “I” and “met” and “man” in “I met a man”) are assumed to be those that have meaning (that is, that denote entities). In “On Denoting” (1905) Russell attempts to solve the problem of how indefinite and definite descriptive phrases like “a man” and “the present King of France,” which denote no single entities, have meaning. From this point on, Russell begins to believe that a process of logical analysis is necessary to locate the words and phrases that really give the sentence meaning and that these may be different from the words and phrases that appear at first glance to comprise the sentence. Despite advocating a deeper analysis of sentences and acknowledging that the words that contribute to their meaning may not be those that superficially appear in the sentence, Russell continues to believe (even after 1905) that a word or phrase has meaning only by denoting an entity.

ii. Impact on Analysis

This has a marked impact on his conception of analysis, which until now has been a kind of discovery of entities. Russell now sometimes means by “analysis” a second process: devising new ways of conveying what a particular word or phrase means, thereby eliminating the need for the original word. Sometimes the result of this second kind of analysis, or construction, is to show that there can be no successful analysis in the first sense – no discovery of an entity – with respect to a particular purported entity. It is not uncommon for Russell to employ both kinds of analysis in the same work. This technique, interwoven with his attempts to eliminate classes, emerges as a tactic that eventually eliminates a great many of the entities he admitted in 1903.

c. Eliminating Propositions as Objects

In 1903, Russell believed subsistence and existence were modalities of those objects called propositions. By 1906, Russell’s attempt to eliminate propositions testifies to his movement away from this view. (See “On the Nature of Truth,” Proc. Arist. Soc., 1906, pp. 28-49.) Russell is already aware in 1903 that his conception of propositions as single (complex) entities is vulnerable to contradictions. In 1906, his worries about propositions and paradox lead him to reject objective false propositions, that is, false subsisting propositions that have being as much as true ones.

In seeking to eliminate propositions Russell is influenced by his success in “On Denoting,” as well as by Meinong. As he adopts the latter’s epistemological and psychological interests, he becomes interested in cognitive acts of believing, supposing, and so on, which in 1905 he already calls ‘propositional attitudes’ (“Meinong’s Theory of Complexes and Assumptions,” Papers 4) and which he hopes can be used to replace his doctrine of objective propositions. He therefore experiments with ways of eliminating propositions as single entities by accounting for them in terms of psychological acts of judgment that give unity to the various parts of the proposition, drawing them together into a meaningful whole. Yet the attempts do not go far, and the elimination of propositions only becomes official with the theory of belief he espouses in 1910 in “On the Nature of Truth and Falsehood” (Papers 6), which eliminates propositions and explains the meaning of sentences in terms of a person’s belief that various objects are unified in a fact.

d. Facts versus Complexes

By 1910 the emergence of the so-called multiple relation theory of belief brings the notion of a fact into the foreground. On this theory, a belief is true if things are related in fact as they are in the judgment, and false if they are not so related.

In this period, though Russell sometimes asks whether a complex is indeed the same as a fact (for example, in the 1913 unpublished manuscript Theory of Knowledge (Papers 7, p. 79)), he does not yet draw the sharp distinction between them that he later does in the 1918 lectures published as the Philosophy of Logical Atomism (Papers 8), and they are treated as interchangeable. That is, no distinction is yet drawn between what we perceive (a complex object, such as the shining sun) and what it is that makes a judgment based on perception true (a fact, such as that the sun is shining). He does, however, distinguish between a complex and a simple object (Principia, p. 44). A simple object is irreducible, while a complex object can be analyzed into other complex or simple constituents. Every complex contains one or more particulars and at least one universal, typically a relation, with the simplest kind of complex being a dyadic relation between two terms, as when this amber patch is to the right of that brown patch. Both complexes and facts are classified into various forms of increasing complication.

e. Universals and Particulars

In this period, largely through Meinong’s influence, Russell also begins to distinguish types of acquaintance – the acquaintance we have with particulars, with universals, and so on. He also begins to relinquish the idea of possible or subsisting particulars (for example, propositions), confining that notion to universals.

The 1911 “On the Relations of Universals and Particulars” (Papers 6) presents a full-blown doctrine of universals. Here Russell argues for the existence of diverse particulars – that is, things like tables, chairs, and the material particles that make them up, which can exist in one and only one place at any given time. But he also argues for the existence of universals, that is, entities like redness that exist in more than one place at any time. Having argued that properties are universals, he cannot rely on properties to individuate particulars, since it is possible for there to be multiple particulars with all the same properties. In order to ground the numerical diversity of particulars even in cases where they share properties, Russell relies on spatial location. It is place or location, not any difference in properties, that most fundamentally distinguishes any two particulars.

Finally, he argues that our perceived space consists of asymmetrical relations such as left and right, that is, relations that order space. As he sees it, universals alone can’t account for the asymmetrical relations given in perception – particulars are needed. Hence, wherever a spatial relation holds, it must hold of numerically diverse terms, that is, of diverse particulars. Of course, there is also need for universals, since numerically diverse particulars cannot explain what is common to several particulars, that is, what occurs in more than one place.

f. Logic as the Study of Forms of Complexes and Facts

Though he eliminates propositions, Russell continues to view logic in a metaphysically realist way, treating its propositions as objects of a particularly formal, abstract kind. Since Russell thinks that logic must deal with what is objective, but he now denies that propositions are entities, he has come to view logic as the study of forms of complexes. The notion of the form of a complex is linked with the concept of substituting certain entities for others in a complex so as to arrive at a different complex of the same form. Since there can be no such substitution of entities when the complex doesn’t exist, Russell struggles to define the notions of form and substitution in a complex in a way that doesn’t rule out the existence of forms in cases of non-existent complexes. Russell raises this issue in a short manuscript called “What is Logic?” written in September and October of 1912 (Papers 6, pp. 54-56). After considering and rejecting various solutions Russell admits his inability to solve difficulties having to do with forms of non-existent complexes, but this and related difficulties plague his analysis of belief, that is, the analysis given to avoid commitment to objective false propositions.

g. Sense Data and the Problem of Matter

An interest in questions of what we can know about the world – about objects or matter – is a theme that begins to color Russell’s work by the end of this period. In 1912 Russell asks whether there is anything that is beyond doubt (Problems of Philosophy, p. 7). His investigation implies a particular view of what exists, based on what it is we can believe with greatest certainty.

Acknowledging that visible properties, like color, are variable from person to person as well as within one person’s experience and are a function of light’s interaction with our visual apparatus (eyes, and so forth), Russell concludes that we do not directly experience what we would normally describe as colored – or more broadly, visible – objects. Rather, we infer the existence of such objects from what we are directly acquainted with, namely, our sense experiences. The same holds for other sense-modalities, and the sorts of objects that we would normally describe as audible, scented, and so forth. For instance, in seeing and smelling a flower, we are not directly acquainted with a flower, but with the sense-data of color, shape, aroma, and so on. These sense-data are what are immediately and certainly known in sensation, while material objects (like the flower) that we normally think of as producing these experiences via the properties they bear (color, shape, aroma) are merely inferred.

These epistemological doctrines have latent metaphysical implications: because they are inferred rather than known directly, ordinary sense objects (like flowers) have the status of hypothetical or theoretical entities, and therefore may not exist. And since many ordinary sense objects are material, this calls the nature and existence of matter into question. Like Berkeley, Russell thinks it is possible that what we call “the material world” may be constructed out of elements of experience – not ideas, as Berkeley thought, but sense-data. That is, sense-data may be the ultimate reality. However, although Russell thought this was possible, he did not at this time embrace such a view. Instead, he continued to think of material objects as real, but as known only indirectly, via inferences from sense-data. This type of view is sometimes called “indirect realism.”

Although Russell is at this point willing to doubt the existence of physical objects and replace them with inferences from sense-data, he is unwilling to doubt the existence of universals, since even sense-data seem to have sharable properties. For instance, in Problems, he argues that, aside from sense data and inferred physical objects, there must also be qualities and relations (that is, universals), since in “I am in my room,” the word “in” has meaning and denotes something real, namely, a relation between me and my room (Problems, p. 80). Thus he concludes that knowledge involves acquaintance with universals.

4. 1913-1918: Occam’s Razor and Logical Atomism

In 1911 Ludwig Wittgenstein, a wealthy young Austrian, came to study logic with Russell, evidently at Frege’s urging. Russell quickly came to regard his student as a peer, and the two became friends (although their friendship did not last long). During this period, Wittgenstein came to disagree with Russell’s views on logic, meaning, and metaphysics, and began to develop his own alternatives. Surprisingly, Russell became convinced that Wittgenstein was correct both in his criticisms and in his alternative views. Consequently, during the period in question, Wittgenstein had considerable impact on the formation of Russell’s thought.

Besides Wittgenstein, another influence in this period was A.N. Whitehead, Russell’s collaborator on the Principia Mathematica, which is finally completed during this period after many years’ work.

The main strands of Russell’s development in this period concern the nature of logic and the nature of matter or physical reality. His work in and after 1914 is parsimonious about what exists while remaining wedded to metaphysical realism and Platonism. By the end of this period Russell has combined these strands in a metaphysical position called logical atomism.

a. The Nature of Logic

By 1913 the nature of form is prominent in Russell’s discussion of logical propositions, alongside his discussion of forms of facts. Russell describes logical propositions as constituted by nothing but form, saying in Theory of Knowledge that they do not have forms but are forms, that is, abstract entities (Papers 7, p. 98). He says in the same period that the study of philosophical logic is in great part the study of such forms. Under Ludwig Wittgenstein’s influence, Russell begins to conceive of the relations of metaphysics to logic, epistemology and psychology in a new way. Thus in the Theory of Knowledge (as revised in 1914) Russell admits that any sentence of belief must have a different logical form from any he has hitherto examined (Papers 7, p. 46), and, since he thinks that logic examines forms, he concludes, contra his earlier view (in “Meinong’s Theory of Complexes and Assumptions,” Papers 4), that the study of forms can’t be kept wholly separate from the theory of knowledge or from psychology.

In Our Knowledge of the External World (1914) the nature of logic plays a muted role, in large part because of Russell’s difficulties with the nature of propositions and the forms of non-existent complexes and facts. Russell argues that logic has two branches: mathematical and philosophical (Our Knowledge, pp. 49-52; 67). Mathematical logic contains completely general and a priori axioms and theorems as well as definitions such as the definition of number and the techniques of construction used, for example, in his theory of descriptions. Philosophical logic, which Russell sometimes simply calls logic, consists of the study of forms of propositions and the facts corresponding to them. The term “philosophical logic” does not mean merely a study of grammar or a meta-level study of a logical language; rather, Russell has in mind the metaphysical and ontological examination of what there is. He further argues, following Wittgenstein, that belief facts are unlike other forms of facts in so far as they contain propositions as components (Our Knowledge, p. 63).

b. The Nature of Matter

In 1914-1915, Russell rejects the indirect realism that he had embraced in 1912. He now sees material objects as constructed out of, rather than inferred from, sense-data. Crediting Alfred North Whitehead for his turn to this “method of construction,” in Our Knowledge of the External World (1914) and various related papers Russell shows how the language of logic can be used to interpret material objects in terms of classes of sense-data like colors or sounds. Even though we begin with something ultimately private - sense-data viewed from the space of our unique perspective - it is possible to relate that to the perspective of other observers or potential observers and to arrive at a class of classes of sense data. These “logical constructions” can be shown to have all the properties supposed to belong to the objects of which they are constructions. And by Occam’s Razor - the principle not to multiply entities unnecessarily - whenever it is possible to create a construction of an object with all the properties of the object, it is unnecessary to assume the existence of the object itself. Thus Russell equates his maxim “wherever possible, to substitute constructions for inferences” (“On the Relation of Sense-Data to Physics,” Papers 8) with Occam’s razor.

c. Logical Atomism

In the 1918 lectures published as Philosophy of Logical Atomism (Papers 8) Russell describes his philosophical views as a kind of logical atomism, as the view that reality consists of a great many ultimate constituents or ‘atoms’. In describing his position as “logical” atomism, he understands logic in the sense of “philosophical logic” rather than “technical logic,” that is, as an attempt to arrive through reason at what must be the ultimate constituents and forms constituting reality. Since it is by a process of a priori philosophical analysis that we reach the ultimate constituents of reality – sense data and universals – such constituents might equally have been called “philosophical” atoms: they are the entities we reach in thought when we consider what sorts of things must make up the world. Yet Russell’s metaphysical views are not determined solely a priori. They are constrained by science in so far as he believes he must take into account the best available scientific knowledge, as demonstrated in his attempt to show the relation between sense-data and the “space, time and matter” of physics (Our Knowledge, p. 10).

i. The Atoms of Experience and the Misleading Nature of Language

Russell believed that we cannot move directly from the words making up sentences to metaphysical views about which things or relations exist, for not all words and phrases really denote entities. It is only after the process of analysis that we can decide which words really denote things and thus, which things really exist. Analysis shows that many purported denoting phrases – such as words for ordinary objects like tables and chairs – can be replaced by logical constructions that, used in sentences, play the role of these words but denote other entities, such as sense-data (like patches of color) and universals, which can be included among the things that really exist.

Regarding language, Russell believed that analysis results in a logically perfect language consisting only of words that denote the data of immediate experience (sense data and universals) and logical constants, that is, words like “or” and “not” (Papers 8, p. 176).

ii. The Forms of Facts and Theory of Truth

These objects (that is, logical constructions) in their relations or with their qualities constitute the various forms of facts. Assuming that what makes a sentence true is a fact, what sorts of facts must exist to explain the truth of the kinds of sentences there are? In 1918, Russell answers this question by accounting for the truth of several different kinds of sentences: atomic and molecular sentences, general sentences, and those expressing propositional attitudes like belief.

So-called atomic sentences like “Andrew is taller than Bob” contain two names (Andrew, Bob) and one symbol for a relation (is taller than). When true, an atomic sentence corresponds to an atomic fact containing two particulars and one universal (the relation).

Molecular sentences join atomic sentences into what are often called “compound sentences” by using words like “and” or “or.” When true, molecular sentences do not correspond to a single conjunctive or disjunctive fact, but to multiple atomic facts (Papers 8, pp. 185-86). Thus, we can account for the truth of molecular propositions like “Andrew is kind or he is young” simply in terms of the atomic facts (if any) corresponding to “Andrew is kind” and “Andrew is young,” and the meaning of the word “or.” It follows that “or” is not a name for a thing, and Russell denies the existence of molecular facts.

Yet to account for negation (for example, “Andrew is not kind”) Russell thinks that we require more than just atomic facts. We require negative facts; for if there were no negative facts, there would be nothing to verify a negative sentence and falsify its opposite, the corresponding positive atomic sentence (Papers 8, pp. 187-90).

Moreover, no list of atomic facts can by itself tell us that it comprises all the facts; to convey the information expressed by sentences like “everything fair is good” requires the existence of general facts.

iii. Belief as a New Form of Fact

Russell describes Wittgenstein as having persuaded him that a belief fact is a new form of fact, belonging to a different series of facts than the series of atomic, molecular, and general facts. Russell acknowledges that belief-sentences pose a difficulty for his attempt (following Wittgenstein) to explain how the truth of the atomic sentences fully determines the truth or falsity of all other types of sentences, and he therefore considers the possibility of explaining-away belief facts. Though he concedes that expressions of propositional attitudes, that is, sentences of the form “Andrew believes that Carole loves Bob,” might, by adopting a behaviorist analysis of belief, be explained without the need of belief facts (Papers 8, pp. 191-96), he stops short of that analysis and accepts beliefs as facts containing at least two relations (in the example, belief and loves).

iv. Neutral Monism

By 1918, Russell is conscious that his arguments for mind/matter dualism and against neutral monism are open to dispute. Neutral monism opposes both materialism (the doctrine that what exists is material) and British and Kantian idealism (the doctrine that only thought or mind is ultimately real), arguing that reality is more fundamental than the categories of mind (or consciousness) and matter, and that these are simply names we give to one and the same neutral reality. The proponents of neutral monism include John Dewey and William James (who are sometimes referred to as American Realists), and Ernst Mach. Given the early Russell’s commitment to mind/matter dualism, neutral monism is to him at first alien and incredible. Still, he admits being drawn to the ontological simplicity it allows, which fits neatly with his preference for constructions over inferences and his increasing respect for Occam’s razor, the principle of not positing unnecessary entities in one’s ontology (Papers 8, p. 195).

5. 1919-1927: Neutral Monism, Science, and Language

During this period, Russell’s interests shift increasingly to questions belonging to the philosophy of science, particularly to questions about the kind of language necessary for a complete description of the world. Many distinct strands feed into Russell’s thought in this period.

First, in 1919 he finally breaks away from his longstanding dualism and shifts to a kind of neutral monism. This is the view that what we call “mental” and what we call “material” are really at bottom the same “stuff,” which is neither mental nor material but neutral. By entering into classes and series of classes in different ways, neutral stuff gives rise to what we mistakenly think of as distinct categories, the mental and the material (Analysis of Mind, p. 105).

Second, Russell rather idiosyncratically interweaves his new monist ideas with elements of behaviorism, especially in advancing a view of language that moves some of what he formerly took to be abstract entities into the domain of stimuli or events studied by psychology and physiology. In neither case is his allegiance complete or unqualified. For example, he rejects a fully behaviorist account of language by accepting that meaning is grounded in mental images available to introspection but not to external observation. Clearly, this is incompatible with behaviorism. Moreover, this seems to commit Russell to intrinsically mental particulars. This would stand in opposition to neutral monism, which denies there are any intrinsically mental (or physical) particulars. (See Analysis of Mind, Lecture X.)

Third, he begins in this same period to accept Ludwig Wittgenstein’s conception (in the Tractatus Logico-Philosophicus) of logical propositions as tautologies that say nothing about the world.

Though these developments give Russell’s work the appearance of a retreat from metaphysical realism, his conception of language and logic remains rooted in realist, metaphysical assumptions.

a. Mind, Matter, and Meaning

Because of his neutral monism, Russell can no longer maintain the distinction between a mental sensation and a material sense-datum, which was crucial to his earlier constructive work. Constructions are now carried out in terms that do not suppose mind and matter (sensations and sense-data) to be ultimately distinct. Consciousness is no longer seen as a relation between something psychical, a subject of consciousness, and something physical, a sense datum (Analysis of Mind, pp. 142-43). Instead, the so-called mental and so-called physical dimensions are both constructed out of classes of classes of perceived events, between which there exist – or may exist – correlations.

Meaning receives a similar treatment: instead of a conception of minds in a relation to things that are the meanings of words, Russell describes meaning in terms of classes of events stimulated or caused by certain other events (Analysis of Mind, Chapter X). Assertions that a complex exists hereafter reduce to assertions of some fact about classes, namely that the constituents of classes are related in a certain way.

His constructions also become more complex to accommodate Einstein’s theory of relativity. This work is carried out both in his 1921 Analysis of Mind, which is occupied in part with explaining mind and consciousness in non-mental terms, and in his 1927 Analysis of Matter, which returns to the analysis of the so-called material objects that in 1914 were constructed out of classes of sense-data.

b. Private versus Public Data

Despite his monism, Russell continues to distinguish psychological and physical laws (“On Propositions,” Papers 8, p. 289), but this dualist element is mitigated by his belief that whether an experience exists in and obeys the laws of physical space is a matter of degree. Some sensations are localized in space to a very high degree, others less so, and some not at all. For example, when we imagine forming the word “orange” in the mouth, the throat constricts slightly, as if to mouth the word. In such a case there is no clear distinction between the image we have of words in the mouth and our mouth-and-lip sensations (Papers 8, p. 286). Depending on the context, the sensation can be labeled either mental or material.

Moreover, tactile images of words in the mouth do not violate the laws of physics when seen as material events located in the body, specifically, in the mouth or jaw. In contrast, visual images have no location in a body; for instance, the image of your friend seated in a chair is located neither in your mouth, jaw, nor anywhere else in your body. Further, many visual images cannot be construed as bodily sensations, as images of words can, since no relevant physical event corresponding to the visual image occurs. His admission that visual images are always configured under psychological laws seems to commit Russell to a doctrine of mental particulars. For this reason, Russell appears not so much to adopt neutral monism, which rejects such entities, as to adapt it to his purposes.

c. Language, Facts, and Psychology

Immediately after the lectures conclude, while in prison writing up notes eventually published in the 1921 Analysis of Mind (Papers 8, p. 247), Russell introduces a distinction between what a proposition expresses and what it asserts or states. Among the things that are expressed in sentences are logical concepts, words like “not” and “or,” which derive meaning from psychological experiences of rejection and choice. In these notes and later writings, belief is explained in terms of having experiences like these about image propositions (Analysis of Mind, p. 251). Thus what we believe when we believe a true negative proposition is explained psychologically as a state of disbelief towards a positive image proposition (Analysis of Mind, p. 276). Despite this analysis of the meaning of words for negation, Russell continues to think that negative facts account for what a negative belief asserts, that is, for what makes it true. The psychological account doesn’t do away with the need for them, Russell explains, because the truth or falsity of a proposition is due to some fact, not to a subjective belief or state.

d. Universals

Russell continues to analyze truth in terms of relation to facts, and to characterize facts as atomic, negative, and so on. Moreover, he continues to assume that we can talk about the constituents of facts in terms of particulars and universals. He does not abandon his belief that there are universals; indeed, in the 1920s he argues that we have no images of universals but can intend or will that an image, which is always a particular, ‘mean’ a universal (“On Propositions,” Papers 8, p. 293). This approach is opposed by those like Frank P. Ramsey, for whom notions like “atomic fact” are analogous to “spoken word”: they index language rather than reality. For Ramsey - and others in the various emerging schools of philosophy for which metaphysics is anathema - Russell’s approach confuses categories about language with categories of things in the world and in doing so is too metaphysical and too realist.

e. The Syntactical View

To some extent, Russell accepts the syntactical view in the following sense. Beginning in 1918 he concedes that logical truths are not about the world but are merely tautologies, and he comes to admit that tautologies are nothing more than empty combinations of meaningless symbols. Yet Russell’s conception of language and logic remains in some respects deeply metaphysical. For example, when, following Ramsey’s suggestion, Russell claims in the 1925 second edition of Principia that a propositional function occurs only in the propositions that are its values (Principia, p. xiv and Appendix C), he again aligns that idea with a doctrine of predicates as incomplete symbols, that is, with a metaphysical doctrine of the distinction between universals and particulars. Opposing this, Ramsey praises what he thinks is Wittgenstein’s deliberate attempt to avoid metaphysical characterizations of the ultimate constituents of facts, a view he infers from Wittgenstein’s cryptic remark in the Tractatus Logico-Philosophicus that, in a fact, objects “hang together” like links in a chain.

6. 1930-1970: Anti-positivist Naturalism

The choice of years framing this final period is somewhat artificial, since Russell’s work retains a great deal of unity with the doctrines laid down in the 1920s. Nevertheless, there is a shift in tone, largely due to the emergence of logical positivism, that is, the views proposed by the members of the Vienna Circle. Russell’s work in the remaining decades of his life must be understood as metaphysical in orientation and aim, however scientific its language, and as shaped in opposition to doctrines emanating from logical positivism and the legacy of Ludwig Wittgenstein’s claim that philosophical (metaphysical) propositions are nonsensical pseudo-propositions. Yet even as it remains metaphysical in orientation, with respect to logic Russell’s work continues to draw back from his early realism.

a. Logical Truths

In his 1931 introduction to the second edition of the Principles of Mathematics, Russell writes that, “logical constants…must be treated as part of the language, not as part of what the language speaks about,” adopting a view that he admits is “more linguistic than I believed to be at the time I wrote the Principles” (Principles, p. xi) and that is “less Platonic, or less realist in the medieval sense of the word” (Principles, p. xiv). At the same time he says that he was too generous when he first wrote the Principles in saying that a proposition belongs to logic or mathematics if it contains nothing but logical constants (understood as entities), for he now concedes there are extra-logical propositions (for example “there are three things”) that can be stated in purely logical terms. Moreover, though he now thinks that (i) logic is distinguished by the tautological nature of its propositions, and (ii) following Rudolf Carnap he explains tautologies in terms of analytic propositions, that is, those that are true in virtue of form, Russell notes that we have no clear definition of what it is to be true in virtue of form, and hence no clear idea of what is distinctive to logic (Principles, p. xii). Yet, in general, he no longer thinks of logical propositions as completely general truths about the world, related to those of the special sciences, albeit more abstract.

b. Empirical Truths

In his later work, Russell continues to believe that, when a proposition is false, it is so because of a fact. Thus, against logical positivists like Neurath, he insists that when empirical propositions are true, “true” has a different meaning than it does for propositions of logic. It is this assumption, that empirical truth rests on a relation to facts, that he sees as undermined by logical positivists like Carnap and Neurath, who treat language as socially constructed and as isolable from facts. But this is wrong, he thinks, since language consists of propositional facts that relate to other facts and is therefore not merely constructed. It is this he has in mind when, in the 1936 “Limits of Empiricism” (Papers 10), he argues that Carnap and Wittgenstein present a view that is too syntactical; that is, truth is not merely syntactical, nor a matter of propositions cohering. Consequently, despite admitting that his view of logic is less realist, less metaphysical, than in the past, Russell is unwilling to adopt metaphysical agnosticism, and he continues to think that the categories in language point beyond language to the nature of what exists.

c. A Priori Principles

Against logical positivism, Russell thinks that to defend the very possibility of objective knowledge it is necessary to permit knowledge to rest in part on non-empirical propositions. In Inquiry into Meaning and Truth (1940) and Human Knowledge: Its Scope and Limits (1948) Russell views the claim that all knowledge is derived from experience as self-refuting and hence inadequate to a theory of knowledge: as David Hume showed, empiricism uses principles of reason that cannot be proved by experience. Specifically, inductive reasoning about experience presupposes that the future will resemble the past, but this belief or principle cannot itself be proved by induction from experience without incurring a vicious circle. Russell is therefore willing to accept induction as involving a non-empirical logical principle, since, without it, science is impossible. He thus continues to hold that there are general principles, composed of universals, which we know a priori. Russell affirms the existence of general non-empirical propositions on the grounds, for example, that the incompatibility of red/blue is neither logical nor a generalization from experience (Inquiry, p. 82). Finally, against the logical positivists, Russell rejects the verificationist principle that propositions are true or false only if they are verifiable, and he rejects the idea that propositions make sense only if they are empirically verifiable.

d. Universals

Though Russell’s late period work is empiricist in holding that experience is the ultimate basis of knowledge, it remains rationalist in that some general propositions must be known independently of experience, and realist with respect to universals. Russell argues for the existence of universals against what he sees as an overly syntactical view that eliminates them as entities. That is, he asserts that (some) relations are non-linguistic. Universals figure in Russell’s ontology, in his so-called bundle theory, which explains a thing as a bundle of coexisting properties, rejecting the notion of a substance as an unknowable ‘this’ distinct from and underlying its properties. (See Inquiry, Chapter 6.) The substance-property conception is natural, he says, if sentences like “this is red” are treated as consisting of a subject and a predicate. However, in sentences like “redness is here,” Russell treats the word “redness” as a name rather than as a predicate. On the substance-property view, two substances may have all their properties in common and yet be distinct, but this possibility vanishes on the bundle theory, since a thing is its properties. Aside from his ontology, Russell’s reasons for maintaining the existence of universals are largely epistemological. We may be able to eliminate a great many supposed universals, but at least one, such as “is similar,” will remain necessary for a full account of our perception and knowledge (Inquiry, p. 344). Russell uses this notion to show that it is unnecessary to assume the existence of negative facts, which until the 1940s he thought necessary to explain truth and falsity. For several decades his psychological account of negative propositions as a state of rejection towards some positive proposition coexisted with his account, using negative facts, of what justifies saying that a negative belief is true and a positive one is false.
Thus Russell does not eliminate negative facts until 1948, in Human Knowledge: Its Scope and Limits, where one of his goals is to explain how observation can determine the truth of a negative proposition like “this is not blue” and the falsity of a positive one like “this is blue” without commitment to negative facts (Human Knowledge, Chapter IX). In that text, he argues that what makes “this is not blue” true (and what makes “this is blue” false) is the existence of some color differing from blue. Unlike in his earlier period, he now thinks that the existence of this other color neither is nor implies a negative fact.

e. The Study of Language

Russell’s late work assumes that it is meaningful and possible to study the relation between experience and language, and how certain extra-linguistic experiences give rise to linguistic ones, for example, how the sight of butter causes someone to assert “this is butter” or how the taste of cheese causes someone to assert “this is not butter.” Language, for Russell, is a fact and can be examined scientifically like any other fact. In The Logical Syntax of Language (1934) Rudolf Carnap had argued that a science may choose to talk in subjective terms about sense data or in objective terms about physical objects, since there are multiple equally legitimate ways to talk about the world. Hence Carnap does not believe that in studying language scientifically we must take account of metaphysical contentions about the nature of experience and its relation to language. Russell opposes Carnap’s work and logical positivism, that is, logical empiricism, for dismissing his kind of approach as metaphysical nonsense rather than a subject of legitimate philosophical study, and he defends it as an attempt to arrive at the truth about the language of experience, that is, as an investigation into an empirical phenomenon.

7. References and Further Reading

The following is a selection of texts for further reading on Russell’s metaphysics. A great deal of his writing on logic, the theory of knowledge, and on educational, ethical, social, and political issues is therefore not represented here. Given the staggering amount of writing by Russell, not to mention on Russell, it is not intended to be exhaustive. The definitive bibliographical listing of Russell’s own publications takes up three volumes; it is to be found in Blackwell, Kenneth, Harry Ruja, and Sheila Turcon. A Bibliography of Bertrand Russell, 3 volumes. London and New York: Routledge, 1994.

a. Primary Sources

i. Monographs

  • 1897. An Essay on the Foundations of Geometry. Cambridge, UK: Cambridge University Press.
  • 1900. A Critical Exposition of the Philosophy of Leibniz. Cambridge, UK: Cambridge University Press.
  • 1903. The Principles of Mathematics. Cambridge, UK: Cambridge University Press.
  • 1910-1913. Principia Mathematica, with Alfred North Whitehead. 3 vols. Cambridge, UK: Cambridge University Press. Revised ed., 1925-1927.
  • 1912. The Problems of Philosophy. London: Williams and Norgate.
  • 1914. Our Knowledge of the External World as a Field for Scientific Method in Philosophy. Chicago: Open Court. Revised edition, London: George Allen & Unwin, 1926.
  • 1919. Introduction to Mathematical Philosophy. London: George Allen & Unwin.
  • 1921. The Analysis of Mind. London: George Allen & Unwin.
  • 1927. The Analysis of Matter. London: Kegan Paul.
  • 1940. An Inquiry into Meaning and Truth. New York: W. W. Norton.
  • 1948. Human Knowledge: Its Scope and Limits. London: George Allen & Unwin.

ii. Collections of Essays

  • 1910. Philosophical Essays. London: Longmans, Green. Revised ed., London: George Allen & Unwin, 1966.
  • 1918. Mysticism and Logic and Other Essays. London: Longmans, Green.
  • 1956. Logic and Knowledge: Essays 1901-1950, ed. Robert Charles Marsh. London: George Allen & Unwin.
  • 1973. Essays in Analysis, edited by Douglas Lackey. London: George Allen & Unwin.

iii. Articles

  • “Letter to Frege.” (Written in 1902) In From Frege to Gödel, ed. J. van Heijenoort, 124-5. Cambridge, Mass.: Harvard Univ. Press, 1967.
  • “Meinong’s Theory of Complexes and Assumptions.” Mind 13 (1904): 204-19, 336-54, 509-24. Repr. Essays in Analysis.
  • “On Denoting.” Mind 14 (1905): 479-493. Repr. Logic and Knowledge.
  • Review of Meinong et al., Untersuchungen zur Gegenstandstheorie und Psychologie. Mind 14 (1905): 530-8. Repr. Essays in Analysis.
  • “On the Substitutional Theory of Classes and Relations.” In Essays in Analysis. Written 1906.
  • “On the Nature of Truth.” Proceedings of the Aristotelian Society 7 (1906-07): 28-49. Repr. (with the final section excised) as “The Monistic Theory of Truth” in Philosophical Essays.
  • “Mathematical Logic as Based on the Theory of Types.” American Journal of Mathematics 30 (1908): 222-262. Repr. Logic and Knowledge.
  • “On the Nature of Truth and Falsehood.” In Philosophical Essays.
  • “Analytic Realism.” Bulletin de la société française de philosophie 11 (1911): 53-82. Repr. Collected Papers 6.
  • “Knowledge by Acquaintance and Knowledge by Description.” Proceedings of the Aristotelian Society 11 (1911): 108-128. Repr. Mysticism and Logic.
  • “On the Relations of Universals and Particulars.” Proceedings of the Aristotelian Society 12 (1912): 1-24. Repr. Logic and Knowledge.
  • “The Ultimate Constituents of Matter.” The Monist, 25 (1915): 399-417. Repr. Mysticism and Logic.
  • “The Philosophy of Logical Atomism.” The Monist 28 (1918): 495-527; 29 (1919): 32-63, 190-222, 345-80. Repr. Logic and Knowledge. Published in 1972 as Russell’s Logical Atomism, edited and with an introduction by David Pears. London: Fontana. Republished in 1985 as Philosophy of Logical Atomism, with a new introduction by D. Pears.
  • “On Propositions: What They Are and How They Mean.” Proceedings of the Aristotelian Society. Sup. Vol. 2 (1919): 1-43. Repr. Logic and Knowledge.
  • “The Meaning of ‘Meaning.’” Mind 29 (1920): 398-401.
  • “Logical Atomism.” In Contemporary British Philosophers, ed. J.H. Muirhead, 356-83. London: Allen & Unwin, 1924. Repr. Logic and Knowledge.
  • Review of Ramsey, The Foundations of Mathematics. Mind 40 (1931): 476-82.
  • “The Limits of Empiricism.” Proceedings of the Aristotelian Society 36 (1936): 131-50.
  • “On Verification.” Proceedings of the Aristotelian Society 38 (1938): 1-20.
  • “My Mental Development.” In The Philosophy of Bertrand Russell, ed. P.A. Schilpp, 1-20. Evanston: Northwestern University, 1944.
  • “Reply to Criticisms.” In The Philosophy of Bertrand Russell, ed. P.A. Schilpp. Evanston: Northwestern, 1944.
  • “The Problem of Universals.” Polemic, 2 (1946): 21-35. Repr. Collected Papers 11.
  • “Is Mathematics Purely Linguistic?” In Essays in Analysis, 295-306.
  • “Logical Positivism.” Revue internationale de philosophie 4 (1950): 3-19. Repr. Logic and Knowledge.
  • “Logic and Ontology.” Journal of Philosophy 54 (1957): 225-30. Reprinted My Philosophical Development.
  • “Mr. Strawson on Referring.” Mind 66 (1957): 385-9. Repr. My Philosophical Development.
  • “What is Mind?” Journal of Philosophy 55 (1958): 5-12. Repr. My Philosophical Development.

iv. The Collected Papers of Bertrand Russell

  • Volume 1. Cambridge Essays, 1888-99. Ed. Kenneth Blackwell, Andrew Brink, Nicholas Griffin, Richard A. Rempel and John G. Slater. London: George Allen & Unwin, 1983.
  • Volume 2. Philosophical Papers, 1896-99. Ed. Nicholas Griffin and Albert C. Lewis. London: Unwin Hyman, 1990.
  • Volume 3. Towards the “Principles of Mathematics,” 1900-02. Ed. Gregory H. Moore. London and New York: Routledge, 1994.
  • Volume 4. Foundations of Logic, 1903-05. Ed. Alasdair Urquhart. London and New York: Routledge, 1994.
  • Volume 6. Logical and Philosophical Papers, 1909-13. Ed. John G. Slater. London and New York: Routledge, 1992.
  • Volume 7. Theory of Knowledge: The 1913 Manuscript. Ed. Elizabeth Ramsden Eames. London: George Allen & Unwin, 1984.
  • Volume 8. The Philosophy of Logical Atomism and Other Essays, 1914-1919. Ed. John G. Slater. London: George Allen & Unwin, 1986.
  • Volume 9. Essays on Language, Mind, and Matter, 1919-26. Ed. John G. Slater. London: Unwin Hyman, 1988.
  • Volume 10. A Fresh Look at Empiricism, 1927-1942. Ed. John G. Slater. London and New York: Routledge, 1996.
  • Volume 11. Last Philosophical Testament, 1943-1968. Ed. John G. Slater. London and New York: Routledge, 1997.

v. Autobiographies and Letters

  • 1944. “My Mental Development.” The Philosophy of Bertrand Russell, ed. Paul A. Schilpp, 1-20. Evanston: Northwestern University.
  • 1956. Portraits from Memory and Other Essays. London: George Allen & Unwin.
  • 1959. My Philosophical Development. London: George Allen & Unwin.
  • 1967-9. The Autobiography of Bertrand Russell. 3 vols. London: George Allen & Unwin.

b. Secondary sources

i. General Surveys

  • Ayer, A.J. Bertrand Russell. New York: Viking Press, 1972.
  • Dorward, Alan. Bertrand Russell: A Short Guide to His Philosophy. London: Longmans, Green, and Co, 1951.
  • Eames, Elizabeth Ramsden. Bertrand Russell’s Dialogue with His Contemporaries. Carbondale, Ill.: Southern Illinois Univ. Press, 1989.
  • Griffin, Nicholas, ed. The Cambridge Companion to Bertrand Russell. Cambridge, UK: Cambridge University Press, 2003.
  • Jager, Ronald. The Development of Bertrand Russell’s Philosophy. London: George Allen and Unwin, 1972.
  • Klemke, E.D., ed. Essays on Bertrand Russell. Urbana: Univ. of Illinois Press, 1970.
  • Sainsbury, R. M. Russell. London: Routledge & Kegan Paul, 1979.
  • Schilpp, Paul, ed. The Philosophy of Bertrand Russell. Evanston: Northwestern University, 1944.
  • Schoenman, Ralph, ed. Bertrand Russell: Philosopher of the Century. London: Allen & Unwin, 1967.
  • Slater, John G. Bertrand Russell. Bristol: Thoemmes, 1994.

ii. History of Analytic Philosophy

  • Griffin, Nicholas. Russell’s Idealist Apprenticeship. Oxford: Clarendon, 1991.
  • Hylton, Peter. Russell, Idealism and the Emergence of Analytic Philosophy. Oxford: Clarendon Press, 1990.
  • Irvine, A.D. and G.A. Wedeking, eds. Russell and Analytic Philosophy. Toronto: University of Toronto Press, 1993.
  • Monk, Ray, and Anthony Palmer, eds. Bertrand Russell and the Origins of Analytic Philosophy. Bristol: Thoemmes Press, 1996.
  • Pears, David. Bertrand Russell and the British Tradition in Philosophy. London: Fontana Press, 1967.
  • Savage, C. Wade and C. Anthony Anderson, eds. Rereading Russell: Essays on Bertrand Russell’s Metaphysics and Epistemology. Minneapolis: University of Minnesota Press, 1989.
  • Stevens, Graham. The Russellian Origins of Analytical Philosophy: Bertrand Russell and the Unity of the Proposition. London and New York: Routledge, 2005.

iii. Logic and Metaphysics

  • Costello, Harry. “Logic in 1914 and Now.” Journal of Philosophy 54 (1957): 245-263.
  • Frege, Gottlob. Philosophical and Mathematical Correspondence. Chicago: University of Chicago Press, 1980.
  • Griffin, Nicholas. “Russell on the Nature of Logic (1903-1913).” Synthese 45 (1980): 117-188.
  • Hylton, Peter. “Logic in Russell’s Logicism.” In The Analytic Tradition, ed. Bell and Cooper, 137-72. Oxford: Blackwell, 1990.
  • Hylton, Peter. “Functions and Propositional Functions in Principia Mathematica.” In Russell and Analytic Philosophy, ed. Irvine and Wedeking, 342-60. Toronto: Univ. of Toronto Press, 1993.
  • Linsky, Bernard. Russell’s Metaphysical Logic. Stanford: CSLI Publications, 1999.
  • Ramsey, Frank P. The Foundations of Mathematics. Paterson, NJ: Littlefield, Adams and Co, 1960. Repr. as Philosophical Papers. Cambridge, UK: Cambridge Univ. Press, 1990.
  • Frege, Gottlob. “Letter to Russell.” In From Frege to Gödel, ed. J. van Heijenoort, 126-8. Cambridge, Mass.: Harvard Univ. Press, 1967.
  • Ramsey, F.P. “Mathematical Logic.” Mathematical Gazette 13 (1926), 185-194. Repr. Philosophical Papers, F.P. Ramsey, 225-44. Cambridge, UK: Cambridge Univ. Press, 1990.
  • Rouilhan, Philippe de. “Substitution and Types: Russell’s Intermediate Theory.” In One Hundred Years of Russell’s Paradox, ed. Godehard Link, 401-16. Berlin: De Gruyter, 2004.

iv. Meaning and Metaphysics

  • Burge, T. “Truth and Singular Terms.” In Reference, Truth and Reality, ed. M. Platts, 167-81. London: Routledge & Keegan Paul, 1980.
  • Donnellan, K.S. “Reference and Definite Descriptions.” Philosophical Review 75 (1966): 281-304.
  • Geach, P. Reference and Generality. Ithaca, NY: Cornell University Press, 1962.
  • Hylton, Peter. “The Significance of On Denoting.” In Rereading Russell, ed. Savage and Anderson, 88-107. Minneapolis: Univ. of Minnesota, 1989.
  • Kneale, William. “The Objects of Acquaintance.” Proceedings of the Aristotelian Society 34 (1934): 187-210.
  • Kripke, S. Naming and Necessity. Cambridge, Mass.: Harvard University Press, 1980.
  • Linsky, B. “The Logical Form of Descriptions.” Dialogue 31 (1992): 677-83.
  • Marcus, R. “Modality and Description.” Journal of Symbolic Logic 13 (1948): 31-37. Repr. in Modalities: Philosophical Essays. New York: Oxford University Press, 1993.
  • Neale, S. Descriptions. Cambridge, Mass.: MIT Press Books, 1990.
  • Searle, J. “Proper Names.” Mind 67 (1958): 166-173.
  • Sellars, Wilfrid. “Acquaintance and Description Again.” Journal of Philosophy 46 (1949): 496-504.
  • Strawson, Peter F. “On Referring.” Mind 59 (1950): 320-344.
  • Urmson, J.O. “Russell on Acquaintance with the Past.” Philosophical Review 78 (1969): 510-15.

v. Beliefs and Facts

  • Blackwell, Kenneth. “Wittgenstein’s Impact on Russell’s Theory of Belief.” M.A. thesis., McMaster University, 1974.
  • Carey, Rosalind. Russell and Wittgenstein on the Nature of Judgment. London: Continuum, 2007.
  • Eames, Elizabeth Ramsden. Bertrand Russell’s Theory of Knowledge. London: George Allen and Unwin, 1969.
  • Griffin, Nicholas. “Russell’s Multiple-Relation Theory of Judgment.” Philosophical Studies 47 (1985): 213-247.
  • Hylton, Peter. “The Nature of the Proposition and the Revolt Against Idealism.” In Philosophy in History, ed. Rorty, et al., 375-97. Cambridge, UK: Cambridge Univ. Press, 1984.
  • McGuinness, Brian. “Bertrand Russell and Ludwig Wittgenstein’s Notes on Logic.” Revue Internationale de Philosophie 26 (1972): 444-60.
  • Oaklander, L. Nathan and Silvano Miracchi. “Russell, Negative Facts, and Ontology.” Philosophy of Science 47 (1980): 434-55.
  • Pears, David. “The Relation Between Wittgenstein’s Picture Theory of Propositions and Russell’s Theories of Judgment.” Philosophical Review 86 (1977): 177-96.
  • Rosenberg, Jay F. “Russell on Negative Facts.” Nous 6 (1972), 27-40.
  • Stevens, Graham. “From Russell’s Paradox to the Theory of Judgment: Wittgenstein and Russell on the Unity of the Proposition.” Theoria, 70 (2004): 28-61.

vi. Constructions

  • Blackwell, Kenneth. “Our Knowledge of Our Knowledge.” Russell: The Journal of the Bertrand Russell Archives, no. 12 (1973): 11-13.
  • Carnap, Rudolf. The Logical Structure of the World & Pseudo Problems in Philosophy, trans. R. George. Berkeley: Univ. of California Press, 1967.
  • Fritz, Charles Andrew, Jr. Bertrand Russell’s Construction of the External World. London: Routledge and Kegan Paul, 1952.
  • Goodman, Nelson. The Structure of Appearance. Cambridge Mass: Harvard University Press, 1951.
  • Pincock, Christopher. “Carnap, Russell and the External World.” In The Cambridge Companion to Carnap, ed. M. Friedman and R. Creath. Cambridge, UK: Cambridge University Press, 2007.
  • Prichard, H. A. “Mr. Bertrand Russell on Our Knowledge of the External World.” Mind 24 (1915), 1-40.
  • Sainsbury, R.M. “Russell on Constructions and Fictions.” Theoria 46 (1980): 19-36.
  • Wisdom, J. “Logical Constructions (I.).” Mind 40 (April 1931): 188-216.

vii. Logical Atomism

  • Hochberg, Herbert. Thought, Fact and Reference: The Origins and Ontology of Logical Atomism. Minneapolis: Univ. of Minnesota Press, 1978.
  • Linsky, Bernard. “The Metaphysics of Logical Atomism.” In The Cambridge Companion to Bertrand Russell, ed. N. Griffin, 371-92. Cambridge, UK: Cambridge Univ. Press, 2003.
  • Livingston, Paul. “Russellian and Wittgensteinian Atomism.” Philosophical Investigations 24 (2001): 30-54.
  • Lycan, William. “Logical Atomism and Ontological Atoms.” Synthese 46 (1981): 207-29.
  • Patterson, Wayne A. Bertrand Russell’s Philosophy of Logical Atomism. New York: Peter Lang Publishing, 1993.
  • Pears, David. ‘Introduction.’ In The Philosophy of Logical Atomism, B. Russell, 1-34. Chicago: Open Court, 1985.
  • Rodríguez-Consuegra, Francisco. “Russell’s Perilous Journey from Atomism to Holism 1919-1951.” In Bertrand Russell and the Origins of Analytical Philosophy, ed. Ray Monk and Anthony Palmer, 217-44. Bristol: Thoemmes, 1996.
  • Simons, Peter. “Logical Atomism.” In The Cambridge History of Philosophy, 1870-1945, ed. Thomas Baldwin, 383-90. Cambridge, UK: Cambridge Univ. Press, 2003.

viii. Naturalism and Psychology

  • Garvin, Ned S. “Russell’s Naturalistic Turn.” Russell: The Journal of Bertrand Russell Studies, n.s. 11, no. 1 (Summer 1991).
  • Gotlind, Erik. Bertrand Russell’s Theories of Causation. Uppsala: Almquist and Wiksell, 1952.
  • O’Grady, Paul. “The Russellian Roots of Naturalized Epistemology.” Russell: The Journal of Bertrand Russell Studies, n.s. 15, no. 1 (Summer 1995).
  • Stevens, Graham. “Russell’s Re-Psychologising of the Proposition.” Synthese 151, no. 1 (2006): 99-124.

ix. Biographies

  • Clark, Ronald W. The Life of Bertrand Russell. London: Jonathan Cape Ltd, 1975.
  • Monk, Ray. Bertrand Russell: The Spirit of Solitude, 1872-1921. New York: The Free Press, 1996.
  • Monk, Ray. Bertrand Russell 1921-1970: The Ghost of Madness. London: Jonathan Cape, 2000.
  • Moorehead, Caroline. Bertrand Russell. New York: Viking, 1992.
  • Wood, Alan. Bertrand Russell: The Passionate Sceptic. London: Allen and Unwin, 1957.

Author Information

Rosalind Carey
City University of New York
U. S. A.

Phenomenology and Time-Consciousness

Edmund Husserl, founder of the phenomenological movement, employs the term "phenomenology" in its etymological sense as the activity of giving an account (logos) of the way things appear (phainomenon). Hence, a phenomenology of time attempts to account for the way things appear to us as temporal, or how we experience time. Phenomenology offers neither metaphysical speculation about time’s relation to motion (as does Aristotle), nor a psychological account of time’s past and future moments (as does Augustine), nor transcendental-cognitive presumptions about time as a mind-dependent construct (as does Kant). Rather, it investigates the essential structures of consciousness that make possible the unified perception of an object that occurs across successive moments. In its nuanced attempts to provide an account of the form of intentionality presupposed by all experience, the phenomenology of time-consciousness provides important contributions to philosophical issues such as perception, memory, expectation, imagination, habituation, self-awareness, and self-identity over time.

Within the phenomenological movement, time-consciousness is central. The most fundamental and important of all phenomenological problems, time-consciousness pervades Husserl’s theories of constitution, evidence, objectivity and inter-subjectivity. Within continental philosophy broadly construed, the movements of existential phenomenology, hermeneutics, post-modernism and post-structuralism, as well as the work of Martin Heidegger, Jean-Paul Sartre, Maurice Merleau-Ponty, Hans-Georg Gadamer and Jacques Derrida, all return in important ways to Husserl’s theory of time-consciousness. After devoting considerable attention to Husserl’s reflections on time-consciousness, this article treats the developments of the phenomenological account of time in Heidegger, Sartre, and Merleau-Ponty.

Table of Contents

  1. Husserl, Phenomenology, and Time-Consciousness
    a. Phenomenological Reduction and Time-Consciousness
    b. Phenomenology, Experienced Time and Temporal Objects
    c. Phenomenology Not to be Confused with Augustine’s Theory of Time
    d. Phenomenology and the Consciousness of Internal Time: Living-Present
    e. The Living-Present’s Double-Intentionality
  2. Heidegger on Phenomenology and Time
    a. Heidegger and Dasein’s Temporality
  3. Sartre and the Temporality of the “For-Itself”
  4. Merleau-Ponty and the Phenomenology of Ambiguity: The Subject as Time
  5. References and Further Reading
    a. Primary Sources
    b. Secondary Sources

1. Husserl, Phenomenology, and Time-Consciousness

Phenomenology maintains that consciousness, in its very nature as activity, is intentional. In its care for and interest in the world, consciousness transcends itself and attends to the world by a myriad of intentional acts, e.g., perceiving, remembering, imagining, willing, judging, etc.—hence Husserl’s claim that intentional consciousness is correlated (that is, co-related) to the world. Although the notion of intentionality includes the practical connotations of willful interest, it fundamentally denotes the relation consciousness has to objects in the world. Of these many modes of intentionality, time-consciousness arguably constitutes the central one for understanding consciousness’s intentional, transcending character. Put differently, time-consciousness underlies these other intentional acts because they presuppose or include the consciousness of internal time. For this and other reasons, Husserl, in his On the Phenomenology of the Consciousness of Internal Time (1893-1917) (1991), deemed time-consciousness the most “important and difficult of all phenomenological problems” (PCIT, No. 50, No. 39). Together with Analyses Concerning Passive and Active Syntheses (2001), Cartesian Meditations (1997) and Die “Bernauer Manuskripte” über das Zeitbewußtsein 1917/18 (2001), this work seeks to account for this fundamental form of intentionality that the experience of temporal (e.g., spatial and auditory) and non-temporal (e.g., mathematical and logical) objects alike presupposes.

All experience entails a temporal horizon, according to phenomenology. This claim seems indisputable: we rush, we long, we endure, we plan, we reminisce, we perceive, we speak, we listen, etc. To highlight the difficulty and importance of explaining the structures of consciousness that make possible the experience of time, Husserl, like his contemporaries Henri Bergson and William James, favored the example of listening to a melody. For a melody to be a melody, it must have distinguishable though inseparable moments. And for consciousness to apprehend a melody, its structure must have features capable of respecting these features of temporal objects. Certainly, we can “time” the moments of a temporal object, a melody, with discrete seconds (measured by clocks). But this scientific and psychological account of time, which, following Newton, considers time as an empty container of discrete, atomistic nows, is not adequate to the task of explaining how consciousness experiences a temporal object. On this Newtonian view of time, each tone spreads its content out in a corresponding now, but each now and thus each tone remains separated from every other. Newtonian time can explain the separation of moments in time but not the continuity of these moments. Since temporal objects, like a melody or a sentence, are characterized by and experienced as a unity across a succession, an account of the perception of a temporal object must explain how we synthesize a flowing object in such a way that we (i) preserve the position of each tone, (ii) maintain the unity of the melody, and (iii) relate the tones to one another without collapsing the differences in their order.

Bergson, James and Husserl realized that if our consciousness were structured in such a way that each moment occurred in strict separation from every other (like planks of a picket fence), then we never could apprehend or perceive the unity of our experiences or enduring objects in time otherwise than as a convoluted patchwork. To avoid this quantitative view of time as a container, Husserl’s phenomenology attempts to articulate the conscious experience of lived-time as the prerequisite for the Newtonian, scientific notion of time’s reality as a march of discrete, atomistic moments measured by clocks and science. In this way, Husserl’s approach to time-consciousness shares much in common with these popular nineteenth-century treatments of time-consciousness. Yet to appreciate fully Husserl’s account of time-consciousness—the uniqueness of his contribution beyond other popular nineteenth-century accounts (de Warren 2008), and the priority he affords it in his own thinking—we first must understand phenomenology’s methodological device, the phenomenological reduction.

a. Phenomenological Reduction and Time-consciousness

Husserl believed that every experience of intentional consciousness has a temporal character or background. We experience spatial objects, both successive (e.g., a passing automobile) and stationary (e.g., a house), as temporal. We do not, on the other hand, experience all temporal objects (e.g., an imagined sequence or spoken sentence) as spatial. For the phenomenologist, even non-temporal objects (e.g., geometrical postulates) presuppose time because we experience their timeless character over time; for example, it takes time for me to count from one to five although these numbers themselves remain timeless, and it takes some people a long time to understand and appreciate the force of timeless geometrical postulates (PCIT § 45; see Brough 1991). To this point, common sense views of time may find Husserl agreeable. Such agreement ceases, however, for those who expect Husserl to proclaim that time resembles an indefinite series of nows (like seconds) passing from the future through the present into the past (as a river flows from the top of a mountain into a lake). This common sense conception of time understands the future as not-yet-now, the past as no-longer-now, and the present as what now-is, a thin, ephemeral slice of time. Such is the natural attitude’s view of time, the time of the world, of measurement, of clocks, calendars, science, management, calculation, cultural and anthropological history, etc. This common sense view is not the phenomenologist’s, who suspends all naïve presuppositions through the reduction.

Phenomenology’s fundamental methodological device, the “phenomenological reduction,” involves the philosopher’s bracketing of her natural belief about the world, much like in mathematics when we bracket questions about whether numbers are mind-independent objects. This natural belief Husserl terms the “natural attitude,” under which label he includes dogmatic scientific and philosophical beliefs, as well as uncritical, everyday, common sense assumptions. Not a denial of the external world, as Descartes methodologically proposed, the phenomenological reduction neutralizes these dimensions of the natural attitude towards experience in order to examine more closely experience and its objects just as they appear to conscious experience (Ideas I §§ 44-49; Sokolowski 2000). Put less technically, one could consider phenomenology a critical rather than habitual or dogmatic approach to understanding the world. To call phenomenology a critical enterprise means that it is an enterprise guided by the goal of faithfully describing what experience gives us—thus phenomenology’s famed return to the things themselves—rather than defaulting to what we with our dogmas and prejudices expect from experience—thus phenomenology’s famed self-description as a “presuppositionless science” (Logical Investigations).

That the phenomenologist suspends her natural attitude means that a phenomenology of time bypasses the inquiry into both natural time considered as a metaphysical entity and scientific world time considered as a quantitative construct available for observation and necessary for calculation (PCIT § 2). Without prejudice to the sciences, the reduction also suspends all philosophical presuppositions about time’s metaphysical, psychological or transcendental-cognitive nature. Hence, the phenomenological reduction enables Husserl to examine the structures of consciousness that allow us to apprehend and thus characterize the modes of temporal objects appearing as now, past or future. As Husserlians often express it, Husserl concerns himself not with the content of an object or event in time (e.g., listening to a sentence) but with how an object or event appears as temporal (Brough 1991).

As this discussion about the effect of the reduction on Husserl’s account of time implies, Husserl distinguishes three levels of time for our consideration: (3) world[ly] or objective time; (2) personalistic or subjective time; and (1) the consciousness of internal time. We can make assessments and measurements, e.g., declaring things simultaneous or enduring, at the level of objective time only because we experience a succession of mental states in our subjective conscious life. Our awareness of objective time thus depends upon our awareness of subjective time. We are aware of subjective time, however, as a unity across a succession of mental states because the consciousness of internal time provides a consciousness of succession that makes possible the apprehension and unification of successive mental states (PCIT No. 40; Sokolowski 2000).

Husserl’s contention that all experience presupposes (1) at first appears as an exhaustively subjective denial of time’s reality, particularly in light of the reduction. Moreover, since we believe that natural time precedes and will outlast our existence, we tend to consider (3) more fundamental than (1). As such, some may find Husserl’s privileging of (1) counterintuitive (Sokolowski 2000). Of course, such a passively received attitude or belief about time and our place therein amounts to cultural prejudice in favor of the scientific view of human beings as mere physical entities subject to the relentless march of time. A brief example may help us better understand Husserl’s objective and thus dispel these reservations: When listening to a fifty minute lecture (level 3), one may experience it as slow or as fast (level 2). Still, each listener’s consciousness has a structure (level 1) that makes it possible for her to apprehend (3) and (2). This structure in (1) functions in such a way that each listener can agree about the objective duration of the lecture while disagreeing about their subjective experience of it. If (1) changed subjectively as (2), then we never could reach a consensus or objective agreement about (3). For the phenomenologist, who seeks to give an account (logos) of the way things appear as temporal, the manifest phenomenon of time is not fundamentally worldly/objective or psychological/subjective time (Brough, 1991). Concerned with how temporal phenomena manifest themselves to conscious perceivers, the phenomenologist examines (1), namely the structures of intentional consciousness that make possible the disclosure of time as a worldly or psychological phenomenon. To begin to explain the priority of (1), Husserl highlights how the now and past are not a part of time considered according to the natural attitude view of (3) or (2).

b. Phenomenology, Experienced Time and Temporal Objects

It should be clear already that Husserl does not privilege the Newtonian view of time as a series of now, past and future moments considered as “things,” containers for “things,” or points on the imagined “time-line” (PCIT §§ 1-2, No. 51). Rather, he considers the present, past, and future as modes of appearing or modes by which we experience things and events as now, no longer (past) or not yet (future). For example, though I experience the event of the space shuttle Columbia’s explosion as past, the past is not some metaphysical container of which the Columbia shuttle tragedy is a part; the past is the mode in which the Columbia shuttle tragedy appears to me. This does not mean that Husserl views time as something that flows willy-nilly, or that the time of the Columbia shuttle tragedy is contemporaneous with the time of your reading this entry. Husserl acknowledges that “time is fixed and that time flows” (PCIT § 31, No. 51). When we count from one to ten, two always occurs after one and before three regardless of how far our counting progresses; likewise, the temporal event of the Columbia shuttle tragedy occupies an unchanging, determinate temporal position in world-time, “frozen” between what came before and after it, ever-receding into the past of world time (history) without losing its place. Phenomenology helps to clarify the common sense understanding of time as a container—a metaphysical placeholder—that contains events. This common sense understanding of time as a container persists because we forget that we first understand these fixed temporal relations and positions thanks to the modes of appearing, namely now, past and future (Brough, 1991).

As Husserlians put it, Husserl considers the now as conscious life’s absolute point of orientation from which things appearing as past and future alter (PCIT §§ 7, 14, 31, 33). Since the now and past are not a part of time but the modes by which things appear to me as temporal, each now that becomes past can accommodate many events simultaneously, e.g., one may remember where one was when the shuttle exploded, what anchor man one might have heard, what channel one was watching, who one was with, etc. (PCIT § 33; Brough 2005). The very fact that this experience becomes part of one’s conscious life implies that one experienced it in the now. Moreover, I can remember what events preceded and succeeded this tragedy, e.g., that my grade-school class filed into the auditorium or that my teacher sniffled as she led us back to our classroom. The very fact that one can place the event in relation to preceding and succeeding events implies both that one never experiences the now in isolation from the past and future and that one experiences the relation between now, past and future without collapsing these three modes of appearing (PCIT § 31).

These reflections on temporal objects and experienced time indicate that the flow of our conscious life is the condition for the possibility of the disclosure of temporal objects and experienced time, a condition that begins from the privileged standpoint of the now, which, again, nevertheless occurs in an interplay with past and future rather than in isolation from them. More than this descriptive account of some essential features of time’s appearance, however, Husserl’s phenomenology of time-consciousness concerns itself with the structure of the act of perceiving that allows us to apprehend a temporal object as unified across its manifold moments. Indeed, our preliminary reflections on time depend upon a series of successive events but a succession of experiences or perceptions is not yet an experience or perception of succession. Husserl turns his attention toward (1)—the transcendental level of internal time-consciousness—in order to explain how (2) and (3) become constituted conscious experiences.

c. Phenomenology Not to be Confused with Augustine’s Theory of Time

When we say that Husserl focuses his attention on (2) and (1), we mean that his writings on time-consciousness attempt to explain how time and experienced time appear to consciousness. This explanation begins, for Husserl, by confronting the paradox of how to account for the unity of a process of change that continues for an extended period of time, a unity that develops in succession, e.g., listening to a sentence or watching a film (PCIT No. 50). To unravel this theoretical knot, Husserl believed, philosophy must realize that, beyond the temporality of the object, the act of perceiving has its own temporal character (PCIT No. 32). Consider the phrase, “Peter Piper picked a peck of pickled peppers” at the word, “picked.” In this example, I hear “picked” yet somehow must hold onto “Peter” and “Piper” in just the order in which I originally apprehended them. Husserl contends that insofar as a temporal object such as a sentence occurs across time in a now that includes what is no longer, consciousness too must extend beyond the now; indeed, if all I heard were different words in each new now without connecting them to past related words, then I never would hear a sentence but only a barrage of sounding words. Consciousness not only must extend beyond the now, but it also must extend in such a way that it preserves the determinate temporal order of the words and modifies their orientation to the now. Indeed, if I preserved the words in a simultaneous or haphazard order, then I never would hear a sentence but only a jumble of words.

To account for the unity of succession in a way that avoids these difficulties, Husserl will not explain consciousness’ extension beyond the now in an act of perception by merely importing a view of Newtonian time into the mind or translating such a view of natural time into a transcendental condition of the mind. This was Kant’s dogmatic failure in the “Transcendental Aesthetic” of his Critique of Pure Reason (Crisis 104 ff.). Nor will Husserl’s account of the “perception” of a temporal object conclude, as Augustine’s did, that consciousness extends beyond the now thanks to its “present of things no longer” and a “present of things yet to come” that echoed Augustine’s description of the soul’s distention (PCIT § 1; Kelly 2005). Such an Augustinian account of “the present of things no longer” cannot explain the perception of a temporal object because it traps the heard contents in the now (as a present of things no longer remains present nevertheless). Augustine’s notion of a “present of things no longer” can explain consciousness’ extension beyond the now only as a result of a memorial recollection. But memory drags past nows—and the contents occurring therein—back into the present, thereby rendering past moments simultaneous with a present moment and effectively halting time’s flow. Any account of temporal awareness that explains consciousness’ extension beyond the now by recourse to memory conflates the acts of memory and perception and thus proves inadequate to explain the conscious perception of a temporal object. Memory gives not the perception of a temporal object but always only what it is capable of giving: a memory (PCIT No. 50; Brough, 1991).

With respect to this problem of conflating memory and perception, Husserl indicates two consequences. First, the distention of the now through memory leaves us with a situation where, as Husserl admits, at any given moment I perceive only the actually present word of the sentence; hence, the whole of the enduring sentence appears in an act that is predominantly memory and only marginally perception (PCIT § 12). Experience tells us, however, that we “perceive” (hear) the whole sentence across its present (now) and absent (past or future) words rather than hearing its present word and remembering (or expecting) the others (PCIT § 7). Indeed, something quite different occurs when I hear a sentence and when I remember the event of the Columbia shuttle tragedy. Second, having conflated the past and the present by making recourse to memory as a means to explain consciousness’ extension beyond the now, such a theory violates the law of non-contradiction, for the mode of the present cannot present something as past, but only as present, and vice versa (PCIT No. 14). In short, on such an Augustinian theory, everything remains ‘now’ and nothing can overcome that fact (Brough 1993; Kelly 2005).

The problem of the consciousness of time becomes properly phenomenological when Husserl asks how one explains the original consciousness of the past upon which one can recognize an object as past rather than remembering a past moment. Put differently, the problem of time becomes phenomenological when Husserl begins to seek an account of the generation of a sense or consciousness of pastness upon which (the) perception (of a temporal object) and memory depend. Indeed, to claim that we remember something presupposes the very sense of the past we are trying to explain (Sokolowski 2000). An adequate account of the perception of a temporal object first requires a discussion of how consciousness extends beyond the now, i.e., an account of the difference between the consciousness of succession and the remembrance of a succession of consciousnesses (PCIT No. 47; Brough 1972).

d. Phenomenology and the Consciousness of Internal Time: Living-Present

Unlike previous theories addressing the consciousness of time, Husserl shifts his attention from an account of what is perceived as temporal to an account of the temporality of that which does the perceiving. Put differently, he tightens his focus, so to speak, recognizing that when one perceives a temporal object one also experiences the flow of the intentional act of perception (Brough 1991). In order to solve the aforementioned paradox of how to account for the unity of a temporal object over the succession of its parts (e.g., the sentence across its many words), Husserl turns his attention to consciousness’ lived experience, to the structures of consciousness at level (1) that make possible the unification of the manifold moments of that act of perception at level (2) and the perceived object at level (3) (PCIT No. 41).

To explain how consciousness extends beyond the now in its act of perception, Husserl begins to think that consciousness itself must have a “width.” And this is just to say that consciousness must have a sense of the past and a sense of the future to begin with (Sokolowski 2000). To this end, Husserl attempts to argue that consciousness extends to capture past moments of experience and temporal objects therein by “retaining” and “protending” the elapsed and yet to come phases of its experience and thereby the past words that do not presently exist (when I reach a certain point in listening to a sentence) yet remain related to the present experience (PCIT, No. 54; Zahavi, 2000). Rather than attempt to explain the unity of a succession of discrete consciousnesses correlated with a succession of discrete moments in a temporal object, Husserl attempts to explain the consciousness of succession that makes possible the apprehension of a succession of consciousnesses.

Husserl thus speaks almost exclusively of consciousness’ living-present, and he characterizes this life of consciousness with three distinguishable yet inseparable moments: primal impression, retention, and protention. This tripartite form or intentional structure of the living-present should not be thought of as discrete, independently occurring pieces in a process (or procession). Such an atomistic view of the living-present’s structure will not work. Were the moments of the living-present thought of as such, we would have to remember or re-present each past state of consciousness. Rather than a knife-edged moment, Husserl describes the life of consciousness, the living-present, as extended like a comet’s tail or, to use the image William James preferred, a saddle-back: moments comprising an identity in a manifold (James) (PCIT § 10).

Consciousness is no longer a punctual box with several acts functioning in it simultaneously and directing themselves to the appropriate instances of the object. Admittedly, it is difficult to talk of this level of the consciousness of internal time, and Husserl himself claims we are reduced to metaphors (PCIT §§ 34-36). In a perhaps inadequate metaphor, Husserl’s theory of the living-present might be thought of as presenting a picture of consciousness as a “block” with relevant “compartments” distinguished by “filters” or “membranes,” each connected to and aware of the other. In this life of consciousness, Husserl maintains, consciousness apprehends itself and that which flows within it. As Husserl describes it, retention perceives the elapsed conscious phase of experience at level (1) and thereby the past of the experience at level (2) and the past of the object at level (3). The moments of retention and protention in the tripartite form of consciousness that is the living-present make possible consciousness’ extension beyond the now in such a way that avoids the problem of simultaneity and enables consciousness to attend determinately to the temporal phases of the object of perception. Unlike Augustine’s notion of a present of things no longer, which remembered or re-presented a past content in the now, Husserl draws a distinction between memory and retention. On the one hand, memory provides a “consciousness of the [instant] that has been” (PCIT § 12). On the other hand, retention “designates the intentional relation of phase of consciousness to phase of consciousness” (PCIT No. 50), i.e., a “consciousness of the past of the [experience]” (PCIT No. 47) and thereby the instant of the object that has been.

This distinction does not mean that memory differs from retention merely as a matter of temporal distance, the former reaching back further into time. Rather, Husserl draws a structural distinction between memory and retention: The former is an active, mediated, objectifying awareness of a past object, while the latter is a passive, immediate, non-objectifying, conscious awareness of the elapsed phase of conscious experience. First, memory reveals itself to be an act under the voluntary auspices of consciousness, whereas retention occurs passively. Second, while memories occur faster or slower and can be edited or reconstructed, retention occurs “automatically” and cannot be varied at one’s whim (though it can, at level 2, be experienced as faster or slower, as noted above in our example of listening to a lecture). Third, remembering re-produces a completed temporal object, whereas retention works at completing the consciousness of a temporal object, unifying its presence and absence. Fourth, as the representation of a new intentional object, memory is an act of presenting something as past, as absent, whereas the retention that attempts to account for the perception of an object over time constitutes an intuition of that which has just passed and is now in some sense absent, an act of presenting something as a unity in succession. Fifth, memory provides us with a new intentional object not now intuitively presented as the thing itself “in person”—e.g., remembering my friend’s face when she is absent from me in this moment—whereas retention accounts for the perception across time of an object now intuitively presented for me—e.g., the progressive clarity of my perception of my friend’s face as she approaches me from the street.
Sixth, despite memory’s character as a presenting act, when it represents to me my friend’s face it represents it in the now with a change in temporal index or a qualification of the remembered object as past, whereas retention holds on to that which is related to my present perception in a mode of absence (e.g., as when I hear “picked” while retaining “Peter Piper”). Seventh, memory depends upon or is “founded” upon retention as the condition of its very possibility, for memory could never represent an object as a completed whole if retention did not first play its role in constituting across time the object now remembered (PCIT No. 50; Zahavi; Brough 1991).

To explain time-consciousness at level (1), then, Husserl comes to favor the theory that consciousness of the past and future must be explained by the intentional direction of retention and protention to the past and future of consciousness’ lived experience rather than a mode of memorial apprehension that issues from the now to animate past impressions. Returning to our above example of listening to a sentence, when I hear “picked,” I do not remember “Peter Piper.” Rather, I intuitively perceive the sentence as temporally differentiated yet nonetheless related to the current experience. To be sure, the words do not occur simultaneously; each word passes and yet remains relevant to the presently lived experience. The interpreter of Husserl must take care at this point not to read the turn to consciousness as entailing a loss of the perceived; rather, what is retained is precisely the impressional moment as experienced in that moment and having been retained in this experience. In fact, this account allows that the words, “Peter Piper,” have passed, metaphysically, but remain on hand in this apprehension of “picked” thanks to consciousness’ retention of its past phase of experience wherein it heard the related words, “Peter Piper.” As a moment of the intentional relationship between the phases of consciousness’ living-present, retention “automatically” experiences its intuitively present conscious life and determinately provides a consciousness of the past of the experience.

Husserl’s account of the living-present ultimately articulates the condition for the possibility of all objectifying acts, a condition itself not objectified. As such, the discussion of retention brings us to the bottom line, the final and most difficult layer of intentional analysis, namely consciousness’ double-intentionality (PCIT No. 54).

e. The Living-Present’s Double-Intentionality

The living-present marks the essence of all manifestation, for in its automatic or passive self-givenness the living-present makes possible the apprehension of the elapsed phases of the life of consciousness and thereby the elapsed moments of the transcendent spatio-temporal object of which the conscious self is aware. This is possible, Husserl argues, because the “flow” (PCIT § 37) of conscious life enjoys two modes of simultaneously operative intentionality. One mode of intentionality, which he terms Längsintentionalität, or horizontal intentionality, runs along protention and retention in the flow of the living-present. The other mode of intentionality, which Husserl terms the Querintentionalität, or transverse intentionality, runs from the living-present to the object of which consciousness is aware (PCIT No. 45; Brough 1991).

Husserl explains the unity of these two intentional modes as a consciousness wherein the Querintentionalität is capable of intending a temporal object across its successive appearings because the Längsintentionalität provides consciousness’ self-awareness and awareness of its experiences over time. As an absolute flowing identity in a manifold—of primal impression, retention and protention—the stream of conscious life in the living-present constitutes the procession of words in the sentence that appears and is experienced sequentially in accordance with the temporally distinct position of each word. Husserl thus describes consciousness as having a “double-intentionality”: the Querintentionalität, which objectively and actively grasps the transcendent object—the heard sentence—and the Längsintentionalität, which non-objectively and automatically or passively grasps consciousness’ lived-experience—the flow of the living-present (PCIT No. 45). That I hear the words of the fifty-minute lecture and feel myself inspired or bored is possible only on the basis of my self-awareness or consciousness of internal time.

Though Husserl terms this consciousness that is the special form of horizontal intentionality in the living-present a “flow,” he employs the label “metaphorically” because the living-present’s flow manifests itself, paradoxically, as a non-temporal temporalizing (PCIT § 32, No. 54). That the living-present temporalizes means that it grasps its past and future as absent without reducing its past and future to the present, thus freezing consciousness’ temporal flow. To capture Husserl’s image of a non-temporal flow more aptly, some commentators prefer the image of shimmering (Sokolowski 1974). As Husserl himself admits that we have no words for this time-constituting phenomenon, the image of shimmering seems a more appropriate descriptor, for Husserl understands the living-present paradoxically as a standing-streaming (PCIT No. 54). Though non-temporal, Husserl assigns the living-present a time-constituting status, for this absolute consciousness makes possible the disclosure of temporal objects insofar as it makes possible the disclosure of consciousness’ temporality by accounting for our original sense of the past and of the future in the retentional and protentional dimension of the living-present (PCIT § 37).

Husserl must characterize the flow as non-temporal. If that which makes possible the awareness of a unity in succession itself occurred in succession, then we would need to account for the apprehension of the succession unique to the living-present, and so on and so forth, ad infinitum (PCIT No. 39, No. 50). An infinite regress of consciousness, however, would mean that we never would achieve an answer to the question of what makes possible the consciousness of time. In order to avoid an infinite regress, then, and in accordance with experience, which tells us that we do apprehend time and temporal objects, Husserl describes the living-present’s flow as a non-temporal temporalizing. This argument in favor of the non-temporal character of the living-present brings us to the two senses in which the special form of intentional consciousness is an absolute consciousness.

First, Husserl characterizes the living-present as absolute because a non-temporal consciousness that needs no other consciousness behind it to account for its self-apprehension is just that, absolute, the bottom line. Second, as the absolute bedrock of intentional analysis (Sokolowski 2000), the absolute flow as a mode of intentionality peculiar to the living-present conveys a move away from a model of awareness or intentionality dependent upon a subject’s relation to an object. If philosophy construes all awareness according to an object-intentionality model of awareness, i.e., the dyadic relation of a subject (knower) to an object (known), then it can never account for the relation between knower and known in the case of self-consciousness. For example, when I am writing this entry, I am conscious of the computer on which I am typing, as well as myself as the one typing. To explain, philosophically, however, how I apprehend myself as the one typing, the dyadic object-intentionality model of awareness will not suffice. The issue, of course, concerns self-awareness and thus philosophy’s standard understanding of self-identity over time.

In the classic treatment of self-consciousness, John Locke in his Essay Concerning Human Understanding accounts for self-identity over time thanks to consciousness’ reflective grasp on its past states. Locke establishes this account by distinguishing (i) simple ideas of sense directed toward (iia) objects from (i) simple ideas of reflection directed toward (iib) the self. In both cases, (i) knows (iia) and (iib) in the same manner insofar as (i) takes (iia) and (iib) as objects while (i) itself goes unnoticed or unaccounted for. Locke’s account thus turns the self or subject into an object without ever really presenting the self. Even if a simple idea of reflection directs itself toward the self, one self (the reflecting self) remains subject while the other self (the reflected self) becomes the object. In self-awareness, however, no difference, distance or separation exists between the knower and the known. Forced to apprehend itself as an object in an exercise of simple sense reflection, the Lockean subject never coincides with itself, caught as it is in a sequence of epistemic tail chasing (Locke, 1959 I; Zahavi, 1999). Such tail chasing, moreover, entails an infinite regress of selves themselves never self-aware. Locke’s failure stems from his restriction of intentionality to the model of object-awareness, the dyadic model of awareness, where all awareness requires a subject knowing an object.

Husserl’s account of how the unity of (1) this dynamic, shimmering living-present makes possible the consciousness of (2) psychological or subjective time and (3) worldly or objective time provides an alternative to the traditional account of awareness as merely an objectivating relation of a subject to object (Brough, 1991; Sokolowski, 1973; Zahavi, 1999). By retaining the elapsed phase of consciousness and thereby the past of the object, retention unifies consciousness’ flow and the time-span of the perceived temporal object, thus providing at once a non-objective self-awareness and an objective awareness of spatio-temporal entities.

Despite the heady accomplishments of Husserl’s theory of time-consciousness as founded in the living-present’s double-intentionality, contemporary phenomenologists still disagree about Husserl’s discovery. Some commentators, under the influence of Derrida’s critique of Husserl’s theory of the living-present (Derrida 1973), express reservations over the legitimacy of the status of the living-present as an absolute, non-temporal temporalizing, arguing that it amounts to a mythical construct (Evans, 1990). Yet decisive refutations of these criticisms, based on their insensitivity to the nuances of Husserl’s theory, are plentiful (Brough, 1993; Zahavi, 1999). Still, even those who accept its legitimacy disagree about how best to explain the relation between levels (1) and (2) of time-consciousness (see Zahavi, 1999; Brough 2002). Interestingly, the very complexities and details of Husserl’s theory of internal time-consciousness, which remain a central point of debate for contemporary phenomenologists, proved germane to phenomenology’s development and alteration throughout the twentieth century.

2. Heidegger on Phenomenology and Time

If the double-intentionality of Husserl’s theory of consciousness proves fruitful, it is because it allows us to give an account of the temporality of individual experiences (e.g., listening to a sentence) as well as the temporal ordering of a multiplicity of experiences (e.g., recognizing the classroom to which I return each week as the same room differentiated over a span of time) and all of these experiences as mine, as belonging to me. Husserl’s first follower, Martin Heidegger, took up the benefits of Husserl’s theory and developed them into his own unique brand of phenomenology. In fact, Heidegger developed his brand of phenomenology precisely in light of Husserl’s reflections on the intentionality unique to absolute time-constituting consciousness. As we shall see, Heidegger might put the point more forcefully, claiming that he developed his phenomenology in opposition to Husserl’s theory of absolute time-constituting consciousness. In any event, we can begin by identifying a fundamental difference between Husserl and Heidegger: Husserl emphasized the retentional side of the life of consciousness because he was interested in cognition, which builds up over time, while Heidegger emphasized the protentional or futural side of the subject because he is more interested in practical activity (the “in order to” or “for the sake of”).

According to Heidegger, the essence of absolute time-constituting consciousness amounted to a subject divorced and isolated from the world because Husserl construed absolute consciousness as a theory only about the a priori, presuppositionless and essential structures of consciousness that made possible the unified perception of an object occurring in successive moments. As an alternative to what he considered Husserl’s abstracted view of the human being, Heidegger suggests that philosophy cannot advance a proper understanding of the being of the human being by bracketing its and the world’s existence. Instead, we must understand the human being as being-in-the-world, Dasein, literally there-being; we can only understand what the world contributes to us and what we contribute to the world if we consider each as co-dependent without reducing one to the other. To put it differently, Husserl’s transcendental phenomenology provides an “upward” oriented approach while Heidegger’s ontological phenomenology provides a “downward” oriented approach, and their approaches stem from their different views of time (Macann 1991).

Heidegger maintains that Husserl’s phenomenology proves inadequate to the task of understanding Dasein’s relation to the world because Husserl fails to articulate adequately the relation between consciousness, or being, and time. Specifically, Husserl’s construction of the fundamental form of intentionality as absolute time-constituting consciousness remains, according to Heidegger, prisoner to the bias of pure presence. As Heidegger puts it, the bias of pure presence entails the reduction of “being” to the moment that “is” fully articulated in the conscious now at the expense of absence, i.e., what falls outside the conscious now: the moments of past and future. Such a view of consciousness, Heidegger insists, capitulates to the prejudice of presence because it implies that something can appear to consciousness only in the form of an object now given or before one in person and unified by consciousness across its manifold moments (BT, § 67c). At a general level of intentionality, Heidegger wants to correct Husserl’s overly cognitive assessment of the subject. For Heidegger, an intention or intentio literally conveys a sense of “stretching out” or “straining” (Heidegger 1925). For Heidegger, Dasein is being in the world, a being with goals and projects toward which it comports itself or toward which it stretches out. The projects toward which it stretches itself make Dasein fundamentally futural in its intentional directedness toward the world.

Having failed to investigate the practical comportment of the subject, Heidegger argues, Husserl's view of consciousness seems to reduce all awareness to awareness of an object in the present, thus reducing the past to the present and consciousness' self-awareness to an object among objects (Dahlstrom 1999). Together, these related consequences motivate Heidegger’s conclusion that Husserl fails to perform the phenomenological reduction completely. Or, better, Heidegger concluded that the performance of the reduction adulterates the view of the subject and thus should be abandoned. Heidegger’s version of phenomenology thus does not begin from a phenomenological reduction, although competing views of this matter exist (Crowell 1990; Blattner 1999).

As mentioned already, Heidegger’s very conception of Dasein as co-dependent with the world displays, he believes, his difference from Husserl’s view of the human being as absolute time-constituting consciousness. Put negatively and in terms of his History of the Concept of Time (1925), Heidegger criticizes Husserl for not considering fully the existence of the human being, bracketing its existence in favor of an analysis of the essential features of consciousness’ intentional structures (Heidegger 1925). Put positively and in terms of his Being and Time (1927), Heidegger claims that Dasein’s essence is its existence (BT § 9). Hence, one might claim, Heidegger introduces the movement of existential phenomenology, a development in phenomenology concerned with the very existence of the human being, which we have seen is termed Dasein by Heidegger.

Concern with Dasein’s existence as its essence does not necessarily reduce to the assumption that Heidegger takes existence in the sense of biological or genetic determinants. Though such factors may condition Dasein’s manner of existing, they do not determine it, according to Heidegger. Dasein is neither fully determined nor uninhibitedly free (BT 144). She exists in the mode of her possibilities and her possibilities are motivated by environmental influences, her skills and interests, etc. (Blattner, 1999). Dasein, for Heidegger, is thus a being concerned about her being, reckoning with the world through her activities and commitments. Centering his existential phenomenology on how the world appears to a being concerned about its being, Heidegger’s inquiry starts from how Dasein comports herself as manifest in the everyday activities of her life, activities to which she commits herself or about which she cares (BT § 7). Heideggerian phenomenology thus begins from an interest in how the world appears to a being that cares about its existence, an intentional being but one who, in intending the world, is primarily practical and secondarily contemplative. Less concerned with the Husserlian search for presuppositionless certainty and essential structures, Heidegger’s existential phenomenology amounts to an interpretive description or hermeneutics that attempts to express the unexpressed (or articulate the pre-predicative) mode of Dasein’s engagement with the world (BT § 7). And this manner of engagement finds its fullest expression in Heidegger’s account of Dasein’s temporality.

a. Heidegger and Dasein’s Temporality

The notion of Dasein’s projects proves crucial to understanding Heidegger’s analysis of Dasein’s temporality and its difference from Husserl’s phenomenology. In discussing Dasein’s projects, Heidegger takes the term etymologically; to pro-ject means to put out there or to put forward. That Dasein projects itself in the world implies something fundamental about it. Dasein finds itself thrown into a world historical circumstance and projects itself in that world. Born (thrown) into a time and culture not of one’s choosing, Dasein always already exists in the world and suffers some limitations from which she nevertheless may wiggle free thanks to her interests and concerns about the world and her existence therein. The way things matter to Dasein—how she finds herself affected, in Heidegger’s language—and her skills and interests constitute different possibilities for her, different ways of being-in-the-world. These possibilities, in turn, manifest themselves in Dasein’s projects, i.e., in how she puts herself forward or projects or comports herself. These conditions suggest to Heidegger that the essential mode of being in the world for Dasein is a temporal one. Of the three temporal dimensions characterizing Dasein, we may say: First, the fact that Dasein finds herself thrown into a world and characterized by certain dispositions, etc. implies a “pastness” to her being. Second, the fact she projects herself implies a “futurity” to her being. And, third, the fact that she finds herself busied with the world as she projects herself in an effort to fulfill the present tasks required by the goal that is her project implies a “presentness” to her being (Blattner 1999).

The fundamental characteristic of the being that cares about its being, Dasein, then, is temporality. But things are not as simple (or common-sense) as they seem thus far. Time resembles Dasein insofar as time projects itself or stands outside itself in its future and past without losing itself—time and Dasein thus appear ontologically similar, or similar in their ontological structure. Since the question concerns the being for whom its being is a concern, and since the fundamental structure of this being is its temporality, philosophy’s very attempt to understand Dasein fundamentally concerns the relation between being and time at a pre-predicative level of worldly-engagement, a level prior to articulated judgment, prior to the conscious conceptualizations of traditional metaphysics or Husserlian phenomenology; hence, the title of Heidegger’s famous work, Being and Time (Richardson 1967). In Heidegger’s terms, an “authentic” understanding of the being concerned about its being rests upon a proper understanding of that being’s temporality.

To understand Dasein, then, Heidegger first distinguishes originary or authentic time understood as Dasein’s way of being in the world from worldly- and ordinary-time understood inauthentically or uncritically by the common-sense, pre-philosophical mind (BT § 80). As the labels imply, Heidegger articulates a hierarchical structure between these levels of time, much like Husserl’s levels of time (Sokolowski 1974). The hierarchical structure envisioned by Heidegger looks like this: World-time grounds ordinary-time, and both in turn are grounded by originary-time.

To establish the fundamental feature of Dasein as originary temporality, Heidegger distances his view of Dasein’s temporality from all common sense understandings of time as a series of nows, thereby deferring the common sense understanding of past as no-longer-now and future as not-yet-now. His position depends on a distinction between how time shows itself to Dasein as world-time and ordinary-time, the latter being derivative of the former. World-time denotes the manner in which the world appears as significant to Dasein in its everyday reckoning with the world at a practical level through its projects. For example, the world appears to an academic with certain significances or importance. Objects like chalk, books, computers, and libraries all manifest themselves with a particular value, and time does, as well (just consider the fact that, for the academic, the new year begins in late August rather than on the first day of January). When I sit in my office, the approaching time of three in the afternoon does not appear merely as an indifferent hour on the clock. Rather, it appears to me as the time when, according to my project, I must head to class—just as it may appear to a postal worker as the time when she should return to the station from her route. For me, the time-span of my class does not merely appear as seventy-five successive minutes. Rather, the classroom time of my project appears to me as the time when I project myself toward my students, the material for the day’s discussion and the material equipment in the class that facilitates my teaching well. If my class begins to go poorly, however, I may become self-conscious about how well I meet the demands of my project as a teacher. When the focus of my attention shifts from my project to my failures, the time of my project ceases to be my primary focus. Perhaps in this case I shift my focus to the passing nows or seconds of each increasingly long minute.
If such a shift occurs, Heidegger might claim that I shift from the mode of world-time to the mode of ordinary-time, the time understood as a measurable succession of nows, seconds, minutes, etc.

This time that measures successive nows, Heidegger deems ordinary-time, which depends upon world-time. Heidegger distinguishes the two by pointing out that the significance which colors world-time goes missing in the view of ordinary time, and time appears no longer as the span of my project but the mere succession of punctual, atomistic nows (the Newtonian scientific view of time as an empty container or place holder). When the time-span of practical reckoning with the world ceases for Dasein, ordinary-time emerges (BT § 80; Blattner 1999). The above example does not quite get Heidegger exactly right, however, for in it I remain interested in human concerns (except that now I am worried about them). What the example does convey is the shift in understanding time from a mode of time as an extended reckoning with the world laden with significance to a mode of time considered as a purely abstract marching of moments, a view of time most accurately associated with the mathematical and scientific view of time (but not to the mathematician or scientist working with this view of time).

All of these distinctions between world- and ordinary-time are meant to elaborate Heidegger’s view that as a series of projects Dasein is no mere entity in the world but a temporal structure peculiar to its kind of being-in-the-world that makes manifest world- and ordinary-time. For Heidegger, the now denotes a mode of Dasein’s manner of being that discloses the appearance of the world to us, i.e., Dasein’s way of being-in-the-world. As a series of projects, Dasein in its originary temporality is characterized by a tripartite mode of transcendence or process (albeit a non-sequential process, since Heidegger has distanced himself from the ordinary view of time). First, as transcendence, as that which goes from itself and to which the world comes, Dasein has a futural moment. Second, as transcendence, as that which manifests itself non-objectively while reckoning with that which stands before it, Dasein has a present moment as the place wherein the world appears to, or manifests itself to, that which cares about it. And, third, as transcendence, as that to which the world comes, Dasein has a past moment because that which comes and manifests itself comes and manifests itself to one who always already is there (Heidegger 1927; Richardson 1967). As transcendence, as temporality, Heidegger describes Dasein as “ecstatic,” where ecstatic means to stand out (Sokolowski 2000). As the kind of being that is always outside itself without leaving itself behind, Dasein is a process of separating and consolidating itself (Sokolowski 1974). Outside of itself in the future, Dasein projects itself and reckons with that about which it cares; outside of itself in the present, Dasein makes manifest or present the appearance of that to which it goes out in its interest and according to its projects; outside of itself in the past, Dasein drags along that which it has been, its life, which, in turn, colors its present experiences and future projects.

This union of past, present and future as modes of originary-time in Dasein’s being-in-the-world renders Dasein authentic—one with itself or its own—because the projection into the future makes the present and the past part of Dasein’s project—its essence is its existence. However, insofar as I assume a project or life-orientation passively and without realizing myself as responsible for that project, argues Heidegger, I live inauthentically. And this is because I am engaged in the world without a full understanding of myself within the world. Put differently, rather than consciously make myself who I am through my choices, I passively assume a role within society—hence the temptation to label Heidegger an existentialist, a label that he himself rejected.

Many rhetorical differences exist between how Husserl and Heidegger execute the phenomenological method, particularly the phenomenology of temporality. Despite these differences, Heidegger begins his inquiry into Dasein’s temporality much like Husserl began his consideration of absolute, time-constituting consciousness. Just as Husserl established that neither the now nor the consciousness of the now is itself a part of time, Heidegger begins his account of Dasein’s originary temporality with the observation that neither the now nor Dasein is itself a part of time (BT § 62). As Heidegger puts it, as always already being-in-the-world, Dasein’s temporality is neither before nor after nor already in terms of the way common sense understands time as a sequence of discrete, empty nows (BT § 65). Hence, Heidegger translates Husserl’s account of the levels of time into an account of Dasein’s originary temporality. Moreover, Heidegger and Husserl seemingly end on the same note, for Husserl describes the living-present as a non-objectivating transcendence, an intentional being that transcends itself toward the world, and this description equally characterizes Heidegger’s more practically oriented discussion of Dasein’s originary-temporality. Like Husserl’s notion of the living-present, Heidegger’s theory of Dasein’s structure as originary temporality considers Dasein a mode of objectivating not itself objectified, the condition for the possibility of all awareness of objects at the levels of world- and ordinary-time (BT § 70).

Still, an important difference exists with respect to their phenomenologies of time and time-consciousness. First, despite the implicit levels of time, Heidegger employs the phenomenological reduction quite ambivalently and ambiguously. Second, Heidegger explicitly rejects the outcome of the phenomenological reduction as a privileged access to absolute time-constituting consciousness. Third, Heidegger quite unequivocally privileges the moment of the future in his account of Dasein’s originary temporality. By emphasizing Dasein’s being-in-the-world as manifest through its thrownness in the world, and its care for the world as manifest through its projects, Heidegger’s focus on Dasein’s futural character distinguishes his account from Husserl’s, for Husserl emphasized the moment of retention in the living-present almost to the exclusion of any remarks on protention, the anticipatory moment of the living-present. For these reasons, Heidegger considered his phenomenology radically different from Husserl’s. In particular, Heidegger thought Husserl’s overly cognitive account of how consciousness constitutes a unified temporal object across a succession of moments articulated only one of the many issues surrounding the temporality of Dasein, a merely scientific or cognitive account of how consciousness presents an object in the world to itself. Husserl’s restrictive phenomenology of time, Heidegger argues, overlooks the existential dimension of Dasein’s temporality, how Dasein reckons with the world at a tacit level rather than how it cognizes the world. And in particular, Heidegger thought philosophy could assess Dasein’s manner of reckoning with the world only by examining its futural moment as manifest in the projects that characterize Dasein’s mode of existence as the ongoing realization of its possibilities or construction of its essence.

3. Sartre and the Temporality of the “For-Itself”

Heidegger’s innovative contributions to the phenomenology of time did not go unnoticed by later phenomenologists. Both Sartre and Merleau-Ponty adopted Heidegger’s view of Dasein as being-in-the-world, an entity whose essence is its existence. The originality of Sartre’s phenomenology of time lies not in his reflections on time, which, as we shall see, return to some rather pedestrian claims. Rather, Sartre’s unique contribution to the phenomenology of time lies in his understanding of how consciousness, the “for-itself,” relates to the world, the “in-itself.” What Husserl, in his discussion of this fundamental mode of transcendence, labeled absolute time-constituting consciousness, and what Heidegger called Dasein, Sartre termed the “for-itself.” Given Husserl and Heidegger’s differing views of consciousness’ mode of intentionality and its fundamental self-transcending nature in its mode of temporality, Sartre’s theory presents an unlikely marriage of the two.

Fusing Heidegger’s view of being-in-the-world with what he considered was a greater fidelity to Husserl’s notion of intentionality, Sartre considered the being of the “for-itself” an ecstatic temporal structure characterized by a sheer transcendence or intentionality. In his earliest work, Transcendence of the Ego (1936), Sartre defines the “for-itself” by intentionality, i.e., the Husserlian claim that consciousness transcends itself (Sartre 1936). As self-transcending, Sartre further delimits the “for-itself” as a being-in-itself-in-the-world. The “for-itself” is a field of being always already engaged with the world, as Heidegger expressed Dasein as intentional and thrown. For Sartre, however, in its activity of engaging the world the “for-itself” reveals itself as nothing, a “no-thing,” or not-the-being-of-which-it-is-conscious. Sartre further qualifies the being of the “for-itself” that always already is engaged with the world as a non-positional consciousness (Sartre 1936). As a non-positional consciousness always already engaged with the world, Sartre contends, consciousness does not take a position on itself but on the world; hence, consciousness is non-positional. To evidence his point, Sartre maintains that I, when late for a meeting and running to catch the subway, do not primarily concern myself with myself but only have a consciousness of the subway to be caught (Sartre 1936). Rather than taking a position on myself as I pursue the subway, I implicitly carry myself along as I tarry explicitly with the world. For this reason, Sartre argues that absolute consciousness in Husserl’s sense of the living-present does not unify a temporal experience because the unity of consciousness itself is found in the object (Sartre 1936).

This Sartrean view that the experience unifies itself not only recalls Heidegger’s insistence that Dasein is a self-consolidating process, but also renders the notion of an absolute time-constituting consciousness superfluous, according to Sartre. Indeed, Sartre believed that a deep fidelity to Husserl’s theory of intentionality necessitated the abandonment of Husserl’s notion of absolute consciousness; hence, he dramatically declared that the Husserlian notion of an absolute consciousness would mean the death of consciousness (Sartre 1936). If one assumes, with Husserl, the notion of a living-present characterized by the moments of retention, primal impression and protention, Sartre argues, consciousness dies of asphyxiation, so to speak. A consciousness divided in this way, according to Sartre, amounts to a series of instantaneous and discrete moments that themselves require connection. Such an instantaneous series of consciousness amounts to a caricature of intentionality, in Sartre’s view, because this kind of consciousness cannot transcend itself; as Sartre expresses it, an internally divided consciousness will suffocate itself as it batters in vain against the window-panes of the present without shattering them (Sartre 1943).

Sartre’s critique of the living-present or absolute time-constituting consciousness seems rather questionable. Indeed, this image leaves one wondering whether or not Sartre derives this caricatured view of time-consciousness from a caricature of Husserl’s view of intentionality. Nevertheless, Sartre abandons Husserl’s notion of the tripartite structure of absolute time-constituting consciousness in favor of something like Heidegger’s notion of Dasein’s ecstatic temporality and its projects and possibilities. And yet Sartre’s adaptation of Heidegger’s notion of Dasein’s possibilities seems questionable as well. Recall that Dasein’s possibilities were not purely uninhibited, that Dasein did not simply choose its projects and possibilities from a position of total freedom because of its thrown condition and affective dispositions. Sartre’s theory of the “for-itself” seems to reject the kinds of limiting conditions entailed by Heidegger’s notion of thrownness. Indeed, Sartre’s melodramatic image of a consciousness with cabin fever implies that he cannot fully embrace any limiting factors on how the “for-itself” fashions its essence through its existence. For Sartre, the “for-itself” is radically free (Blattner 1999), and the result of Sartre’s reflections on the temporality of the “for-itself” is a rather pedestrian view of temporality.

Like Husserl and Heidegger, Sartre does not consider the past, present and future as moments of time considered as contents or containers for contents. Rather, each marks a mode in which the “for-itself” makes manifest itself and the world. But Sartre’s account neither surpasses nor achieves either the rigor of Husserl’s analyses or the descriptive quality of Heidegger’s. For Sartre, the past of the “for-itself” amounts to that which was but is no longer—similar to the view of the past itself, which Augustine rejected, as that which was but is no longer. By mirror opposite, the future of the “for-itself” amounts to that which it intends to be but is not yet—similar to the view of the future itself, which Augustine rejected, as that which will be but is not yet. And between the two, the present of the “for-itself” is that which it is not, for its being is characterized as being-not-the-thing-of-which-it-is-conscious—similar to the view of the present, which Augustine rejected, as the thin, ephemeral slice of the now.

4. Merleau-Ponty and the Phenomenology of Ambiguity: The Subject as Time

Whether in Husserl’s, Heidegger’s or Sartre’s account, for phenomenology we cannot separate the issue of time from the issue of subjectivity’s structure. And Merleau-Ponty’s discussion of temporality in Phenomenology of Perception (1945) is no exception. It is, however, the most exceptional case of the intertwining of these issues. Developing Heidegger’s notion of Dasein as being-in-the-world, Merleau-Ponty emphasizes the being of Dasein as its bodily comportment and declares the body an essentially intentional part of the subject. Since Merleau-Ponty wants to make the body itself intentional, it is no surprise that he intertwines time and the subject, (in)famously remarking that “we must understand time as the subject and the subject as time” (Merleau-Ponty 1945).

To situate Merleau-Ponty’s account in this trajectory of phenomenological theories of time, it is useful to bear in mind that his account amounts to an innovative synthesis of Husserl and Heidegger’s understandings of time. Though the same can and has been said of Sartre’s account, Merleau-Ponty’s synthesis of Husserl and Heidegger differs from Sartre’s on three important scores. First, Merleau-Ponty rejects the dualistic ontology of the “for-itself” and the “in-itself” that led Sartre to rashly criticize Husserl’s notion of absolute consciousness and superficially adopt Heidegger’s phenomenological account of Dasein’s temporality as manifest in its projects and possibilities. Second, Merleau-Ponty will not adopt Heidegger’s notion of Dasein’s temporality as an alternative to some purported shortcoming of Husserl’s account of the mode of intentionality unique to absolute time-constituting consciousness. Rather, third, more sensitive to the subtleties of Husserl’s theory of absolute time-constituting consciousness in the living-present than even Heidegger, Merleau-Ponty proposes to think the “unthought” of Husserl’s account of time through an intensified version of Heidegger’s account of the self’s inseparability from time.

From the outset, the “Temporality” chapter of his Phenomenology of Perception explicitly links time to the problem of subjectivity, noting that the analysis of time cannot follow a “pre-established conception of subjectivity” (Merleau-Ponty 1945). On the one hand, Merleau-Ponty rejects the traditional idealist conception of subjectivity in favor of an account of subjectivity in “its concrete structure;” on the other hand, since we must seek subjectivity “at the intersections of its dimensions,” which intersections concern “time itself and … its internal dialectic,” Merleau-Ponty rejects the realistic conception of subjectivity’s states as Nacheinander, i.e., successive, punctual, atomistic instants that lack intersection (Merleau-Ponty 1945). Hence, our understanding of Merleau-Ponty’s account of temporality and subjectivity’s temporality should follow the “triadic” structure of the Phenomenology: reject realism and idealism to demonstrate the merits of phenomenology (Sallis 1971).

The intellectualist account of time as (in) the subject fails because it extracts the subject from time and reduces time to consciousness’ quasi-eternity. The realist account of the subject as (in) time fails because it reduces the subject to a perpetually new present without unity to its flow. Both failures force upon the philosopher the realization that she can resolve the problem of time and subjectivity only by forfeiting the commitment to a “notion of time … as an object of our knowledge.” If we no longer can consider time “an object of our knowledge,” we must consider it a “dimension of our being” (Merleau-Ponty 1945). Hence, an account of subjectivity’s temporality—of time as a dimension of our being—necessarily entails the development of a model of bodily consciousness’ pre-reflective, non-objectifying awareness beyond the “pre-established conception of subjectivity” that takes time as an object of our knowledge.

This means not that (1) “time is for someone” but that (2) “time is someone” (Merleau-Ponty 1945). Phenomenologists and commentators alike often attribute (1) to Husserl and (2) to Heidegger. This should not surprise us given that Heidegger himself seemed to ascribe (2) to himself and his examination of Dasein’s lived-temporality in opposition to (1) Husserl’s account of how consciousness synthesizes an object across time. Often one of Husserl’s most sympathetic and accurate commentators (in Phenomenology of Perception, at least), Merleau-Ponty suggests that Husserl’s theory of absolute time-constituting consciousness in the living-present with its tripartite intentional structure provided an account of how (2) made time appear for reflection as (1). In short, Merleau-Ponty understood better than Heidegger that Husserl’s theory of the living-present articulated a theory of lived-time. What remained unthought by Husserl, according to Merleau-Ponty, was the inseparability of time and the subject in the theory of the living-present. Hence, an ambiguity intentionally pervades the account of time provided in Phenomenology of Perception.

This ambiguity at hand in Phenomenology of Perception stems from Merleau-Ponty’s honest admission that one never can fully execute the phenomenological reduction: “the most important lesson the reduction teaches us is the impossibility of a complete reduction” (Merleau-Ponty 1945). Merleau-Ponty does not advocate discarding the reduction, however, as Heidegger somewhat equivocally did. Rather, he aims to explain that Husserl merely meant the reduction as a critical device that ensured phenomenologists would retain the stance of presuppositionlessness, the stance of a perpetual beginner. The motivation for Merleau-Ponty’s reading of Husserl’s phenomenological reduction is the fact that philosophical reflection always depends upon a pre-reflective lived experience, a lived experience that always occurs in the temporal flux of bodily consciousness. Under the influence of Heidegger’s theory of Dasein’s being-in-the-world, Merleau-Ponty fashions his starting point in the exploration of time as an attempt to provide an account of the structures of pre-reflective consciousness that make reflection possible. And much like Heidegger, who sought to articulate the pre-predicative element of lived experience, Merleau-Ponty believed that these structures of pre-reflective consciousness reveal themselves as primarily temporal. (For his part, Merleau-Ponty will refer to this pre-reflective consciousness as the “tacit cogito,” his expression for the non-objectivating, pre-reflective consciousness articulated by the phenomenologists we have considered in this entry.) Hence, one could argue, despite the watershed reflections Merleau-Ponty provides on embodiment, time proves the most fundamental investigation of Phenomenology of Perception (Sallis 1971).

Since phenomenology’s task includes providing an account of the pre-reflective, lived experience that makes possible reflection, Merleau-Ponty turns to the structure of time as an exemplar of that which makes explicit the implicit. For Merleau-Ponty, time provides a model that sheds light on the structure of subjectivity because “temporal dimensions … bear each other out and ever confine themselves to making explicit what was implied in each, being collectively expressive of that one single explosion or thrust that is subjectivity itself” (Merleau-Ponty 1945). Since to make explicit that which is implied in each moment means to transcend, to go beyond, one could say that Merleau-Ponty’s paradoxical expression means that time and the subject share the same structure of transcendence. That time is the subject and the subject is time means that the subject exists in a world that always outstrips her yet remains a world lived through by the subject (Sallis 1971). To clarify this structure, Merleau-Ponty invokes “with Husserl the ‘passive synthesis’ of time,” for the passive and non-objectivating characteristic of time’s structure in (what Husserl called) the living-present marks the archetype of the self’s structure, its transcendence that makes possible self- and object-manifestation. The Husserlian notion of double-intentionality thus pervades Merleau-Ponty’s account (Merleau-Ponty 1945).

That the matter of a passive and non-objectivating synthesis takes Merleau-Ponty to a consideration of the structure of absolute time-constituting consciousness’ double-intentionality—its transcendence and self-manifestation—as the structure of time, we know to be the case for two reasons. First, Merleau-Ponty tells us, “in order to become explicitly what it is implicitly, that is, consciousness, [the self] needs to unfold itself into multiplicity;” second, in addition to the distinction just implied between non-objectivating and objectivating awareness, i.e., pre-reflective and reflective consciousness, Merleau-Ponty elaborates this manner of unfolding by claiming that “what we [mean] by passive synthesis [is] that we make our way into multiplicity, but that we do not synthesize it” as intellectualist accounts of time such as Augustine’s suggest. A synthesis of the multiplicity of time’s moments and the moments of the self must be avoided because it would require a constituting consciousness that stands outside time, and “we shall never manage to understand how a … constituting subject is able to posit or become aware of itself in time.” To avoid this error of separating consciousness from that of which it is aware, Merleau-Ponty appeals to Husserl’s theory of the living-present’s absolute flow, a “[consciousness that] is the very action of temporalization—of flux, as Husserl has it—a self anticipatory … flow which never leaves itself” (Merleau-Ponty 1945).

Merleau-Ponty seemingly provides an existential-phenomenological account of Husserl’s theory of absolute time-constituting consciousness’ double-intentionality. Nevertheless, he adopts Husserl’s theory according to his characteristic philosophy of ambiguity. Indeed, Merleau-Ponty insists that “it is of the essence of time to be not only actual time, or time which flows, but also time which is aware of itself … the archetype of the relationship of self to self” (Merleau-Ponty 1945). Ultimately with such remarks Merleau-Ponty was on the verge of bringing phenomenology toward a theory of ontology, which theory emerged in earnest in his later work, The Visible and the Invisible (1961). In that work, Merleau-Ponty expressly rejects his Phenomenology of Perception for having retained the Husserlian philosophy of consciousness. And this move from phenomenology to ontology manifests itself in some of his most provocative observations about time. To say that he moves from phenomenology to ontology is to say that he rejects any privileging of the subject or consciousness as constituting time either as a perceptual object or through a lived experience. As he puts it in the working notes of his The Visible and the Invisible, “it is indeed the past that adheres to the present and not the consciousness of the past that adheres to the consciousness of the present” (Merleau-Ponty 1961). Time now is characterized as an ontologically independent entity and not a construct disclosed by consciousness. It is the essence of time to be time that is aware of itself, to be sure. But this time is no longer an archetype of the self’s non-objectivating self-awareness. Rather, time constitutes the subject according to Merleau-Ponty, who puts to rest the phenomenological notion of absolute time-constituting consciousness, arguably Husserl’s most important discovery.

5. References and Further Reading

a. Primary Sources

  • Augustine, A. Confessions. Trans. F. J. Sheed. Indianapolis: Hackett Publishing Co, 1999.
  • Derrida, J. Speech and Phenomena. Trans. D. Allison. Evanston: Northwestern University Press, 1973.
  • Heidegger, M. Sein und Zeit. Tübingen: Max Niemeyer, 1986; Being and Time. Trans. J. Macquarrie and E. Robinson. New York: Harper and Row, 1962.
  • Heidegger, M. Gesamtausgabe Band 20: Prolegomena zur Geschichte des Zeitbegriffs. Frankfurt am Main: Vittorio Klostermann, 1979; The History of the Concept of Time. Trans. T. Kisiel. Bloomington: Indiana University Press, 1985.
  • Husserl, E. Zur Phänomenologie des inneren Zeitbewußtseins (1893-1917). Ed. R. Boehm. The Hague: Martinus Nijhoff, 1966; On the Phenomenology of the Consciousness of Internal Time (1893-1917). Trans. J. Brough. Dordrecht: Kluwer Academic Publishers, 1991.
  • Husserl, E. Analysen zur passiven Synthesis. Aus Vorlesungs- und Forschungsmanuskripten (1918-1926). Ed. M. Fleischer. The Hague: Martinus Nijhoff, 1966; Analyses Concerning Passive and Active Synthesis: Lectures on Transcendental Logic. Trans. A. Steinbock. Dordrecht: Kluwer Academic Publishers, 2001.
  • Husserl, E. Phantasie, Bildbewußtsein, Erinnerung. Ed. E. Marbach. Dordrecht: Kluwer Academic Publishers, 1980; Fantasy, Image-Consciousness, Memory. Trans. J. Brough. Dordrecht: Springer, 2005.
  • Husserl, E. Aktive Synthesen: Aus der Vorlesung ‘Transzendentale Logik’ 1920-21. Ergänzungsband zu ‘Analysen zur passiven Synthesis.’ Ed. R. Breeur. Dordrecht: Kluwer Academic Publishers, 2000; Analyses Concerning Passive and Active Synthesis: Lectures on Transcendental Logic. Trans. A. Steinbock. Dordrecht: Kluwer Academic Publishers, 2001.
  • Husserl, E. Die ‘Bernauer Manuskripte’ über das Zeitbewußtsein 1917/18. Ed. R. Bernet and D. Lohmar. Dordrecht: Kluwer Academic Publishers, 2001.
  • Locke, J. An Essay Concerning Human Understanding. New York: Oxford University Press, 1990.
  • Merleau-Ponty, M. Phenomenology of Perception. Trans. C. Smith. New York: Routledge & Kegan Paul Ltd, 1962.
  • Merleau-Ponty, M. The Visible and the Invisible. Trans. A. Lingis. Evanston: Northwestern University Press, 1969.
  • Sartre, J. P. Transcendence of the Ego. Trans. F. Williams and R. Kirkpatrick. New York: Farrar, Straus and Giroux, 1957.
  • Sartre, J. P. Being and Nothingness. Trans. H. Barnes. New York: Philosophical Library, 1956.

b. Secondary Sources

  • Blattner, W. Heidegger’s Temporal Idealism. New York: Cambridge University Press, 1999.
  • Brough, J. B. “The Emergence of Absolute Consciousness in Husserl’s Early Writings on Time-Consciousness.” Man and World (1972).
  • Brough, J. B. “Translator’s Introduction.” In E. Husserl, On the Phenomenology of the Consciousness of Internal Time (1893-1917). Trans. by J. Brough. Dordrecht: Kluwer Academic Publishers, 1991.
  • Brough, J. B. “Husserl and the Deconstruction of Time,” Review of Metaphysics 46 (March 1993): 503-536.
  • Brough, J. B. “Time and the One and the Many (In Husserl’s Bernauer Manuscripts on Time Consciousness),” Philosophy Today 46:5 (2002): 14-153.
  • Dahlstrom, D. “Heidegger’s Critique of Husserl.” In Reading Heidegger from the Start: Essays in His Earliest Thought. Edited by T. Kisiel and J. van Buren. Albany: State University of New York Press, 1994.
  • de Warren, N. The Promise of Time. New York: Cambridge University Press, forthcoming.
  • Evans, J. C. “The Myth of Absolute Consciousness.” In Crises in Continental Philosophy. Edited by A. Dallery, et al. Albany: State University of New York Press, 1990.
  • Held, K. Lebendige Gegenwart. The Hague: Martinus Nijhoff, 1966.
  • Kelly, M. “On the Mind’s ‘Pronouncement’ of Time: Aristotle, Augustine and Husserl on Time-consciousness.” Proceedings of the American Catholic Philosophical Association, 2005.
  • Macann, Christopher. Presence and Coincidence. Dordrecht: Kluwer Academic Publishers, 1991.
  • Richardson, W. Heidegger: Through Phenomenology to Thought. The Hague: Martinus Nijhoff, 1967.
  • Sallis, J. “Time, Subjectivity and the Phenomenology of Perception.” The Modern Schoolman XLVIII (May 1971): 343-357.
  • Sokolowski, R. Husserlian Meditations. Evanston: Northwestern University Press, 1974.
  • Sokolowski, R. Introduction to Phenomenology. New York: Cambridge University Press, 2000.
  • Wood, D. The Deconstruction of Time. Atlantic Highlands: Humanities Press International, 1989.
  • Zahavi, D. Self-awareness and Alterity: A Phenomenological Investigation. Evanston: Northwestern University Press, 1999.
  • Zahavi, D. Husserl’s Phenomenology. Palo Alto: Stanford University Press, 2003.

Author Information

Michael R. Kelly
Boston College
U. S. A.


Functionalism is a theory about the nature of mental states. According to functionalism, mental states are identified by what they do rather than by what they are made of. This can be understood by thinking about artifacts like mousetraps and keys. In particular, the original motivation for functionalism comes from the helpful comparison of minds with computers. But that is only an analogy. The main arguments for functionalism depend on showing that it is superior to its primary competitors: identity theory and behaviorism. Contrasted with behaviorism, functionalism retains the traditional idea that mental states are internal states of thinking creatures. Contrasted with identity theory, functionalism introduces the idea that mental states are multiply realized. Objectors to functionalism generally charge that it classifies too many things as having mental states, or at least more states than psychologists usually accept. The effectiveness of the arguments for and against functionalism depends in part on the particular variety in question, and whether it is a stronger or weaker version of the theory. This article explains the core ideas behind functionalism and surveys the primary arguments for and against functionalism.

Table of Contents

  1. Functionalism Introduced
  2. The Core Idea
  3. Being as Doing
  4. The Case for Functionalism
  5. Searle’s Chinese Room
  6. Zombies
  7. Stronger and Weaker Forms of Functionalism
  8. Conclusion
  9. References and Further Reading
    1. References
    2. Suggested Reading

1. Functionalism Introduced

Functionalism is a theory about the nature of mental states. According to functionalists, mental states are identified by what they do rather than by what they are made of. Functionalism is the most familiar or “received” view among philosophers of mind and cognitive science.

2. The Core Idea

Consider, for example, mouse traps. Mouse traps are devices for catching or killing mice. Mouse traps can be made of almost any material, and perhaps indefinitely or infinitely many designs could be employed. The most familiar sort involves a wooden platform and a metal strike bar that is driven by a coiled metal spring and can be released by a trigger. But there are mouse traps designed with adhesives, boxes, poisons, and so on. All that matters to something’s being a mouse trap, at the end of the day, is that it is capable of catching or killing mice.

Contrast mouse traps with diamonds. Diamonds are valued for their hardness, their optical properties, and their rarity in nature. But not every hard, transparent, white, rare crystal is a diamond—the most infamous alternative being cubic zirconia. Diamonds are carbon crystals with specific molecular lattice structures. Being a diamond is a matter of being a certain kind of physical stuff. (That cubic zirconia is not quite as clear or hard as diamonds explains something about why it is not equally valued. But even if it were equally hard and equally clear, a CZ crystal would not thereby be a diamond.)

These examples can be used to explain the core idea of functionalism. Functionalism is the theory that mental states are more like mouse traps than they are like diamonds. That is, what makes something a mental state is more a matter of what it does than what it is made of. This distinguishes functionalism from traditional mind-body dualism, such as that of René Descartes, according to which minds are made of a special kind of substance, the res cogitans (the thinking substance). It also distinguishes functionalism from contemporary monisms such as J. J. C. Smart’s mind-brain identity theory. The identity theory says that mental states are particular kinds of biological states—namely, states of brains—and so presumably have to be made of certain kinds of stuff, namely, brain stuff. Mental states, according to the identity theory, are more like diamonds than like mouse traps. Functionalism is also distinguished from B. F. Skinner’s behaviorism because it accepts the reality of internal mental states, rather than simply attributing psychological states to the whole organism. According to behaviorism, which mental states a creature has depends just on how it behaves (or is disposed to behave) in response to stimuli. In contrast, functionalists typically believe that internal and psychological states can be distinguished with a “finer grain” than behavior—that is, distinct internal or psychological states could result in the same behaviors. So functionalists think that it is what the internal states do that makes them mental states, not just what is done by the creature of which they are parts.

As it has thus far been explained, functionalism is a theory about the nature of mental states. As such, it is an ontological or metaphysical theory. And this is how it will be discussed, below. But it is also worthwhile to note that functionalism comes in other varieties as well. Functionalism could be a philosophical theory about psychological explanations (that psychological states are explained as functional states) or about psychological theories (that psychological theories take the form of functional theories.) Functionalism can also be employed as a theory of mental content, both as an account of the intentionality of mental states in general (what makes some states intentional is that they function in certain ways) or of particular semantic content (what makes some state have the content “tree” is that it plays a certain role vis-à-vis trees.) Finally, functionalism may be viewed as a methodological account of psychology, the theory that psychology should be pursued by studying how psychological systems operate. (For detailed discussion of these variations, see Polger, 2004, ch. 3.)

Often philosophers and cognitive scientists have subscribed to more than one of these versions of functionalism together. Sometimes it is thought that some require others, or at least that some entail others when combined with certain background assumptions. For example, if one believes, following Franz Brentano, that “intentionality is the mark of the mental,” then any theory of intentionality can be converted into a theory of the ontological nature of psychological states. If so, intentional functionalism may entail metaphysical functionalism.

All this being said, metaphysical functionalism is the central doctrine and probably the most widely endorsed. So in what follows the metaphysical variety will be the focus.

3. Being as Doing

Before looking at the arguments for and against functionalism, it is necessary to clarify the idea that, for mental states, being is doing.

Plausibly, a physical stuff kind such as diamond has a physical or structural essence: being a diamond is a matter of having a certain composition or constitution, quite independently of what diamonds do or can be used to do. It happens that diamonds can cut glass, but so can many other things that are not diamonds. And if no diamond ever did or could cut glass (perhaps Descartes’ evil demon assures that all glass is impenetrable), diamonds would not thereby cease to be diamonds.

But it is also plausible that not all kinds are constituted in this way. Some things may be essentially constituted by their relations to other things, and by what they can do. The most obvious examples are artifacts like mousetraps and keys. Being a key is not a matter of being a physical thing with a certain composition; it is a matter of being a thing that can be used to perform a certain action, namely, opening a lock. Lock is likewise not a physical stuff kind, but a kind that exists only in relation to (among other things) keys. There may be metal keys, wood keys, plastic keys, digital keys, or key-words. What makes something a key is not its material composition or lack thereof, but rather what it does, or could do, or is supposed to do. (Making sense of the claim that there is something that some kinds of things are supposed to do is one of the important challenges for functionalists.)

The activities that a key performs, could perform, or is supposed to perform may be called its functions. So one can say that keys are essentially things that have certain functions, i.e., they are functional entities. (Or: the kind key is a functional kind.)

The functionalist idea is, in some forms, quite ancient. One can find in Aristotle the idea that things have their functions or purposes—their telos—essentially. In contemporary theories applied to the mind, the functions in question are usually taken to be those that mediate between stimulus (and psychological) inputs and behavioral (and psychological) outputs. Hilary Putnam’s contribution was to model these functions using the contemporary idea of computing machines and programs, where the program of the machine fixes how it mediates between its inputs and standing states, on one hand, and outputs and other standing states, on the other. Modern computers demonstrate that quite complex processes can be implemented in finite devices working by basic mechanical principles. If minds are functional devices of this sort, then one can begin to understand how physical human bodies can produce the tremendous variety of actions and reactions that are associated with our full, rich mental lives. The best theory, Putnam hypothesized, is that mental states are functional states—that the kind mind is a functional kind.
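
The machine-table idea can be made concrete with a toy sketch (an illustration only, not a serious model of a mind; the state and input names are invented for the example). An abstract state here is defined solely by its place in a transition table, i.e., by how it mediates between inputs, outputs, and other states; any physical device that implements the same table thereby realizes the same states.

```python
# A toy machine table in the style of Putnam's machine functionalism.
# Each abstract state is defined solely by its functional role:
# which (output, next state) pair it yields for each input.
TABLE = {
    ("content", "poke"): ("grumble", "irritated"),
    ("content", "food"): ("smile", "content"),
    ("irritated", "poke"): ("yell", "irritated"),
    ("irritated", "food"): ("smile", "content"),
}

def run(state, inputs):
    """Run the machine from a start state; return the outputs produced."""
    outputs = []
    for stimulus in inputs:
        output, state = TABLE[(state, stimulus)]
        outputs.append(output)
    return outputs

# Nothing about the realizing medium matters: the table could be stored
# in a dict, hard-wired in silicon, or implemented in neurons. On the
# functionalist picture, to be in the state "irritated" just is to
# occupy this slot in the table.
print(run("content", ["poke", "poke", "food"]))
```

Running the machine from the "content" state on the inputs above yields `['grumble', 'yell', 'smile']`: the first poke produces a grumble and shifts the machine into "irritated", where a second poke produces a yell.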

The initial inspiration for functionalism comes from the useful analogy of minds with computing machines, as noted above. Putnam was certainly not the first to notice that this comparison could be theoretically fruitful. But in his “functionalist papers” of the 1950s and 1960s, he methodically explored the utility of the comparison and oversaw the transition of the idea from mere analogy to comprehensive theory, culminating with his classic defense of the functional state theory in his 1967 paper, “The Nature of Mental States.” There Putnam advanced the case for functionalism as a serious theoretical hypothesis, and his argument goes beyond the mere claim that it is fruitful to think of minds as being in many ways similar to machines. The argument aims to establish the conclusion that the best theory is the one that holds that minds “just are” machines of a certain sort.

4. The Case for Functionalism

Many arguments for functionalism depend on the actuality or possibility of systems that have mental states but that are either physically or behaviorally distinct from human beings. These arguments are mainly negative arguments that aim to show that the alternatives to functionalism are unacceptable. For example, behaviorists famously held that psychological states are not internal states at all, whether physical or psychical. But, the argument goes, it is easy to imagine two creatures that are behaviorally indistinguishable and that differ in their mental states. This line of reasoning is one of a family of “perfect actor” or “doppelgänger” arguments, which are common fare in philosophy of mind:

P1. If behaviorism is true, it is not possible for there to be a perfect actor or doppelgänger who behaves just like me but has different mental states or none at all.

P2. But it is possible for there to be a perfect actor or doppelgänger who behaves just like me but has different mental states or none at all.

P3. Therefore, behaviorism is not true. (by modus tollens)

In a well-known version of this argument, one imagines that there could be “Super-Spartans” who never exhibit pain behavior (such as flinching, saying “ouch”) or even any dispositions to produce pain behavior (Putnam 1963).

The most famous arguments for functionalism are responses not to behaviorism but to the mind-brain identity theory. According to the identity theory, “sensations are brain processes” (Smart 1959). If mental state kinds are (identical to) kinds of brain states, then there is a one-to-one relation between mental state kinds and brain state kinds. Everything that has sensation S must have brain state B, and everything that has brain state B must have sensation S. Not only that, but this one-to-one correlation must not be accidental. It must be a law of nature, at least, and perhaps must hold with an even stronger sort of necessity. Put this way, the mind-brain identity theory seems to make a very strong claim, indeed. As Hilary Putnam notes,

the physical-chemical state in question must be a possible state of a mammalian brain, a reptilian brain, a mollusc’s brain (octopuses are mollusca, and certainly feel pain), etc. At the same time, it must not be a possible (physically possible) state of the brain of any physically possible creature that cannot feel pain. Even if such a state can be found, it must be nomologically certain that it will also be a state of the brain of any extraterrestrial life that may be found that will be capable of feeling pain before we can even entertain the supposition that it may be pain. (Putnam 1967: 436)

The obvious implication is that the mind-brain identity theory is false. Other mammals, reptiles, and mollusks can experience pain, but they do not have brains like ours. It seems to follow that there is not a one-to-one relation between sensations and brain processes, but rather a one-to-many relation. Mental states, then, are not uniquely realized (as the identity theory requires); they are instead multiply realized.

And even if (by chance) it turns out that mammals, reptiles, and mollusks all have similar brains (so that in fact there is a one-to-one correlation), certainly one can recognize the possibility of discovering terrestrial or extraterrestrial creatures who experience pains but do not have brains like those of human beings. So it is surely not necessary that there be a one-to-one relation between mental state kinds and brain state kinds, but that is exactly what the identity theory requires. This is bad news for the identity theory, but it is good news for functionalism. For functionalism says that what makes something a mental state is what it does, and it is fully compatible with the diverse brains of mammals, reptiles, and mollusks that they all have mental states because their different brains do the same things, that is, they function in the same ways. Functionalism is supported because it is a theory of mind that is compatible with the likely degree of multiple realization of mental states.

Another pair of arguments for functionalism are what can be called the Optimistic and Pessimistic Arguments. The Optimistic Argument leans on the possibility of building artificial minds: even if no one ever discovers a creature that has mental states but differs from humans in its brain states, surely one could build such a thing. That is, the possibility of artificial intelligence seems to require the truth of something like functionalism. Functionalism views the mind very much as an engineer does: minds are mechanisms, and there is usually more than one way to build a mechanism. The Optimistic Argument, then, is a variation on the multiple realization argument discussed above; but this version does not depend on empirical facts about how our world actually is, as the multiple realization argument does.

The Pessimistic Argument claims that the alternatives to functionalism would leave people unable to know about and explain the mental states of one another, or of other creatures. After all, if two creatures function in the same ways, achieve the same results, have isomorphic internal states, etc., then what could justify the claim that one has mental states and the other does not? The identity theory says that the justification has to do with what kinds of stuff the creatures are made of—only the one with the right kind of brain counts as having mental states. But this flies in the face of our ordinary practices of understanding, attributing, and explaining mental states. If someone says, “I am in pain,” or “I believe that it is sunny outside,” one doesn’t have to cut the speaker open and find out whether they have a human brain in order to know that they have a pain or a belief. One knows that not merely because the speaker produces those noises (as the behaviorist might say), but because the speaker has internal states that function in certain ways. One can test this, as psychologists often do, by running experiments in a laboratory or, as ordinary people do, by asking questions and observing replies. That is, we can find out how the systems function. And if functionalism is correct, that is all we need to know in order to have knowledge of other minds. But if the identity theory is correct, then those methods are at best heuristics, and the observer may yet be wrong. One cannot know for certain that the speaker has pains or beliefs unless one knows what kind of brain the speaker has. Without knowing about brains, we can only infer that others have beliefs on the basis of the behavioral symptoms they exhibit, and we already know (see above, regarding behaviorism and Super-Spartans) that those can lead us astray.
But that is crazy, the argument goes: if one really believed it, then (given that in general one doesn’t know what kinds of brains other people have) nobody would be justified in believing anything about the beliefs of other people and creatures. And that is crazy.

The trouble with the Optimistic Argument is that it is question-begging. It assumes that one can create artificial thinking things without duplicating the kinds of brain states that human beings have, and that is just what the identity theory denies. The trouble with the Pessimistic Argument is that it seems to exploit a very high standard for knowledge of other minds—namely, infallibility or certainty. The objection gets its grip only if the requirement to infer facts about other minds does undermine the possibility of knowledge about those minds. But we regularly acquire knowledge by inference or induction, and there is no special reason to think that inferences about minds are more problematic than other inferences.

The multiple realization argument is much more nuanced. Its interpretation is a matter of some dispute. Although there has been increasing resistance to the argument lately, it remains the most influential reason for favoring functionalism over the alternatives. And even if the multiple realization argument is unsound, that result would only undermine one argument for functionalism and not the thesis itself.

The next two sections will consider two objections to functionalism that aim to show that the theory is untenable. Both objections assume that mental states are, as the functionalist insists, multiply realizable. The objections try to show that because of its commitment to multiple realization, functionalism must accept certain unpalatable consequences. The conclusion of each argument is that functionalism is false.

5. Searle’s Chinese Room

John Searle’s “Chinese Room Argument” is aimed at computational versions of functionalism, particularly those that specify the relevant functions in terms of inputs and outputs without fixing the internal organization of the processes. Searle stipulates that “Strong AI” is the thesis that an appropriately programmed computer literally has mental states, and that its program thereby constitutes an explanation of its mental states and (following the functionalist inspiration) of human mental states (1980). Searle then describes a scenario in which the system that carries out the program consists of some books and pieces of paper, a pencil, and Searle himself, all inside a room. People on the outside pass questions written in Chinese into the room. And Searle, by following the directions (the program) in the books, is able to produce answers to those questions. But Searle insists that he does not understand Chinese and has no beliefs about the questions and answers. After all, one may suppose with him, he doesn’t even recognize that they are questions and answers written in Chinese, or in any language at all for that matter. And he thinks it would be absurd to say that the room itself understands Chinese or has beliefs about the questions and answers. So, he concludes, the version of functionalism represented by Strong AI must be false. Having the right functions, at least when they are specified only by inputs and outputs, is not sufficient for having mental states.

Searle’s Chinese Room is a version of the “twin” or “doppelgänger” style objections to functionalism, in which some system is specified to be functionally isomorphic to a mental system, e.g., one that understands stories written in Chinese. Since functionalism holds that being is doing, two systems that do the same things (that is, that are functionally the same) should also be the same with respect to their mental states. But if Searle is correct, the system including the books and himself is functionally but not psychologically identical to a person who understands Chinese. And if so, this is incompatible with functionalism.

Searle considers a number of responses to his thought experiment, and offers his own replies. Probably the most serious response is that Searle begs the question when he asserts that the whole collection of stuff in the room including the books and himself, i.e., the whole system, does not understand. The “Systems Reply” holds that if functionalism is true then the whole system does understand Chinese, just as a Chinese speaker does even though it would be wrong to say that her brain or her tongue or some part of her understands Chinese by itself.

On the other hand, Searle’s example does dramatically illustrate a worry that has been expressed by others: even if there are many ways of being a thinking thing, it does not follow that anything goes. In the Chinese Room thought experiment, nothing is specified about the details of the instructions that Searle follows (the program). It is simply stipulated that the program produces the correct outputs appropriate to the inputs. But many philosophers think that it would undermine the claim that the room understands if, for example, the program turned out to be a giant look-up table, a prepared list of all possible questions with the corresponding appropriate answers (Block 1978). The giant look-up table seems like too “dumb” a way to implement the system to count as understanding. So it is not unreasonable to say that Searle has shown that input-output functionalism cannot be the whole story about mental states. Still, that is a much more modest conclusion than the one Searle aimed for.
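
Block’s worry can be made vivid with a small sketch (the dialogue data and function names are invented for illustration). Over the same finite stock of questions, a pure look-up table is input-output equivalent to a system that actually computes at least one of its answers, yet intuitively only the latter does anything that even resembles understanding.

```python
# A "giant look-up table": every question is paired in advance with a
# canned answer. No parsing, no inference -- just retrieval.
LOOKUP = {
    "What color is the sky?": "Blue.",
    "What is two plus two?": "Four.",
}

def lookup_room(question):
    """Answer purely by retrieval from the prepared table."""
    return LOOKUP.get(question, "I don't know.")

NUMBER_WORDS = {4: "Four."}

def computing_room(question):
    """Answer the arithmetic question by actually computing it."""
    if question == "What is two plus two?":
        return NUMBER_WORDS[2 + 2]  # derives the answer rather than storing it
    return LOOKUP.get(question, "I don't know.")

# Input-output equivalence over the listed questions: from the outside,
# the two "rooms" are indistinguishable, which is just Block's point
# against purely input-output specifications of function.
for q in LOOKUP:
    assert lookup_room(q) == computing_room(q)
```

The equivalence holds only over the prepared questions, of course; the philosophical point is that input-output behavior alone does not distinguish retrieval from computation, so a functionalism specified only by inputs and outputs cannot distinguish them either.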

6. Zombies

Searle’s Chinese Room objection focuses on contentful mental states like belief and understanding, what are generally called intentional states. But some philosophers conclude that functionalism is a good theory of intentional states but that it nevertheless fails because it cannot explain other sorts of mental states—in particular, they say that it cannot explain sensations and other conscious mental states.

Putting the point in terms of Searle’s Chinese Room: the whole system might, in some sense, understand Chinese or produce responses that are about the questions; but, in Thomas Nagel’s famous phrase, there is nothing that “it is like” to be the Chinese Room. The whole system does not enjoy what it is doing, it does not experience sensations or emotions, and it does not feel pains or pleasures. But Searle himself does have experiences and sensations—he is a conscious being. So, the reasoning goes, even if functionalism works for intentional states, it does not work for consciousness.

Early versions of this concern were discussed under the name “absent qualia.” But the current fashion is to cast the discussion in terms of twins or doppelgängers called zombies. (This terminology was introduced by Robert Kirk 1974, but has recently, for lack of a better expression, taken on a life of its own.) The general idea is that there might be two creatures that are physically or functionally identical but that differ in their mental states in a particularly dramatic way: one has normal conscious mental states, and the other has none at all. The second twin is the philosophical “zombie.”

The logical structure of the zombie argument is just the same as with the other twin and doppelgänger arguments, like the Super-Spartans discussed above:

P1*. If functionalism is true, it is not possible for me to have a zombie twin, i.e., a doppelgänger who functions just like me but has no mental states.

P2*. But it is possible for me to have a zombie twin.

P3*. Therefore, functionalism is not true. (by modus tollens)

There are several differences between the premises of the zombie argument and those of the earlier argument against behaviorism. First, while most versions of functionalism entail P1*, it is not obvious that all must. Fred Dretske, for example, endorses a version of functionalism that rejects P1* (1995). But more crucially, the justification for P2* is far less clear than that for P2. P2 makes a very weak claim, because mere behavior—movement, rather than what some philosophers would call action—is relatively easy to generate. This much has been commonplace among those who theorize about the mind at least as far back as Descartes’ familiarity with mechanical statues in European water gardens. P2* makes a potentially much stronger claim. It seems to suggest that the zombie could be not just behaviorally identical but also functionally identical in any arbitrary sense of function and in as much specificity as one might want. But this is quite controversial. In the most controversial form, one might suppose that “functional” identity could be arbitrarily fine-grained so as to include complete physical identity. In this variation, the twins would be physically identical creatures, one of whom has conscious mental states and the other of whom lacks consciousness altogether.

The challenge for the functionalist, as Ned Block has argued, is to find a notion of function and a corresponding version of functionalism that solve “the problem of inputs and outputs” (Block 1978). Functionalism must be specified in terms of functions (inputs and outputs) that are sufficiently general to allow for multiple realization of mental states, but sufficiently specific to avoid attributing mental states to just about everything. This is tricky. A version of functionalism that is too specific will rule out certain genuinely psychological systems, and thereby prove to be overly “chauvinistic.” A version of functionalism that is too general will attribute mental states to all sorts of things that one doesn’t ordinarily take to have them, and thereby prove to be overly “liberal.” Is there any non-arbitrary cut-off between liberalism and chauvinism? Is there any way to navigate between this Scylla and Charybdis? This is the big unanswered question for functionalists.

7. Stronger and Weaker Forms of Functionalism

At this point two clarifications are in order. These clarifications reveal some ways in which functionalism comes in stronger or weaker versions.

The first clarification pertains to the varieties of functionalism. As noted in Section 2, there are many versions of functionalism. Here the focus has been on metaphysical versions. But the variations described earlier (metaphysical, intentional, semantic, explanatory, methodological, and theoretical) represent only one dimension of the ways in which various functionalisms differ. Functionalist theories can also be distinguished according to which mental phenomena they are directed toward. The standard way of classifying mental states is as intentional (such as beliefs and desires) or conscious or qualitative (such as sensations and feelings.) Of course some philosophers and psychologists believe that all mental states turn out to be of one sort. (Most commonly they hold that all kinds of mental states are intentional states of one sort or another.) But that need not be a factor here, for the classification is only for expository purposes. Specifically, one can hold that functionalism is a theory of intentional states, of conscious states, or of both. The strongest claim would be that functionalism applies to all mental states. William Lycan (1987) seems to hold this view. Weaker versions of functionalism apply to only one sort of mental state or the other. For example, Jaegwon Kim (2005) appears to hold that something like functionalism applies to intentional states but not to qualitative states.

The second clarification pertains to the scope or completeness of a functionalist theory. Functionalism claims that the nature of mental states is determined by what they do, by how they function. So a belief that it is sunny, for example, might be constituted in part by its relations to certain other beliefs (such as that the sun is a star), desires (such as the desire to be on a beach), inputs (such as seeing the sun), and outputs (such as putting on sunglasses.) Now consider the other beliefs and desires (in the above example) that partially constitute the nature of the belief that it is sunny. In the strongest versions of functionalism, those beliefs and desires are themselves functional states, defined by their relations to inputs, outputs, and other mental states that are in turn functionally constituted; and so on. In this case, every mental state is completely or purely constituted by its relations to other things, without remainder. Nothing can exist as a mental state on its own, only in relation to the others. In contrast, weaker versions of functionalism could allow some mental states to be basic and non-functional. For example, one could hope to explain intentional states functionally while allowing conscious mental states to be basic. Then the belief that it is sunny might be constituted, in part, by its relations to certain sensations of warmth or yellowness, but those sensations might not themselves be functional states. Generally speaking, philosophers who do not specify otherwise are assuming that functionalism should be of the strong or pure variety. Impure or weak versions of functionalism—what Georges Rey calls “anchored” versions—do not succeed in explaining the mental in terms of purely non-mental ingredients. So whatever other value they might have, they fall short as metaphysical theories of the nature of mental states. Some would deny that weak theories should count as versions of functionalism at all.

8. Conclusion

There are many more variations among functionalist theories than can be discussed herein, but the above clarifications are sufficient to give a flavor of the various nuances. It is safe to say that in one version or another, functionalism remains the most widely accepted theory of the nature of mental states among contemporary theorists. Nevertheless, recently, perhaps in view of the difficulties of working out the details of functionalist theories, some philosophers have been inclined to offer supervenience theories of mental states as alternatives to functionalism. But as Jaegwon Kim correctly pointed out, supervenience merely allows us to pose the question about the nature of mental states; it is not an answer. The question is: why do mental states supervene on the physical states of the creatures that have them, or at least on the physical states of the world altogether? Functionalism provides one possible answer: mental states supervene on physical states because mental states are functional states, i.e., they are realized by physical states. Much remains to be said about such a theory, and to many philosophers the arguments for it do not seem as decisive in 2008 as they did in 1968. But there is no denying that it is an intriguing and potentially powerful theory.

9. References and Further Reading

a. References

  • Block, N. (ed.) 1980a. Readings in Philosophy of Psychology, Volume One. Cambridge, MA: Harvard University Press.
  • Block, N. (ed.) 1980b. Readings in Philosophy of Psychology, Volume Two. Cambridge, MA: Harvard University Press.
  • Block, N. and J. Fodor. 1972. What Psychological States Are Not. Philosophical Review 81: 159-181.
  • Chalmers, D. 1995. Facing up to the problem of consciousness. Journal of Consciousness Studies, 2, 3: 200-219.
  • Cummins, R. 1975. Functional analysis. The Journal of Philosophy LXXII, 20: 741-765.
  • Fodor, J. 1968. Psychological Explanation. New York: Random House.
  • Fodor, J. 1974. Special sciences, or the disunity of science as a working hypothesis. Synthese 28: 97-115. Reprinted in Block 1980a.
  • Kim, J. 2005. Physicalism, or Something Near Enough. Princeton: Princeton University Press.
  • Kirk, R. 1974. Zombies v. Materialists. Proceedings of the Aristotelian Society, 48: 135-152.
  • Lewis, D. 1970. How to Define Theoretical Terms. Journal of Philosophy 68: 203-211.
  • Lewis, D. 1972. Psychophysical and Theoretical Identifications. The Australasian Journal of Philosophy 50: 249-258.
  • Lewis, D. 1980. Mad Pain and Martian Pain. In Block (ed.) 1980b.
  • Lycan, W. 1981. Form, Function, and Feel. Journal of Philosophy 78: 24-50.
  • Lycan, W. 1987. Consciousness. Cambridge, MA: The MIT Press.
  • Millikan, R. 1989. In Defense of Proper Functions. Philosophy of Science 56: 288-302.
  • Polger, T. 2000. Zombies Explained. In Dennett’s Philosophy: A Comprehensive Assessment, D. Ross, A. Brook, and D. Thompson (Eds). Cambridge, MA: The MIT Press.
  • Putnam, H. 1960. Minds and Machines. In Hook (ed) Dimensions of Mind (New York: New York University Press). Reprinted in Putnam (1975c).
  • Putnam, H. 1963. Brains and Behavior. Analytical Philosophy, Second Series, ed. R. J. Butler (Oxford: Basil Blackwell): 211-235. Reprinted in Putnam (1975c).
  • Richardson, R. 1979. Functionalism and Reductionism. Philosophy of Science 46: 533-558.
  • Richardson, R. 1982. How not to reduce a functional psychology. Philosophy of Science, 49, 1: 125-137.
  • Searle, J. 1980. Minds, Brains, and Programs. The Behavioral and Brain Sciences 3, 3: 417-424.
  • Shapiro, L. 2000. Multiple Realizations, The Journal of Philosophy, 97, 635-654.
  • Shapiro, L. 2004. The Mind Incarnate, Cambridge, MA: The MIT Press.
  • Shoemaker, S. 1975. Functionalism and Qualia. Philosophical Studies 27: 291-315. Reprinted in Block (1980a).
  • Shoemaker, S. 1984. Identity, Cause, and Mind. New York: Cambridge University Press.
  • Smart, J. J. C. 1959. Sensations and Brain Processes. Philosophical Review, LXVIII: 141-156.
  • Sober, E. 1985. Panglossian Functionalism and the Philosophy of Mind. Synthese 64: 165-193.
  • Wright, L. 1973. Functions. Philosophical Review 82, 2: 139-168.

b. Suggested Reading

  • Block, N. 1978. Troubles with functionalism. C. W. Savage (ed.), Minnesota Studies in the Philosophy of Science, Vol. IX (Minneapolis, MN: University of Minnesota Press). Reprinted in Block (1980a).
  • Block, N. 1980c. Introduction: What is functionalism? In Block (1980b).
  • Kim, J. 1996. Philosophy of Mind. Boulder, CO: Westview.
  • Polger, T. 2004. Natural Minds. Cambridge, MA: The MIT Press.
  • Putnam, H. 1967. Psychological Predicates. Reprinted in Block (1980) and elsewhere as “The Nature of Mental States.”
  • Rey, G. 1997. Contemporary Philosophy of Mind. Boston: Blackwell Publishers.
  • Shoemaker, S. 1981. Some Varieties of Functionalism. Philosophical Topics 12, 1: 83-118. Reprinted in Shoemaker (1984).
  • Van Gulick, R. 1983. Functionalism as a Theory of Mind. Philosophy Research Archives: 185-204.

Author Information

Thomas W. Polger
University of Cincinnati
U. S. A.

Mind and Multiple Realizability

The claim that mental types are multiply realizable has played an important role in supporting antireductionism in philosophy of mind. The multiple-realizability thesis implies that mental types and physical types are correlated one-many, not one-one. A mental state such as pain might be correlated with one type of physical state in a human and another type of physical state in, say, a Martian or a pain-capable robot. This has often been taken to imply that mental types are not identical to physical types, since their identity would require each type of mental state to be correlated with only one type of physical state. The principal debate about multiple realizability in philosophy of mind concerns its compatibility or incompatibility with reductionism. On the assumption that reduction requires mental-physical type identities, the apparent multiple realizability of mental types, such as pain being both a type of human brain state and a type of robot state, has been understood to support antireductionism. More recent work has challenged this understanding.

The antireductionist argument depends on the following premises:

  1. Mental types are multiply realizable;
  2. If mental types are multiply realizable, then they are not identical to physical types;
  3. If mental types are not identical to physical types, then psychological discourse (vernacular or scientific) is not reducible to physical theory.

Among these claims, the most controversial has been Premise 1, the multiple-realizability thesis. Antireductionists have supported it both a priori by appeal to conceivability-possibility principles, and a posteriori by appeal to findings in biology, neuroscience, and artificial intelligence research. Reductionists have criticized these arguments, and they have also directly challenged the antireductionist premises.

Reductionist challenges to Premises 1 and 2 claim that antireductionists dubiously assume that psychophysical relations must be reckoned relative to our current mental and physical typologies. Contrary to this assumption, some reductionists argue that future scientific investigation will result in the formulation of new mental and/or physical typologies which fail to support the antireductionist premises. Typology-based arguments of this sort have been among the most important and most widely discussed reductionist responses to the multiple-realizability argument. Responses that target Premise 3 have been less popular. They argue either that psychophysical reduction can be carried out without identity statements linking mental and physical types, or else that ontological issues concerning the identity or nonidentity of mental and physical types are completely orthogonal to the issue of reduction.

The multiple-realizability thesis has also played an important role in recent discussions about nonreductive physicalism. The antireductionist argument has often been taken to recommend some type of nonreductive physicalism. Recently, however, Jaegwon Kim has effectively stood the argument on its head. He argues that physicalists who endorse multiple realizability are committed either to denying that mental types are genuine properties, ones that make a causal difference to their bearers, or else they are committed to endorsing some type of reductionism which identifies mental types with physical types.

Table of Contents

  1. Multiple Realizability and the Antireductionist Argument
    1. Multiple Realizability and Multiple Correlatability
    2. Identity Theory, Functionalism and the Realization Relation
    3. Defining Multiple Realizability
    4. Multiple Realizability and Mental-Physical Type Identities
    5. Type Identities and Psychophysical Reductionism
  2. Arguments for the Multiple-Realizability Thesis
    1. Conceptual Arguments for the MRT
    2. Empirical Arguments for the MRT
  3. Responses to the Antireductionist Argument
    1. Typology-Based Responses
      1. New Mental Typologies: The Local Reduction Move
      2. New Physical Typologies I
      3. New Physical Typologies II: The Disjunctive Move
        1. Law-Based Criticisms
        2. Metaphysical Criticisms
      4. Coordinate Typologies
    2. Reduction-Based Responses
  4. Multiple Realizability and Nonreductive Physicalism
  5. References and Further Reading

1. Multiple Realizability and the Antireductionist Argument

Multiple-realizability theses claim that it is possible for the tokens of a certain type to be realized by tokens of two or more distinct types. Multiple-realizability theses can be applied to a broad range of types: chemical, biological, social, mathematical. But what has been of primary interest in philosophy of mind is the purported multiple realizability of mental types. In what follows, the multiple-realizability thesis (MRT) will be understood as the claim that specifically mental types are multiply realizable.

Roughly, a type φ is multiply realizable if and only if it is possible for φ-tokens to be realized by tokens of two or more distinct types. If, for instance, it is possible for tokens of the mental type pain to be realized by tokens of the types c-fiber firing and q-fiber firing, where c-fiber firing ≠ q-fiber firing, then pain is a multiply-realizable mental type. Debate about the MRT in philosophy of mind has principally concerned its compatibility or incompatibility with reductionism. The MRT has been widely understood to have antireductionist implications. It seems to imply that mental types are not identical to physical types. If psychophysical reduction requires mental-physical type identities, then the MRT seems to imply that psychophysical reductionism is false.

The antireductionist argument is roughly as follows: Suppose a certain type of mental state – pain, say – is multiply realizable. We discover, for instance, that Alexander’s pains are intimately correlated in a way we label ‘realization’ with a certain type of physical occurrence, the firing of his c-fibers. We also discover, however, that Madeleine’s pains are realized not by c-fiber firing but by a distinct type of physical occurrence, q-fiber firing. Since c-fiber firing does not in any way involve q-fiber firing, and q-fiber firing does not in any way involve c-fiber firing, we conclude that pain can occur without c-fiber firing, and that it can also occur without q-fiber firing. We conclude, in other words, that neither c-fiber firing nor q-fiber firing is by itself necessary for the occurrence of pain. In that case, however, it seems that pain cannot be identical to either type of physical occurrence since identity implies necessary coextension. If having a mass of 1 kilogram is identical to having a mass of 2.2 pounds, then necessarily something has a mass of 1 kilogram if and only if it has a mass of 2.2 pounds. Likewise, if pain is identical to c-fiber firing, then necessarily anything that has pain will also have c-fiber firing; and if pain is identical to q-fiber firing, then necessarily anything that has pain will also have q-fiber firing. Madeleine, however, experiences pain without c-fiber firing, and Alexander experiences pain without q-fiber firing. Since pain is not correlated with a single physical type, it seems that pain cannot be identical to a physical type. Moreover, because the identity of type M and type P implies that necessarily every M-token is a P-token, we need not actually discover the correlation of pain with diverse physical types; the bare possibility of such correlations is sufficient for the argument to succeed. 
If the case of Alexander and Madeleine is even possible, it would follow that pain is not a physical type; and, says the argument, it seems intuitively certain or at least overwhelmingly probable that this type of situation is possible not only for pain, but for all mental types. Since psychophysical reductionism requires that mental types be identical to physical types, psychophysical reductionism must be false.

The foregoing line of reasoning has been extremely influential since 1970. It is largely responsible for the widespread, decades-long consensus that psychophysical reductionism must be false. The argument trades on the following premises:

  1. Mental types are multiply realizable;
  2. If mental types are multiply realizable, then they are not identical to physical types;
  3. If mental types are not identical to physical types, then psychological discourse (vernacular or scientific) is not reducible to physical theory.

These premises will be considered in order.

a. Multiple Realizability and Multiple Correlatability

The term ‘multiple realizability’ is often used as a label for any claim to the effect that mental and physical types are correlated one-many. Properly speaking, however, multiple realizability is tied to the notion of realization. Since the notion of realization is tied to a particular account of mental properties and psychological language it will be helpful to distinguish the multiple-realizability thesis from a more general multiple-correlatability thesis (MCT), a claim to the effect that φ-tokens might be correlated with tokens of more than one type.

The form of a bare multiple-correlatability argument against psychophysical identification is something like the following:

1. If mental type M = physical type P, then necessarily every M-token is a P-token and vice versa;

2. It is not necessarily the case that every M-token is a P-token and vice versa;

Therefore, mental type M ≠ physical type P.

Given reasonable assumptions, the first premise follows from Leibniz’s law: type-identity implies necessary token coextension. Premise 2 states the MCT: M-tokens and P-tokens needn’t be correlated one-one. An MCT does not specify whether M- and P-tokens are systematically related to each other or in what way. It is thus weaker than the MRT, which claims specifically that tokens of one type realize tokens of the other type.
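Written out in standard modal notation (with ‘□’ read as metaphysical necessity), the bare multiple-correlatability argument is a simple modus tollens:

```latex
\begin{gather*}
(M = P) \rightarrow \Box\,\forall x\,(Mx \leftrightarrow Px) \tag{P1}\\
\neg\,\Box\,\forall x\,(Mx \leftrightarrow Px) \tag{P2}\\
\therefore\; M \neq P
\end{gather*}
```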

One important observation here is that the MRT is not the only way of endorsing an MCT. Bealer (1994), for instance, defends an MCT in a way that does not appeal to realization at all. Moreover, even Putnam, who is often credited with having been the first to advance a multiple-realizability argument against psychophysical identity theory, appealed to a bare MCT as opposed to an MRT:

Consider what the brain-state theorist has to do to make good his claims. He has to specify a physical-chemical state such that any organism… is in pain if and only if (a) it possesses a brain of a suitable physical-chemical structure; and (b) its brain is in that physical-chemical state. This means that the physical-chemical state in question must be a possible state of a mammalian brain, a reptilian brain, a mollusc’s brain… etc. At the same time, it must not be a possible… state of the brain of any physically possible creature that cannot feel pain… [I]t is not altogether impossible that such a state will be found… [I]t is at least possible that parallel evolution, all over the universe, might always lead to one and the same physical “correlate” of pain. But this is certainly an ambitious hypothesis (Putnam 1967a: 436).

Putnam claims it is highly unlikely that pain is correlated with exactly one physico-chemical state. There is no mention of realization.

The notion of realization was introduced in connection with functionalism, the theory Putnam advanced as an alternative to the identity theory. According to functionalism mental types are not identical to physical types; they are instead realized by physical types. Putnam argued that functionalism was more plausible than the identity theory precisely because it was compatible with mental types being correlated one-many with physical types. Before discussing this point, however, it will be helpful to say a word about functionalism since the term ‘functionalism’ has been used to refer to theories of at least two different types: a type originally inspired by a computational model of psychological discourse and developed in a series of papers by Putnam (1960, 1964, 1967a, 1967b); and a type of identity theory endorsed by Lewis (1966, 1970, 1972, 1980) and independently by Armstrong (1968, 1970). Talk of realization has been used in connection with both.

b. Identity Theory, Functionalism and the Realization Relation

Early identity theorists claimed that psychological discourse was like theoretical discourse in the natural sciences. Mental states, they said, were entities postulated by a theory to explain the behavior of persons in something analogous to the way atoms, forces, and the like were entities postulated by a theory to explain motion and change generally (Sellars 1956: 181-87; 1962: 33-34; Putnam 1963: 330-331, 363; Feigl 1958: 440ff.; Fodor 1968a: 93; Churchland 1989: 2-6). The entities postulated by psychological discourse – beliefs, desires, pains, hopes, fears – were to be identified on the basis of empirical evidence with entities postulated by the natural sciences, most likely entities postulated by neuroscience. Originally, identity theorists supposed that theoretical identifications of this sort were a matter of choice. Empirical data would support correlations between mental and physical types such as ‘Whenever there is pain, there is c-fiber firing’, and scientists would then choose to identify the correlated types on grounds of parsimony. Identifying pain with c-fiber firing would yield a more elegant theory than merely correlating the two, and it would avoid the potentially embarrassing task of having to explain why pain and c-fiber firing would be correlated one-one if they were in fact distinct (Smart 1962). Lewis (1966) criticized this model of theoretical identification, and advanced an alternative which was also endorsed independently by Armstrong (1968, 1970).

According to the Lewis-Armstrong alternative, theoretical identifications are not chosen on grounds of parsimony, but are actually implied by the logic of scientific investigation. In our ordinary, pre-scientific dealings we often introduce terms to refer to things which we identify on the basis of their typical environmental causes and typical behavioral effects. We introduce the term ‘pain’, for instance, to refer to the type of occurrence, whatever it happens to be, that is typically caused by pinpricks, burns, and abrasions, and that typically causes winces, groans, screams, and similar behavior. That type of occurrence then becomes a target for further scientific investigation which aims to discover what it is in fact. Pain is thus identified by definition with the type of occurrence that has such-and-such typical causes and effects, and that type of occurrence is then identified by scientific investigation with c-fiber firing. Pain is thus identified with c-fiber firing by the transitivity of identity. Call this sort of view the Lewis-Armstrong identity theory.

By contrast with the Lewis-Armstrong identity theory, functionalism claims that psychological states are postulates of abstract descriptions which deploy categories analogous to those used in computer science or information-theoretic models of cognitive functioning. Functionalists agree with identity theorists that psychological discourse constitutes a theory, but they disagree about what type of theory it is. Psychological discourse is not like a natural scientific theory, functionalists claim, but like an abstract one. The mental states it postulates are analogous to, say, the angles and lines postulated by Euclidean geometry. We arrive at Euclidean principles by abstraction, a process in which we focus on a narrow range of properties and then construct “idealized” descriptions of them. We focus, for instance, on the spatial properties of the objects around us. We ignore what they are made of, what colors they have, how much they weigh, and the like, and focus simply on their dimensions. We then idealize our descriptions of them: slightly crooked lines, for instance, we describe as straight; deviant curves we describe as normal, and so on. According to functionalists, something analogous is true of psychological discourse. It provides abstract descriptions of real-world systems, descriptions which ignore the physical details of those systems (the sorts of details described by the natural sciences), and focus simply on a narrow profile of their features. Originally Putnam suggested that those features were analogous to the features postulated by Turing machines.

A Turing machine is an abstract description which postulates a set of states related to each other and to various inputs and outputs in certain determinate ways described by a machine table. A certain machine table might postulate states S1,…,Sn, inputs I1,…,Im, and outputs O1,…,Op, for instance, which are related in ways expressed by a set of statements or instructions such as the following:

If the system is in state S13 and receives input I7, then the system will produce output O32, and enter state S3.

According to Putnam’s original proposal, which has come to be called machine functionalism, psychological descriptions are abstract descriptions of this sort. They postulate relations among sensory inputs, motor outputs, and internal mental states. The only significant difference between Turing machine descriptions and psychological descriptions, Putnam (1967a) suggested, was that psychological inputs, outputs, and internal states were related to each other probabilistically, not deterministically. If, for instance, Eleanor believes there are exactly eight planets in our solar system, and she receives the auditory input, “Do you believe there are exactly eight planets in our solar system?”, then she will produce the verbal output, “Yes,” not with a deterministic probability of 1, but with a probability between 0 and 1.
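The probabilistic machine-table idea can be sketched in a few lines of code. The table below is a hypothetical fragment, not Putnam's own: a (state, input) pair is mapped to a distribution over (output, next-state) pairs, and a step samples from that distribution.

```python
import random

# Hypothetical probabilistic machine table. The state, input, and
# output names are illustrative placeholders.
# (state, input) -> list of ((output, next_state), probability) pairs.
TABLE = {
    ("S13", "I7"): [(("O32", "S3"), 0.9), (("O1", "S13"), 0.1)],
}

def step(state, inp, rng=random):
    """Sample an (output, next_state) pair with the table's probabilities."""
    transitions = TABLE[(state, inp)]
    r = rng.random()
    cumulative = 0.0
    for (output, nxt), p in transitions:
        cumulative += p
        if r < cumulative:
            return output, nxt
    return transitions[-1][0]  # guard against floating-point rounding

output, nxt = step("S13", "I7")
assert (output, nxt) in {("O32", "S3"), ("O1", "S13")}
```

A deterministic Turing machine is the special case in which each table entry assigns probability 1 to a single (output, next-state) pair.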

Functionalists need not endorse a Turing machine model of psychological discourse; they could instead understand psychological discourse by appeal to models in, say, cognitive psychology; but in general, they make two claims. First, psychological discourse is abstract discourse which postulates an inventory of objects, properties, states or the like which are related to each other in ways expressed by the theory’s principles. Second, the behavior of certain concrete systems maps onto the objects, properties, or states that psychological discourse postulates. The notion of realization concerns this second claim.

Let T be a theory describing various relations among its postulates, S1,…,Sn. The relations among the concrete states of a certain concrete system might be in some way isomorphic to the relations among S1,…,Sn. If T says that state S1 results in state S2 with a probability of .73 given state S15, it might turn out that, for instance, Alexander’s brain state B5 results in brain state B67 with a probability of .73 given neural stimulus B4. It might turn out, in other words, that states B5, B67, and B4 in Alexander’s brain provide a model of the relations among S1, S2, and S15 in T. If this were true for all of Alexander’s brain states, one might say that T described a certain type of functional organization, an organization which was realized by Alexander’s brain, and one might call Alexander’s brain a realization of T. The states of Alexander’s brain are related to each other in ways that are isomorphic with the ways in which S1,…,Sn are related according to T. In fact, concrete systems in general might be said to realize the states postulated by abstract descriptions. A wooden table realizes a Euclidean rectangle; the movements of electrons through the silicon circuitry of a pocket calculator realize a certain algorithm; the movements of ions through the neural circuitry of Alexander’s brain realize a belief that 2 + 2 = 4, and so forth.
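The isomorphism requirement can be stated as a small check: a mapping of concrete states onto theory states realizes T just in case it carries every concrete transition onto a transition T postulates, with the same probability. The brain-state labels and the single transition below are the illustrative ones from the text; the dictionaries are hypothetical stand-ins for real theories.

```python
# T postulates: S1 results in S2 with probability .73 given S15.
THEORY = {("S1", "S15"): ("S2", 0.73)}

# Observed (hypothetical) transitions among Alexander's brain states.
BRAIN = {("B5", "B4"): ("B67", 0.73)}

# Candidate realization mapping: concrete state -> theory state.
MAPPING = {"B5": "S1", "B4": "S15", "B67": "S2"}

def realizes(concrete, theory, mapping):
    """True if every concrete transition maps onto a theory transition
    with the same resulting state and the same probability."""
    for (state, condition), (result, prob) in concrete.items():
        key = (mapping[state], mapping[condition])
        if key not in theory:
            return False
        expected_result, expected_prob = theory[key]
        if mapping[result] != expected_result or prob != expected_prob:
            return False
    return True

assert realizes(BRAIN, THEORY, MAPPING)
```

A mapping that sends B67 to the wrong theory state fails the check, reflecting the fact that realization demands structure preservation, not mere pairing of states.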

Realization, then, is a relation between certain types of abstract descriptions, on the one hand, and concrete systems whose states are in a relevant sense isomorphic with those postulated by abstract descriptions, on the other. Philosophers of mind have offered several different accounts of this relation. Putnam (1970: 313-315) suggested a type of account which has proved very influential. Realization, he said, can be understood as a relation between higher-order and lower-order types (he used the term ‘properties’) or tokens of such types. Higher-order types are ones whose definitions quantify over other types. Second-order types, for instance, are types whose definitions quantify over first-order types, and first-order types are types whose definitions quantify over no types. Effectively what Putnam suggested is that having mental states amounted to having some set of (first-order) internal states related to each other in ways that collectively satisfied a certain functional description. Being in pain, for instance, might be defined as being in some concrete first-order state S1 which results in a concrete first-order state S2 with a probability of .73 given a concrete state S15. In other words, the various Si postulated by theory T can be understood as variables ranging over concrete first-order state types such as brain state types. To say, then, that Alexander’s brain is currently realizing a state of pain is just to say that the triple < B5, B67, B4 > of concrete first-order states of his brain satisfies the definition of being in pain, a definition which quantifies over concrete first-order states of some sort.
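Putnam's higher-order proposal can also be sketched: on this reading, being in pain does not name any particular first-order state but says that there exist first-order states playing the right role. In the sketch below the role, the probability, and the brain-state labels are illustrative placeholders from the text, not a serious analysis of pain.

```python
from itertools import permutations

# Hypothetical first-order transitions:
# (state, condition) -> (resulting state, probability).
TRANSITIONS = {("B5", "B4"): ("B67", 0.73)}

def satisfies_pain_role(transitions, triple):
    """The illustrative pain role: the first state results in the
    second with probability .73 given the third."""
    s1, s2, s15 = triple
    return transitions.get((s1, s15)) == (s2, 0.73)

def in_pain(transitions, states):
    """Second-order definition: SOME triple of first-order states
    satisfies the role; no particular physical state is named."""
    return any(satisfies_pain_role(transitions, t)
               for t in permutations(states, 3))

# The triple <B5, B67, B4> satisfies the definition of being in pain.
assert in_pain(TRANSITIONS, {"B5", "B67", "B4"})
```

The existential quantifier hidden in `any(...)` is what makes the definition second-order: it quantifies over first-order state types rather than mentioning one.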

The concept of realization is understood slightly differently in connection with the Lewis-Armstrong identity theory. That difference reflects the more general difference between the identity theory and functionalism. Functionalism takes mental states to be states postulated by an abstract description, whereas the Lewis-Armstrong identity theory takes mental states to be concrete physical states which have been described in terms of an abstract vocabulary. To help illustrate this difference consider a very rough analogy with a Platonic versus Aristotelian understanding of geometrical objects. The Platonist claims that ‘rectangle’ refers to an abstract object postulated and/or described by Euclidean geometry. The Aristotelian, by contrast, claims that ‘rectangle’ is a way of referring to various concrete objects in terms of their dimensions. There is a roughly analogous sense in which the functionalist claims that ‘pain’ expresses a type of abstract state whereas the Lewis-Armstrong identity theorist claims that ‘pain’ expresses a concrete type of physical state such as c-fiber firing. According to the identity theorist ‘pain’ refers to a physical state by appeal to a narrow profile of that state’s properties such as its typical causes and effects. According to the Lewis-Armstrong identity theory, then, what a theory such as T provides is not an inventory of abstract states, but an apparatus for referring to certain physical ones. On the Lewis-Armstrong theory those physical states, the ones expressed by the predicates and terms of T, provide a realization of T.

Because the multiple-realizability argument for antireductionism principally concerns the functionalist notion of realization, the term 'realize' and its cognates should be taken to express that notion in what follows.

c. Defining Multiple Realizability

Let us consider again the rough definition of multiple realizability stated earlier: a type φ is multiply realizable if and only if it is possible for φ-tokens to be realized by tokens of two or more distinct types. To make this more precise it will be helpful to draw some distinctions.

First, Shoemaker (1981) distinguishes what he calls a state’s core realizer from what he calls its total realizer. Consider again the theory T and Alexander’s brain. If B5 is the type of brain state which corresponds to S1 in T, then B5-tokens are core realizers of S1-tokens in Alexander’s brain. The total realizer of an S1-token, on the other hand, includes tokens of B5 together with tokens of the other types of states in Alexander’s brain whose relations to one another are collectively isomorphic with the relations among S1,…,Sn, expressed in T. The MRT has typically been understood to be a claim about core realizers.

Second, it is helpful to clarify ambiguities in the scope of the modal operator. The foregoing definition of multiple realizability is unclear, for instance, about whether or not φ-tokens must be realized by tokens of more than one type in the same world, or whether it is sufficient that φ-tokens be realized by tokens of more than one type in different worlds. Similarly, it is unclear about which worlds are relevant: nomologically possible worlds? metaphysically possible worlds? The following definition clears up these ambiguities:

[Def] A type M is multiply realizable iffdf (i) possiblyM, P-tokens are core realizers of M-tokens, and (ii) possiblyM, Q-tokens are core realizers of M-tokens, and (iii) P ≠ Q.

Here, ‘possiblyM’ designates metaphysical possibility. (The subscript ‘M’ will be used henceforth to indicate that a modal operator covers metaphysically possible worlds.) Metaphysical possibility is all that is needed for the multiple-realizability argument to proceed. If M were identical to P, then it would not be possible for M-tokens to exist without P-tokens (or vice versa) in any possible world, irrespective of other factors such as the laws of nature obtaining at those worlds.

Consider again the original example concerning pain. According to the foregoing definition of multiple realizability, pain is multiply realizable if and only if there is a metaphysically possible world in which tokens of, say, c-fiber firing are core realizers of pain-tokens, and there is a metaphysically possible world in which tokens of a different type – say, q-fiber firing – are core realizers of pain-tokens. Hence, if token c-fiber firings are core realizers of Alexander’s pain-tokens in world w1, and token q-fiber firings are core realizers of Madeleine’s pain-tokens in world w2, then pain is a multiply-realizable mental type. Moreover, if w1 and w2 are identical with the actual world, then we can say not only that pain is multiply realizable, but that pain is also multiply realized.

d. Multiple Realizability and Mental-Physical Type Identities

As mentioned earlier, the MRT is one way of endorsing an MCT. The second premise of the antireductionist argument reflects this idea. It claims that if mental types are multiply realizable, then they are not identical to physical types. The argument for this premise trades on the following claim:

P1. Necessarily, for mental type M and physical type P, if M is multiply realizable, then it is not necessarilyM the case that every M-token is a P-token and vice versa.

The antecedent of this conditional expresses the MRT, and the consequent expresses an MCT.

Claim P1 is supported by an additional assumption: mental types are not necessarilyM corealized. If, for instance, a Q-token realizes an M-token, then the M-token needn’t be realized by some other token in addition. Hence, to show that M-tokens and P-tokens needn’t be correlated one-one, it is sufficient to show that it is possible to have an M-token without having a P-token. Suppose, then, that in world w there is a Q-token that realizes an M-token. In order for it to follow from this that M-tokens couldM occur without P-tokens, we need to assume that, say, a Q-token doesn’t itself require a P-token – that a Q-token could realize an M-token on its own. We might call this assumption Corealizer Contingency: mental types don’t needM to be corealized. Corealizer Contingency implies that it is possibleM for an M-token to be realized by, say, a Q-token alone, and hence it is possibleM that there might be an M-token without there being a P-token. The conclusion that M is not identical to P if M is multiply realizable now follows from the following premise:

P2. If type M = type P, then necessarilyM every M-token is a P-token and vice versa.

According to P2 the identity of M- and P-types requires the necessaryM coextension of M- and P-tokens. By the foregoing argument, however, if M is multiply realizable it is not necessarilyM the case that there is an M-token if and only if there is a P-token. Hence, it follows that if M is multiply realizable, it is not identical to P.

Now for some terminology. For types φ and ψ, call φ one of ψ's realizing types just in case possiblyM a φ-token realizes a ψ-token. In that case, one can say that the argument based on P1 and P2 purports to show that if M is multiply realizable, M is not identical to any of its realizing types.

e. Type Identities and Psychophysical Reductionism

Psychophysical reductionism claims that psychological discourse is reducible to some type of natural scientific theory such as a neuroscientific one. Paradigmatically, intertheoretic reduction reflects a certain type of ontological and epistemological situation. Domain A is included within Domain B, but for reasons concerning the way people are outfitted epistemically, they have come to know A-entities in a way different from the way they have come to know other B-entities. They have therefore come to describe and explain the behavior of A-entities using a theoretical framework, TA, which is different from the theoretical framework they have used to describe and explain the behavior of other B-entities, the framework TB. The result is that they do not initially recognize the inclusion of Domain A in Domain B. People later discover, however, that Domain A is really part of Domain B; A-entities really just are B-entities of a certain sort, and hence the behavior of A-entities can be exhaustively described and explained in B-theoretic terms. This situation is reflected in a certain relationship between TA and TB. The principles governing the behavior of A-entities, the principles expressed by the law statements of TA, are just special applications of the principles governing the behavior of B-entities in general – the principles expressed by the law statements of TB. The laws of TA are then said to be reducible to the laws of TB, and people are able to provide a reductive description and explanation of A-behavior in B-theoretic terms. A-statements can be derived from B-statements given certain assumptions about the conditions that distinguish A-entities from B-entities of other sorts – so-called boundary conditions. The descriptive and explanatory roles played by the law statements of TA, the reduced theory, are thus taken over by the law statements of the more inclusive reducing theory, TB.

Consider an example. Kepler’s laws are thought to have been reduced to Newton’s. Newton’s laws imply that massive bodies will behave in certain ways given the application of certain forces. If those laws are applied to planetary bodies in particular – if, in other words, people examine the implications of those laws within the boundaries of our planetary system – the laws predict that those bodies will behave in roughly the way Kepler’s laws describe. Kepler’s laws, the laws of the reduced theory, are therefore shown to be special applications of Newton’s laws, the laws of the reducing theory. To the extent that they are accurate, Kepler’s laws really just express the application of Newton’s laws to planetary bodies. One upshot of this circumstance is that people can appeal to Newton’s laws to explain why Kepler’s laws obtain: they obtain because Newtonian laws imply that a system operating within the parameters of our planetary system will behave in roughly the way Kepler’s laws describe.
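For the simplified case of a circular orbit of radius r and period T around a central mass M, the derivation takes one line: equating the Newtonian gravitational force on a planet of mass m with the centripetal force required for its circular motion yields Kepler's harmonic law, T² ∝ r³:

```latex
\underbrace{\frac{G M m}{r^{2}}}_{\text{Newtonian gravity}}
= \underbrace{\frac{m\,(2\pi r/T)^{2}}{r}}_{\text{centripetal force}}
\quad\Longrightarrow\quad
T^{2} = \frac{4\pi^{2}}{G M}\,r^{3}.
```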

Intertheoretic reduction is thus marked by the inclusion of one domain in another, and by the explanation of the laws governing the included domain by the laws governing the inclusive one. There have been many attempts to give a precise formulation of the idea of intertheoretic reduction. Those attempts trade on certain assumptions about the nature of theories and the nature of explanation. One of the earliest and most influential attempts was Ernest Nagel’s (1961). Nagel endorsed a syntactic model of theories and a covering-law model of explanation. Roughly, the syntactic model of theories claimed that theories were sets of law statements, and the covering-law model of explanation claimed that explanation was deduction from law statements (Hempel 1965). According to Nagel’s model of reduction, to say that TA was reducible to TB was to say that the law statements of TA were deducible from the law statements of TB in conjunction with statements describing various boundary conditions and bridge principles if necessary. Bridge principles are empirically-supported premises which connect the vocabularies of theories which do not share the same stock of predicates and terms. On the Nagel model of reduction, bridge principles are necessary for intertheoretic reduction if the reduced theory’s vocabulary has predicates and terms which the vocabulary of the reducing theory lacks. Suppose, for instance, that LA is a law statement of TA which is slated for deduction from LB, a law statement of TB:

LA For any x, if A1(x), then A2(x);
LB For any x, if B1(x), then B2(x).

Since the vocabulary of TB does not include the predicates A1 or A2, additional premises such as the following are required for the deduction:

ID1 A1 = B1;
ID2 A2 = B2.

Given ID1 and ID2, LA can be derived from LB by the substitution of equivalent expressions.
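The deduction itself is a short substitution of identicals:

```latex
\begin{aligned}
&\text{L}_{B}: && \forall x\,(B_{1}(x) \rightarrow B_{2}(x))\\
&\text{ID}_{1},\ \text{ID}_{2}: && A_{1} = B_{1},\quad A_{2} = B_{2}\\
&\text{L}_{A}: && \forall x\,(A_{1}(x) \rightarrow A_{2}(x))
  \quad\text{(substituting } A_{i} \text{ for } B_{i} \text{ in } \text{L}_{B}\text{)}
\end{aligned}
```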

The reduction of thermodynamics to statistical mechanics is often cited as an example of reduction via bridge principles. The term ‘heat’, which occurs in the law statements of thermodynamics, is not included in the vocabulary of statistical mechanics. As a result, the deduction of thermodynamic law statements from mechanical ones requires the use of additional premises connecting the theories’ respective vocabularies. An example might be the following:

Heat = mean molecular kinetic energy.

Identity statements of this sort are called theoretical identifications. The theoretical identification of X with Y is supposed to be marked by two features. First, the identity is supposed to be discovered empirically. By analogy, members of a certain linguistic community might use the name ‘Hesperus’ to refer to a star that appears in the West in early evening, and they might use the name ‘Phosphorus’ to refer to a star that appears in the East in early morning, and yet they might not initially know, but later discover, that those names refer to the same star. Second, however, unlike the Hesperus–Phosphorus case, in the case of theoretical identifications, at least one of the predicates or terms, ‘X’ or ‘Y’, is supposed to belong to a theory.

There are numerous episodes of theoretical identification in the history of science, cases in which we developed descriptive and explanatory frameworks with different vocabularies whose predicates and terms we later discovered to refer to or express the very same things. The terms ‘light’ and ‘electromagnetic radiation with wavelengths of 380–750 nm’, for instance, originally belonged to distinct forms of discourse: one to a prescientific way of describing things, the other to electromagnetic theory. Those terms were nevertheless discovered to refer to the very same phenomenon. In the Nagel model of reduction, theoretical identifications operate as bridge principles linking the vocabulary of the reduced theory with the vocabulary of the reducing theory. They therefore underwrite the possibility of intertheoretic reduction.

The Nagel model of reduction has been extensively criticized, and alternative models of reduction have been based on different assumptions about the nature of theories and explanation. But the idea that reduction involves the inclusion of one domain in another requires that the entities postulated by the reduced theory be identical to entities postulated by the reducing theory. When we claim to have reduced Kepler’s laws to Newton’s, for instance, we assume that planets are massive bodies, not merely objects the behaviors of which are correlated with the behaviors of massive bodies.

To illustrate the necessity of identity for reduction, imagine that Domains A and B comprise completely distinct entities whose behaviors are nevertheless correlated with each other. It turns out, for instance, that the principles governing the instantiation of A-types and those governing the instantiation of B-types are isomorphic in the following sense: for every A-law there is a corresponding B-law, and vice versa; and in addition, tokens of A-types are correlated one-one with tokens of B-types. Given this isomorphism, biconditionals such as the following end up being true:

BC1 Necessarily, for any x, A1(x) if and only if B1(x);
BC2 Necessarily, for any x, A2(x) if and only if B2(x).

Such biconditionals could underwrite the deduction of law statements such as LA from law statements such as LB. What they could not underwrite, however, is the claim that TA is reducible to TB. The reason is that A and B are completely distinct domains which merely happen to be correlated. This is not a case in which one domain is discovered to be part of another, more inclusive domain, and hence it is not a case in which the laws of one domain can be explained by appeal to the laws of another. Without identity statements such as ID1 and ID2, there is no inclusion of one domain in another, and without that sort of inclusion, there is no explanation of the reduced theory’s laws in terms of the reducing theory’s laws. (See Causey 1977: Chapter 4; Schaffner 1967; Hooker 1981: Part III.)
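The point can be made precise: biconditional bridge laws suffice for the deduction just as identities do. A minimal Lean sketch (the names follow the schemas BC1 and BC2 above):

```lean
-- Predicates of two completely distinct but correlated domains A and B.
variable {X : Type} (A1 A2 B1 B2 : X → Prop)

-- Mere lawlike biconditionals BC1 and BC2 underwrite the deduction of LA
-- from LB just as identities do, which is why deducibility alone does not
-- establish reduction.
theorem LA_of_biconditionals
    (LB  : ∀ x, B1 x → B2 x)      -- LB
    (BC1 : ∀ x, A1 x ↔ B1 x)      -- BC1
    (BC2 : ∀ x, A2 x ↔ B2 x)      -- BC2
    : ∀ x, A1 x → A2 x :=         -- LA
  fun x h => (BC2 x).mpr (LB x ((BC1 x).mp h))
```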

Sklar (1967) argued by appeal to an example, the Wiedemann-Franz law, that reduction requires bridge principles taking the form of identity statements. The Wiedemann-Franz law expresses a correlation between thermal conductivity and electrical conductivity in metals. It allows for the deduction of law statements about the latter from law statements about the former. This deducibility, however, has never been understood to warrant the claim that the theory of electrical conductivity is reducible to the theory of thermal conductivity, or vice versa. Rather, it points in the direction of a different reduction, the reduction of the macroscopic theory of matter to the microscopic theory of matter.

Suppose, then, that we apply the foregoing account of reduction to psychological discourse. Since that account claims that theoretical identifications are necessary for intertheoretic reduction, the upshot is that psychophysical reduction requires mental-physical type identities. The reduction of psychological discourse to some branch of natural science would require that mental entities be identified with entities postulated by the relevant branch of natural science. It could not involve two distinct yet coordinate domains. This is clear if we imagine a case involving psychophysical parallelism. Suppose two completely distinct ontological domains, one comprising bodies, the other nonphysical Cartesian egos, were governed by principles that happened to be isomorphic in the sense just described: the laws governing the behavior of bodies paralleled the laws governing the behavior of the Cartesian egos, and the states of the Cartesian egos were distinct from but nevertheless correlated one-one with certain bodily states. In that case, it would be possible to make deductions about the behavior of Cartesian egos on the basis of the behavior of bodies, but this deducibility would not warrant the claim that the behavior of Cartesian egos was reducible to the behavior of bodies. The behavior of bodies might provide a helpful model or heuristic for understanding or predicting the behavior of Cartesian egos, but it would not provide a reducing theory which explained why the laws governing Cartesian egos obtained. The same point would follow if some type of neutral monism were true – if, say, mental and physical phenomena were correlated, but were both reducible to some third conceptual framework which was neither mental nor physical but neutral. Mere correlations between mental and physical types, even lawlike ones, are not sufficient to underwrite psychophysical reduction. Psychophysical reduction requires the identity of mental and physical types.

Consider now the putative implications of this claim in conjunction with the MRT. Psychophysical reduction requires psychophysical type identities. If mental types are multiply realizable, then they are not identical to any of their physical realizing types. But if mental types are not identical to physical types (the tacit assumption being that the only physical candidates for identification with mental types are their realizing types), then psychological discourse is not reducible to physical theory.
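The logical skeleton of the argument just described is a simple chain of conditionals. A hypothetical propositional reconstruction in Lean (MRT, Identity, and Reducible are schematic propositions introduced here for illustration; the premise numbering follows the discussion below of the antireductionist argument):

```lean
-- Schematic reconstruction: if the MRT holds, the MRT excludes
-- mental-physical type identities, and reduction requires such
-- identities, then psychophysical reduction fails.
theorem antireductionist_argument
    (MRT Identity Reducible : Prop)
    (p1 : MRT)                   -- Premise 1: mental types are multiply realizable
    (p2 : MRT → ¬Identity)       -- Premise 2: the MRT excludes type identities
    (p3 : Reducible → Identity)  -- Premise 3: reduction requires type identities
    : ¬Reducible :=
  fun hR => p2 p1 (p3 hR)
```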

2. Arguments for the Multiple-Realizability Thesis

Section 1 discussed the connection between multiple realizability and antireductionism. Antireductionists argue that if mental types are multiply realizable, then psychophysical reductionism is false. But why suppose that mental types are multiply realizable? Why suppose the MRT is true? The MRT has been supported in at least two ways: by appeal to conceptual or intuitive considerations, and by appeal to empirical findings in biology, neuroscience, and artificial intelligence research. In this section, arguments of both types will be considered.

a. Conceptual Arguments for the MRT

Conceivability arguments for the MRT claim that conceivability or intuition is a reliable guide to possibility. If that is the case, and it is conceivable that mental types might be correlated one-many with physical types, then it is possible that mental types might be correlated one-many with physical types. And, say exponents of the argument, one-many psychophysical correlations are surely conceivable. Consider the broad range of perfectly intelligible scenarios science fiction writers are able to imagine – scenarios in which robots and extraterrestrials with physiologies very different from ours are able to experience pain and to have beliefs, desires, and other mental states without the benefit of c-fibers, cerebral hemispheres, or any of the other physical components that are correlated with mental states in humans. If these scenarios are conceivable and conceivability is a more or less reliable guide to possibility, then we can conclude that these scenarios really are possible. Conceivability arguments for the MRT, then, trade on the following premises:

CA1 If it is conceivable that mental types are multiply realizable, then mental types are multiply realizable;

CA2 It is conceivable that mental types are multiply realizable.

Therefore, mental types are multiply realizable.

Conceivability-Possibility Principles (CPs) have been a staple in philosophy of mind at least since Descartes. He used a CP to argue for the real distinction of mind and body in Meditation VI:

...because I know that everything I clearly and distinctly conceive can be made by God as I understand it, it is sufficient that I am able clearly and distinctly to conceive one thing apart from another to know with certainty that the one is different from the other – because they could be separated, at least by God… Consequently, from the fact that I know that I exist, and I notice at the same time that nothing else plainly belongs to my nature or essence except only that I am a thinking thing, I rightly conclude that my essence consists solely in being a thinking thing... [B]ecause I have on the one hand a clear and distinct idea of myself, insofar as I am merely a thinking thing and not extended, and on the other hand, a distinct idea of the body insofar as it is merely an extended thing and not thinking, it is certain that I am really distinct from my body, and can exist without it (AT VII, 78).

Descartes’ argument trades on three premises. First, clear and distinct conceivability is a reliable guide to possibility. In particular, if it is clearly and distinctly conceivable that x can exist apart from y, then it is possible for x to exist apart from y. Second, I can form a clear and distinct conception of myself apart from my body. Hence, I can exist without it. But third, if x can exist without y, then clearly x cannot be y. Hence, I cannot be my body. CPs have become controversial in part because of their association with arguments of this sort. Jackson’s (1982, 1986) knowledge argument and Searle’s (1980) Chinese Room argument as well as a host of other arguments concerning the possibility of inverted spectra, absent qualia, and the like trade on CPs.

Unrestricted CPs, ones that do not qualify the notion of conceivability or limit the scope of the modal operator, have clear counterexamples. Some of those counterexamples concern the scope of the operator. Da Vinci, for instance, conceived of humans flying with birdlike wings despite the physical impossibility of such flight. Similarly, prior to the twentieth century people might have conceived that it was possible for there to be a solid uranium sphere with a mass exceeding 1,000 kg – another physical impossibility. Other counterexamples concern the notion of conceivability. It is unclear, for instance, whether the conceptions people form of things while drunk or drugged or in various other circumstances can serve as reliable guides to possibility.

Because of examples of this sort, exponents of CPs do not endorse unrestricted versions of them, but versions limited to a particular type of conceivability, a particular scope for the modal operator, and a particular subject matter for the claim or scenario being conceived. Descartes, for instance, spoke of clear and distinct conceivability, and took the scope of the modal operator to cover metaphysically possible worlds – or as he puts it, the range of circumstances God could have brought about. A CP along these lines is immune to counterexamples such as the uranium sphere and human birdlike flight since these examples pertain to nomological or physical possibility. Roughly, p is nomologically possible exactly if p is consistent with the laws of nature, and p is physically possible exactly if p is consistent with the laws of physics (physical possibility and nomological possibility are the same if the laws of physics are the same as the laws of nature). Since we can know these laws only through scientific investigation, it seems likely that our conceptions of nomological and physical possibilities can only be as reliable as our best scientific knowledge allows them to be. The same can be said of technological possibility or other kinds of possibility that involve consistency with conditions that are knowable only a posteriori.

Metaphysical possibility, on the other hand, involves compossibility with essences – the features things need to exist in any metaphysically possible world. Knowledge of essences does not necessarily depend on empirical considerations. Whether or not it does marks the difference between empirical essentialists and conceptual essentialists. Roughly, empirical essentialists claim that our knowledge of essences is analogous to our knowledge of the laws of physics or of nature: we can learn about them only a posteriori. Conceptual essentialists disagree: we can come to know essences a priori.

Descartes is a prototypical conceptual essentialist. He thinks it is possible to discover something’s essence by means of a certain kind of conceptual analysis. Consider, for instance, his argument in Meditation II that his essence consists in thinking alone:

Can I not affirm that I have at least a minimum of all those things which I have just said pertain to the nature of body? I attend to them... [N]othing comes to mind... Being nourished or moving? Since now I do not have a body, these surely are nothing but figments. Sensing? Surely this too does not happen without a body... Thinking? Here I discover it: It is thought; this alone cannot be separated from me... I am therefore precisely only a thinking thing… (AT VII, 26-27).

The procedure Descartes follows for forming a clear and distinct conception of something’s essence is roughly as follows. First, he reckons that the object in question has certain properties. He then considers whether it can exist without these properties by “removing” them from the object one-by-one in his thought or imagination. If he can conceive of the object existing without a certain property, he can conclude that that property does not belong to the object’s nature or essence. He thus takes himself to arrive by turns at a clearer, more distinct conception of what the object essentially is. When he applies this procedure to himself, he initially reckons that he has various bodily attributes such as having a face, hands, and arms, and being capable of eating, walking, perceiving, and thinking. He then considers whether he could still exist without these features by “removing” them from himself conceptually. He concludes that he could exist without all of them except the property of thinking. He can form no conception of himself without it, he says, whereas he can form a clear and distinct conception of himself without any bodily attributes. He concludes, therefore, that he can form a clear and distinct conception of himself as a thinking thing alone, apart from his body or any other body.

Conceptual essentialism was in vogue for a long time in modern philosophy, but empirical essentialism experienced a revival in the late twentieth century due to the work of Kripke (1972) and Putnam (1975b). According to empirical essentialists, discerning something’s essence is not a task that can be accomplished from an armchair. It requires actual scientific investigation since the conceptions we initially form of things may not correspond to their essential properties. We might have learned to identify water, for instance, by a certain characteristic look or smell or taste, but if we brought a bottle of water to a distant planet with a strange atmosphere that affected our senses in unusual ways, the contents of the bottle might no longer look, smell, or taste to us the same way. This would not mean that the substance in the bottle was no longer water; it would still be the same substance; it would simply be affecting our senses differently on account of the planet’s strange atmosphere. It would still be water, in other words, despite the fact that it did not have the characteristics we originally associated with water. The essential features of water would remain the same even if its “accidental” features underwent a change. According to empirical essentialists, the essential features of something – the features that enable us to claim that, for instance, the contents of the bottle are essentially the same on Earth and on the distant planet – are features it is up to science to discern, features which might not correspond to our intuitive, prescientific conception of water.

Empirical essentialists tend to be inhospitable to conceivability-possibility arguments of the sort represented by CA1 and CA2. They can attack the argument in the following ways. First, against CA1, they can argue that the conceivability of multiple realizability is a guide to possibility which is only as reliable as our best scientific knowledge of mental phenomena and their realizers, and that in its current incomplete state, our scientific knowledge does not provide us with the resources sufficient to act as a reliable guide to possibility in this matter. Against CA2, on the other hand, they can argue that in our current state of scientific knowledge we cannot conceive of mental types being multiply realizable for either of two reasons: (a) we don’t know enough about mental types and their realizers to form any clear conception of whether or not they are multiply realizable, or (b) we do know enough about mental types and their realizers to form a clear conception that they are not multiply realizable.

b. Empirical Arguments for the MRT

Empirical arguments for the MRT largely avoid the aforementioned worries concerning CPs. They generalize from findings in particular scientific disciplines, which, they claim, provide inductive grounds supporting the possibility of mental types being realized by diverse physical types. Those disciplines include evolutionary biology, neuroscience, and cognitive science – artificial intelligence research in particular.

The argument Putnam (1967a) originally advanced against the identity theory is an example of an appeal to evolutionary biology. According to Putnam, what we know about evolution suggests that in all likelihood it is possible for a given mental type to be correlated with multiple diverse physical types. Block and Fodor (1972: 238) and Fodor (1968a; 1974) have advanced similar arguments.

We can formulate the appeal to biology in roughly the following way. The phenomenon of convergent evolution gives us good reason to suppose there are beings in the universe that are mentally similar to humans. One reason for this is that the possession of psychological capacities would seem to be (at least under certain circumstances) selectively advantageous. The ability to experience pain, for instance, would seem to increase my chances of survival if, say, I am in danger of being burned alive. The pain I experience would contribute to behavior aimed at removing the threat. Likewise, if I am in danger of being eaten by a large carnivore, my chances of survival will be enhanced if I am able to feel fear and to respond to the threat appropriately. Similarly, it is plausible to suppose that in many circumstances my chances of surviving and successfully reproducing will be improved by having more or less accurate beliefs about the environment – knowing or believing that fires and large carnivores are dangerous, for instance. There are, in short, many reasons for thinking that possessing mental states of the sort humans possess would be selectively advantageous for beings of other kinds. This gives us some reason to suppose that there might be beings in the universe that are very similar to us mentally. On the other hand, there are analogous reasons to suppose that those beings are probably very different from us physically. The last forty years of biological research have shown us that life can evolve in a broad range of very different environments. Environments once thought incapable of supporting life such as deep sea volcanic vents have been discovered to support rich and diverse ecosystems. It seems very likely, then, that living systems will be capable of evolving in a broad range of environments very different from those on Earth. 
In that case, however, it seems very unlikely that mentally-endowed creatures evolving in those environments will be physically just like humans. Our current state of biological knowledge suggests, then, that there are most likely beings in the universe who are like us mentally but who are unlike us physically. Evolutionary biology thus gives us some reason to suppose the MRT is true.

A second kind of argument appeals not to evolutionary biology but to neuroscience. One such argument, for instance, appeals to the phenomenon of brain plasticity (Block and Fodor 1972: 238; Fodor 1974: 104-106; Endicott 1993). Brain plasticity is the ability of various parts of the brain or nervous system to take over the realization of cognitive or motor capacities ordinarily realized by other parts. (See Kolb and Whishaw 2003: 621-641 for a description of brain plasticity and research related to it.) If the section of motor cortex that controls, say, thumb movement is damaged, cells in the adjacent sections of cortex are able to take over the functions previously performed by the damaged ones. What this seems to suggest is that different neural components are capable of realizing the same type of cognitive operation. And this gives us some reason to suspect it is possible for tokens of one mental type to be realized by tokens of more than one physical type.

Finally, a third type of empirical argument appeals to work in artificial intelligence (AI) (Block and Fodor, op. cit.; Fodor, op. cit.). Some AI researchers are in the business of constructing computer-based models of cognitive functioning. They construct computational systems that aim at mimicking various forms of human behavior such as linguistic understanding. Incremental success in this type of endeavor would lend further support to the idea that mental types could be realized by diverse physical types: not just by human brains but by silicon circuitry.

One criticism of empirical arguments for the MRT is that they are merely inductive in character (Zangwill 1992: 218-219): the denial of multiple realizability is still consistent with their premises. In addition, Shapiro (2004) argues against the appeal to biology on the grounds that a view which denies the MRT is just as probable given convergent evolution as a view which endorses it. Against the appeal to neuroscience, moreover, Bechtel and Mundale (1999) argue that the argument’s principle of brain state individuation is unrealistically narrow. Real neuroscientific practice individuates brain states more broadly. In addition, the neuroscientific data is compatible with there being a single determinable physical type which simply takes on multiple determinate forms (Hill 1991). Finally, the appeal to AI would seem to be little more than a promissory note. That work hasn’t produced anything approaching a being with psychological capacities like our own. The argument is thus little different from a conceptual argument for the MRT. Moreover, there are arguments purporting to show that silicon-based minds are impossible. Searle’s (1980) Chinese Room argument is an example.

3. Responses to the Antireductionist Argument

Reductionists have several ways of responding to the multiple-realizability argument. It will be helpful to divide them into two groups. Typology-based responses target Premises 1 and 2 of the antireductionist argument: the MRT and the claim that the MRT is incompatible with mental-physical type identities. Reduction-based responses, on the other hand, target Premise 3 of the antireductionist argument, the claim that mental-physical type identities are necessary for reduction. These responses will be discussed in order.

a. Typology-Based Responses

Typology-based responses to the multiple-realizability argument take the definition of ‘multiple realizability’ to include a condition relating types to specific typologies. A condition of this sort was left implicit in the definition of multiple realizability given in Section 1-c. An explicit statement of such a condition would take something like the following form:

[Def*] A type M is multiply realizable relative to typologies T and T* iff df. (i) M is a type postulated by T; (ii) P and Q are types postulated by T*; (iii) possiblyM, P-tokens are core realizers of M-tokens; (iv) possiblyM, Q-tokens are core realizers of M-tokens; and (v) P ≠ Q.

According to typology-based responses, the multiple-realizability argument trades on the unwarranted and highly dubious assumption that psychophysical relations must be reckoned only relative to our current mental and physical typologies. In all likelihood, they claim, future scientific investigation will result in the formulation of new mental and/or physical typologies which will no longer support the MRT or the claim that it implies the non-identity of mental and physical types.

Kim (1972), it seems, was the first to appreciate the range of typology-based strategies available to opponents of the multiple-realizability argument. They include the postulation of a new mental typology, the postulation of a new physical typology, and the postulation of both a new mental and a new physical typology. The first strategy includes the local reduction move. The second strategy includes the postulation of overarching physical commonalities, the postulation of broad physical types, and the disjunctive move. Finally, the third strategy includes the coordinated typology strategy, the idea that mental and physical typologies will develop in a coordinated way that yields one-one mental-physical type correlations. These options are represented in Figure 1.


Figure 1: Typology-based Responses

Relative to our current mental and physical typologies, the MRT implies that a mental type, M, is correlated with multiple physical types P1,…,Pn as in Column I. Psychophysical identification requires, however, that each mental type line up with a single physical type. Reductionists can respond to the argument by “breaking up” M into a number of “narrower” mental types M1,…,Mn, each of which corresponds to a single physical type as in Column II. This is the strategy represented by the local reduction move. Reductionists can also respond, however, by “gathering” the diverse physical types together under a single overarching physical type, P, which corresponds to M as in Column III. This is the strategy represented by the postulation of overarching physical commonalities, the postulation of broad physical types, and the disjunctive move. Finally, reductionists can respond by claiming that mental and physical typologies will both be altered in various ways that eventually yield one-one correlations between mental and physical types as in Column IV.

Typology-based responses can be understood to target either Premise 1 or Premise 2 of the antireductionist argument. Which they are understood to target depends on whether any of the types in question are defined relative to our current typologies. Consider an example. Someone who claims that the mental types postulated by our current typology will be retained in a new typology alongside more “narrow” mental types which are correlated one-one with physical types will claim that Premise 2 is false: the MRT is compatible with mental-physical type identities. By contrast, someone who claims that the mental types postulated by our current typology will not be retained in a new typology will claim instead that the MRT is false: all mental types are really of a narrow variety; each corresponds to a single physical type.

i. New Mental Typologies: The Local Reduction Move

The local reduction move (LRM) has also been called an appeal to ‘narrow mental types’, or to ‘species-specific’ or ‘structure-’ or ‘domain-specific reductions’. Its exponents include Kim (1972: 235; 1989; 1992), Lewis (1969, 1980), Enc (1983: 289-90), P.M. Churchland (1988: 40-41), P.S. Churchland (1986: 356-358), Causey (1977: 147-149), and Bickle (1998). According to the LRM, a mental predicate or term such as ‘pain’, which seems to express a single mental type, really expresses multiple diverse mental types. The case of ‘pain’ is analogous to the case of ‘jade’. The latter was originally taken to refer to a single mineralogical type. Scientific investigation revealed, however, that ‘jade’ really corresponds to two distinct mineralogical types: jadeite and nephrite. Exponents of the LRM claim that mental predicates and terms are the same way. ‘Pain’ doesn’t express a single overarching mental type found in humans, in Martians, and in robots; ‘pain’ is instead an imprecise term which corresponds to multiple diverse mental types including pain-in-humans, pain-in-Martians, and pain-in-robots. As a result, we shouldn’t be seeking to identify physical types with “broad” mental types such as pain; we should instead be seeking to identify them with “narrower” mental types such as pain-in-humans, pain-in-Martians, and pain-in-robots.

In support of the LRM, Enc (ibid.) has drawn an analogy with thermodynamics (cf. Churchland 1986 and Churchland 1988). Temperature, he argues, is multiply realized at the level of microphysical interactions. Temperature-in-gases is different from temperature-in-solids, which is different from temperature-in-plasmas and temperature-in-a-vacuum. The multiple realizability of temperature, however, does not imply that thermodynamics has not been reduced to statistical mechanics; it merely implies that the reduction proceeds piecemeal. Temperature-in-gases is identified with one type of mechanical property; temperature-in-plasmas, with a different mechanical property; and so on. Thermodynamics is thus reduced to statistical mechanics one lower-level domain at a time through the mediation of restricted domain-specific thermodynamic types: temperature-in-gases, temperature-in-solids, and the like. Something similar could be true of psychophysical reduction. Psychology could reduce to physical theory by way of various domain-specific mental types such as pain-in-humans and pain-in-Martians.

Several criticisms of the LRM have appeared in the literature. Zangwill (1992: 215), for instance, argues that the thermodynamic example is irrelevant to the philosophy of mind. Another criticism claims that narrower mental types would be too narrow for the explanatory purposes psychological discourse aims to satisfy (cf. Putnam 1975c: 295-298; Fodor 1974: 114; Pylyshyn 1984: Chapter 1; Endicott 1993: 311-312). Science seeks the broadest, most comprehensive generalizations it can get, the argument claims, but the LRM seems to violate this methodological canon since the narrow mental types it postulates would prevent us from formulating broad cross-species generalizations. Sober (1999) attacks the argument’s major premise: science doesn’t always work by seeking the broadest, most comprehensive generalizations. Moreover, even if narrow mental types didn’t allow for the formulation of the most comprehensive generalizations, we might still be better off with local reductions for a variety of reasons, including ontological parsimony and the value of grounding higher-level explanations in mental-physical type identities (Endicott 1993: 311). (Bickle 1998: 150ff. criticizes this objection to the LRM in other ways as well.)

A third criticism claims that the LRM would fail to explain what all the phenomena called ‘pain’ have in common (Block 1980b: 178-9). Against this, Kim (1992) has argued that diverse types such as pain-in-humans and pain-in-Martians would still have in common their satisfaction of a certain functional description or causal role, and this commonality would be sufficient to explain the commonalities among diverse instances of pain.

A final criticism of the LRM claims that there are no mental types narrow enough to line up with physical types in a way that would support reduction. Endicott (1993: 314-318) argues that if we postulate mental types narrow enough to avoid multiple realizability we risk postulating types that are so narrow it no longer makes sense to speak of a reduction of types as opposed to a mere identification of tokens. The burden for exponents of the LRM, then, is to postulate types with the right sort of grain: narrow enough to avoid the implications of the multiple-realizability argument, but not so narrow that the notion of reduction drops out of the picture. (Endicott (1993) criticizes the LRM in other ways as well.)

ii. New Physical Typologies I

Reductionists can also respond to the multiple-realizability argument by positing new physical typologies. Kim states the idea in the following terms:

…the mere fact that the physical bases of two nervous systems are different in material composition or physical organization with respect to a certain scheme of classification does not entail that they cannot be in the same physical state with respect to a different scheme (Kim 1972: 235).

At least three suggestions have been advanced in the literature to this effect. The first claims that we might discover something had in common by all of the apparently diverse realizers of a mental type. We could discover, for instance, that c-fiber firing in humans and q-fiber firing in Martians actually have something interesting in common – that they are in fact instances of a broader physical type which is correlated one-one with pain. According to this strategy, the diverse realizers of a mental type are analogous to electricity, magnetism, and light – types of phenomena which initially seemed diverse but which were later discovered to belong to a single overarching type.

Hill (1991: 105) suggests something like the postulation of overarching physical commonalities in the following terms:

[I]t is not enough to appeal to a case in which a single qualitative characteristic is associated with two or more distinct neurophysiological state-types. One must go on to provide an exhaustive characterization of the distinct levels of description and explanation that belong to neuroscience, and show that no such level harbors a kind under which all of the states in question may be subsumed (Hill 1991: 105).

Shapiro (2000, 2004) has a similar idea. Although aluminum and steel count as diverse types relative to one scheme of classification, he argues, they don’t count as diverse realizations of corkscrews because they have too much in common relative to the performance of the activities that qualify something as a corkscrew. (Gillett 2003 criticizes Shapiro’s argument.) Similarly, Bechtel and Mundale (1999) cite examples from cognitive neuroscience which suggest that there are lower-level properties which are nevertheless the same in a more general functional respect.

The discovery of overarching commonalities is not the only way of developing a new physical typology. Reductionists might decide to individuate realizing types in a way that comprises a broad swath of environmental factors. Antony and Levine (1997), for instance, argue that we should understand realization in terms of the total realizers of mental types instead of their core realizers (see Section 1-c). If realizers are individuated this broadly, however, mental types will no longer be multiply realizable.

Finally, reductionists could develop a new physical typology on the basis of disjunctive physical types. If reductionists are willing to countenance the existence of disjunctive properties, they could identify a mental type with the disjunction of its realizing types. This particular response to the multiple-realizability argument has generated an extensive literature, and deserves separate treatment.

iii. New Physical Typologies II: The Disjunctive Move

The possibility of identifying mental types with disjunctive physical types has repeatedly asserted itself in the literature on multiple realizability. Given an inventory of basic physical predicates P1,…,Pn the idea is to use Boolean operations to construct disjunctive predicates which express disjunctive types (e.g. P1vP3, P7vP15vP39). Putnam (1967) dismissed the disjunctive move out of hand, but it has since been taken very seriously. Kim (1978), Clapp (2001), and Antony (1998, 2003), for instance, have all defended it in one way or another.

Criticisms of the disjunctive move have been thoroughly discussed in the literature (Antony 1999, 2003; Antony and Levine 1997; Block 1980b, 1997; Block and Fodor 1972; Clapp 2001; Endicott 1991, 1993; Fodor 1974, 1997; Jaworski 2002; Kim 1972, 1978, 1984, 1992, 1998; Macdonald 1989; Melnyk 2003; Owens 1989; Pereboom 2002; Pereboom and Kornblith 1991; Putnam 1967a; Seager 1991; Teller 1983). The criticisms discussed in what follows fall into two broad categories: law-based criticisms and metaphysical criticisms. In discussing them, it will be helpful to introduce the following terms: if P1,…,Pn are the types that realize mental type M, call P1,…,Pn an R-disjunction, and call a generalization featuring an R-disjunction as its antecedent an R-disjunctive generalization.

1) Law-Based Criticisms

Law-based criticisms of the disjunctive move focus on the nature of scientific laws. They claim that predicates such as ‘believes’, ‘desires’, and ‘is in pain’ express genuine properties. If mental types are genuine properties, and mental types are identical to R-disjunctive types, then it follows by the indiscernibility of identicals that R-disjunctive types must be genuine properties as well. Fodor (1974) suggested, however, that genuine properties were expressed by the predicates of law statements – a plausible idea if genuine properties make a causal or explanatory difference to their bearers, and causal/explanatory regularities are expressed by law statements. Law-based criticisms of the disjunctive move argue that R-disjunctive generalizations are not genuine law statements, and because they are not genuine law statements, R-disjunctive predicates do not express genuine properties.

Methodological criticisms of the disjunctive move, such as Fodor’s (1997: 157-9), claim that the postulation of R-disjunctive types violates standard canons of scientific method. Standard inductive practice aims at formulating the strongest generalizations warranted by the limited available evidence, and closed law statements, as Fodor calls them, are stronger than open ones. Closed law statements are law statements that do not feature open-ended disjunctive predicates; an example would be a psychological generalization with the form ‘Necessarily, for any x, if Mx, then M*x’. Open law statements are law statements that do feature open-ended disjunctive predicates; an example would be an R-disjunctive generalization with the form ‘Necessarily, for any x, if P1x v P2x v… then M*x’. Given reasonable assumptions, the MRT implies that a given mental type will be correlated with an indefinitely large number of realizing types. Consequently, the MRT will most likely imply the existence of open generalizations of the latter sort as opposed to closed generalizations of the former. Because scientific practice aims at formulating the strongest generalizations, and closed generalizations are stronger than open ones, standard scientific method dictates a preference for closed generalizations over open generalizations such as those featuring R-disjunctions. There are good methodological reasons, then, for supposing that R-disjunctive generalizations are not genuine law statements and that their predicates do not express genuine properties. The problem with this argument is that its point is merely methodological. It does not rule out the possibility of there being R-disjunctive types or R-disjunctive laws (a point Fodor recognizes). It thus falls short of refuting the disjunctive move.

Other law-based criticisms correspond to two different features of law statements: their ability to ground explanations, and their projectibility – their ability to be confirmed by their positive instances. Explanation-based criticisms of the disjunctive move claim that R-disjunctive generalizations cannot express laws because they do not function explanatorily the way law statements do. One such criticism claims, for instance, that explanations must be relevant to our explanatory interests, and appeals to R-disjunctive generalizations are clearly irrelevant to the interests we have in explaining human behavior (Pereboom and Kornblith 1991; Putnam 1975c, 1981). If, for instance, we want to know why Caesar ordered his troops to cross the Rubicon, it doesn’t satisfy our interests to respond, “Because he was either in neural state N1 or in neural state N2 or…” One criticism of this argument is that the notion of relevance is highly context-dependent. Although there are good reasons to suppose appeals to R-disjunctive generalizations are irrelevant in “pedestrian” contexts such as the context involving Caesar’s actions, there are also good reasons to suppose that appeals to R-disjunctive generalizations might be relevant in scientific contexts in which reduction is at stake (Jaworski 2002).

Confirmation-based criticisms, on the other hand, claim that R-disjunctive generalizations cannot express laws because they are not confirmed in the way law statements are. In particular, they are not projectible; they are not confirmed by their positive instances. Exponents of confirmation-based criticisms include Owens (1989) and Seager (1991), but Kim’s (1992) version of this criticism is both the best developed and most widely discussed representative of this approach.

Kim’s argument trades on two premises. First, if some evidence e confirms p and p entails q, then e also confirms q. Second, no generalization can be confirmed without the observation of some of its positive instances. Given these premises, the argument purports to show that generalizations with disjunctive antecedents cannot express laws. If they did express laws, they would be confirmed by their positive instances the way all law statements are. But clearly they are not, the argument claims. To show this, assume for the sake of argument that generalizations with disjunctive antecedents are confirmed by their positive instances – call this the Disjunctive Confirmation Hypothesis. Consider now an example: every piece of jade, says Kim, is a piece of either jadeite or nephrite, and vice versa. Suppose, then, that a certain number of jadeite samples confirm the following:

(1) All jadeite is green.

Since each piece of jadeite is also a piece of jade (that is, a piece of jadeite or nephrite), each piece of green jadeite is also a positive instance of (2):

(2) All jade is green (i.e. all jadeite or nephrite is green).

So if (1) is confirmed by the samples of jadeite, then by the Disjunctive Confirmation Hypothesis, so is (2). But ‘∀x((Jx v Nx) → Gx)’ implies ‘∀x(Nx → Gx)’ in the predicate calculus, so if (2) is confirmed by the samples, then by Kim’s first premise, so is (3):

(3) All nephrite is green.

The problem, however, is that none of the samples are samples of nephrite. Because no generalization can be confirmed without the observation of some positive instances (Kim’s second premise), we must reject the assumption which sanctioned this confirmation procedure, namely the Disjunctive Confirmation Hypothesis. (A parallel example: suppose a sexually active adult is a sexually active man or woman, and that a certain number of sexually active men confirm ‘No sexually active man becomes pregnant’. Parity of reasoning yields the conclusion that those men confirm ‘No sexually active adult becomes pregnant’, and hence ‘No sexually active woman becomes pregnant’!) If the Disjunctive Confirmation Hypothesis is rejected, however, it follows that R-disjunctive generalizations fail to be confirmed in a lawlike manner and hence fail to express laws.
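The structure of Kim’s argument can be set out in a brief formal sketch. This is only a reconstruction of the steps just described, with ‘J’, ‘N’, and ‘G’ abbreviating ‘is jadeite’, ‘is nephrite’, and ‘is green’, and ‘DCH’ abbreviating the Disjunctive Confirmation Hypothesis:

```latex
\begin{align*}
(1)\quad & \forall x\,(Jx \to Gx)
  && \text{confirmed by the observed jadeite samples} \\
(2)\quad & \forall x\,((Jx \lor Nx) \to Gx)
  && \text{confirmed, by DCH, since each green jadeite sample} \\
  &  && \text{is a positive instance of (2)} \\
(3)\quad & \forall x\,(Nx \to Gx)
  && \text{confirmed, since (2) entails (3) and confirmation is} \\
  &  && \text{closed under entailment (Kim's first premise)}
\end{align*}
```

Because no nephrite was among the samples, the confirmation of (3) violates Kim’s second premise, and DCH is what must be given up.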

The principal shortcoming of this argument is that many disjunctive predicates are capable of occurring in law statements. Suppose, for instance, that ‘All emeralds are green’ expresses a law statement. Consider a term that is necessarily coextensive with ‘emeralds’ such as ‘emeralds in the northern hemisphere or elsewhere’. Since this term expresses the same class as ‘emeralds’ it seems that ‘All emeralds in the northern hemisphere or elsewhere are green’ will be confirmed by its positive instances if ‘All emeralds are green’ is. But if these are both law statements, then there will have to be some way of distinguishing legitimate disjunctive predicates such as ‘is a northern or a non-northern emerald’ from illegitimate disjunctive predicates such as ‘is jadeite or nephrite’, and it seems the only way of doing that is to consider the objects to which these predicates apply. Hence, says Kim, “There is nothing wrong with disjunctive predicates as such; the trouble arises when the kinds denoted by the disjoined predicates are heterogeneous… so that instances falling under them do not show the kind of ‘similarity’, or unity, that we expect of instances falling under a single kind” (Kim 1992: 321). A confirmation-based criticism seems to depend, therefore, on some type of metaphysical criticism.

2) Metaphysical Criticisms

Metaphysical criticisms of the disjunctive move claim the idea of a disjunctive property is somehow metaphysically suspect. There are at least two arguments of this sort.

Armstrong (1978: II, 20) argues that accepting disjunctive properties would violate the principle that the same property is present in its diverse instances. Objects a and b, for instance, might both have the disjunctive property PvQ despite the fact that a has it by virtue of having property P instead of Q, and b has it by virtue of having Q instead of P. Clapp (2001) criticizes this argument on the grounds that determinables and their corresponding determinates seem to provide counterexamples. For example, being red, being blue, being yellow, and so forth, are determinates of the determinable being colored. Since everything that is colored must be a determinate shade, anything that satisfies the predicate ‘is blue, or is red, or is yellow,…’ will also satisfy the predicate ‘is colored’. Consequently, if a is red and b is blue, they will have in common the property being colored.

A second metaphysical criticism argues that mental types cannot be identical to R-disjunctive types because R-disjunctions do not express natural kinds. One basic assumption of the multiple-realizability debate is that mental types are natural kinds. Consequently, if mental types are identical to R-disjunctive types, the latter must be natural kinds as well. But R-disjunctive types are not natural kinds, the argument claims. The reason is that natural kindhood is based on similarity, and instances of R-disjunctions are not similar to each other in the right sort of way (Fodor 1974: 109ff.; 1997: 156, Block 1978: 266, Macdonald 1989: 36-7, Armstrong 1978: Vol. II, 20, Kim 1992, Antony and Levine 1997: 87ff.).

Individual instances or members of a natural kind are similar in important ways that have a bearing on, for instance, the projectibility of law statements. The generalization ‘All Ks are F’ is projectible only if Ks remain similar across actual and counterfactual circumstances in ways that have a bearing on their F-ness. Only if Ks are similar to each other in these ways can the observation of any K provide evidence about the F-ness of any other K. Inductive projection about Ks requires, then, that Ks be similar to each other in stable ways. One version of this similarity-based argument understands the relevant similarity in terms of causality (Kim 1992). Kim labels this the “Principle of Causal Individuation of Kinds”: “Kinds in a science are individuated on the basis of causal powers; that is, objects and events fall under a kind, or share in a property, insofar as they have similar causal powers” (Kim 1992: 326). The argument, then, is that R-disjunctive types can qualify as natural kinds only if they are causally similar – only if, for instance, R-disjunctive tokens have similar effects. But, the argument claims, R-disjunctive tokens are not causally similar. If they were causally similar (if, for instance, c-fiber firing and q-fiber firing produced the same effects), they probably wouldn’t qualify as diverse realizers of pain. The causal diversity of R-disjunctive tokens seems to be an implication of the MRT. Consequently, R-disjunctive types are not natural kinds.

Criticisms of this argument have sometimes appealed to the considerations that support physical commonalities among R-disjuncts (see Section 3-a-ii). Block (1997), Antony and Levine (1997), Shapiro (2000), and others have argued, for instance, that diverse physical realizers must have something interesting in common in order to satisfy the functional descriptions associated with mental states. If being in pain amounts to being in some lower-order physical state with such-and-such typical effects, then c-fiber firing and q-fiber firing must each be able to produce those effects to qualify as instances of pain. They must therefore be causally similar to that extent at least. Importantly, critics of this argument have typically not sought to defend the disjunctive move per se, but rather to resist the implications the argument has for nonreductive physicalism (see Section 4 below).

iv. Coordinate Typologies

Another typology-based response to the antireductionist argument claims that mental and physical typologies are to some extent interdependent, and as a result they will eventually converge in a way that yields one-one correlations between mental and physical types. Something like this idea is suggested by Kim:

The less the physical basis of the nervous system of some organisms resembles ours, the less temptation there will be for ascribing them sensations or other phenomenal events (Kim 1972: 235).

Similarly, Enç (1983: 290) argues that our mental typology will eventually be altered to reflect our lower-level scientific investigations. Couch (2004) makes a similar point: if scientists find physical differences among the parts of a system, they are likely to seek higher-level functional differences as well. (Cf. Hill 1991: Chapter 3.)

One argument in favor of coordinate typologies is suggested by Kim (1992), Bickle (1998: Chapter 4), and Bechtel and Mundale (1999). The idea is roughly that there can be higher-level regularities only if they are grounded in lower-level ones. Consequently, if we discuss higher-level regularities such as those expressed by familiar psychological generalizations, we have good reason to think these are underwritten by regularities at lower levels. This dependence of higher-level regularities on lower-level regularities gives us some reason to suspect that mental and physical typologies will tend to converge. (Sungsu Kim (2002) criticizes Bechtel and Mundale’s argument. Couch (2004) defends it.)

b. Reduction-Based Responses

Reduction-based responses to the multiple-realizability argument attack the claim that reduction requires bridge principles taking the form of identity statements. Robert Richardson (1979), for instance, argues that a Nagelian account of intertheoretic reduction can be underwritten by one-way conditionals. Consider again the theories TA and TB discussed in Section 1-e. Imagine that TA is slated for reduction to TB, and that LA is a law statement of TA which is supposed to be derived from LB, a law statement of TB:

LA For any x, if A1(x), then A2(x);
LB For any x, if B1(x), then B2(x).

Since the vocabulary of TB does not include the predicates A1 or A2, additional premises linking the vocabularies of the two theories are required. Earlier, in Section 1-e, we said that the derivation of LA from LB required bridge principles taking the form of identity statements:

ID1 A1 = B1;
ID2 A2 = B2.

It seems, however, that LA might be derived from LB on the basis of bridge principles along the following lines instead:

C1 Necessarily, for any x, if B1(x), then A1(x);
C2 Necessarily, for any x, if B2(x), then A2(x).

If one-way conditionals of this sort are sufficient for reductive derivations, then the non-identity of mental and physical types is not incompatible with reductionism after all. Reductive derivations might proceed via bridge principles such as C1 and C2 even if identity statements along the lines of ID1 and ID2 are false.
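The contrast between the two sorts of bridge principle can be put in a short derivation sketch (using the labels above): the identity statements license substituting mental predicates for physical ones in LB, whereas the one-way conditionals license inferences that begin from the physical predicate B1:

```latex
% Route 1: identity bridge principles
\begin{align*}
& \forall x\,(B_1x \to B_2x) && \text{LB} \\
\therefore\ & \forall x\,(A_1x \to A_2x)
  && \text{LA, substituting } A_1 \text{ for } B_1 \text{ and } A_2 \text{ for } B_2 \text{ (ID1, ID2)}
\end{align*}

% Route 2: one-way conditional bridge principles
\begin{align*}
& B_1x \to B_2x && \text{LB} \\
& B_2x \to A_2x && \text{C2} \\
& B_1x \to A_1x && \text{C1} \\
\therefore\ & B_1x \to (A_1x \land A_2x)
  && \text{whatever satisfies } B_1 \text{ falls under both the} \\
&  && \text{antecedent and the consequent of LA}
\end{align*}
```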

The problem with this understanding of reduction, one indicated by Patricia Kitcher (1980) in her criticism of Richardson, is that a derivation via one-way conditionals does not result in ontological simplification (cf. Bickle 1998: 119-120). It doesn’t show that what we originally took to be two kinds of entities are really only one. Ontological simplification of this sort is taken to be a central feature of reduction – the upshot of showing that A-entities are really just B-entities.

Reduction-based responses to the multiple-realizability argument have not been as popular as typology-based responses on account of widespread commitment to the idea that reduction involves ontological simplification (Sklar 1967; Schaffner 1967; Causey 1972; 1977: Chapter 4; Hooker 1981: Part III; Churchland 1986). Yet Bickle (2003) has recently suggested another type of reduction-based response. It claims not that bridge principles along the lines of C1 and C2 are sufficient for reduction, but that ontological issues concerning the identity or non-identity of properties are completely orthogonal to the issue of reduction. If that is the case, then issues concerning psychophysical reduction could be addressed independently of issues concerning the identity or non-identity of mental and physical types.

4. Multiple Realizability and Nonreductive Physicalism

Multiple realizability has recently played an important role in the attempt to articulate an acceptable form of nonreductive physicalism (NRP). NRP can be characterized by a commitment to three claims, roughly:

Physicalism: Everything is physical – all objects, properties, and events are of the sort that can be exhaustively described and/or explained by the natural sciences.

Mental Realism: Some mental types are genuine properties.

Antireductionism: Mental and physical types are not identical.

Jaegwon Kim has articulated a well-known difficulty for a particular type of NRP: realization physicalism. Realization physicalism claims that properties postulated by nonphysical frameworks are higher-order properties that are realized by lower-order properties or their instances in the sense described in Section 1-b. Having a mental property amounts to having some lower-order property that satisfies a certain associated description or condition. Having pain, for instance, might be defined as having some lower-order property that is typically caused by pinpricks, abrasions, burns, and the like, and that typically causes wincing, groaning, and escape-directed movements. Here ‘…is typically caused by pinpricks, abrasions, burns… and typically causes wincing, groaning, escape-directed movements’ expresses the condition associated with being in pain. Any properties whose instances satisfy this causal profile count as instances of pain, and the lower-order properties (or property instances) that satisfy that condition are said to realize pain.

Kim argues that realization physicalism is an unstable theory: either its commitment to Mental Realism and Antireductionism imply a rejection of Physicalism, or else its commitment to Physicalism and Mental Realism imply a rejection of Antireductionism. His argument trades on two assumptions.

First, Kim assumes that genuine properties are ones that make a causal difference to their bearers. We can distinguish between two senses of ‘property’. Properties in a broad or latitudinarian sense are roughly the ontological correlates of predicates. Properties in a narrow, causal sense, on the other hand, are properties in the broad sense that make a causal difference to their bearers. Hence, weighing 1 kg and weighing 2.2 pounds are different properties in the broad sense since they correspond to different predicates, but they are not different properties in the causal sense since they are necessarily coextensive and influence the causal relations into which their bearers enter in exactly the same ways. One might even do well to eliminate talk of broad properties altogether, says Kim (1998: Chapter 4), and speak instead simply of properties in the causal sense which are expressible by different predicates. Hence, there is a single (causal) property expressed by the predicates ‘weighs 1 kg’ and ‘weighs 2.2 pounds’.

Second, Kim assumes that if physicalism is true, the only genuine (i.e. causal) properties that exist are physical properties. Denying this, he says, would be tantamount to denying physicalism; it would be to accept the existence of “emergent causal powers: causal powers that magically emerge at a higher level” (1992: 326).

Given these assumptions, Kim poses the following difficulty for realization physicalists. According to Antireductionism, mental types are not identical to physical types. In that case, however, it is unclear how mental types could manage to be genuine properties. If Physicalism is true, then all causal properties are physical. This seems to imply a principle along the following lines (it is stated here without the qualifications Kim adds):

If a higher-order property M is realized on a given occasion by a lower-order property P, then the causal powers of this instance of M are identical to the causal powers of P.

Kim (1992: 326) calls this the ‘Causal Inheritance Principle’. This principle would appear to present realization physicalists with an uncomfortable choice. They could (a) deny the causal status of mental types; that is, they could reject Mental Realism and deny that mental types are genuine properties. Alternatively, they could (b) reject Physicalism; that is, they could endorse the causal status of mental types, but deny their causal status derives from the causal status of their physical realizers. Or finally, they could (c) endorse Mental Realism and Physicalism, and reject Antireductionism. Given the assumption that mental types are genuine properties, a commitment to Physicalism would imply that mental types are identical to physical types. This is the option Kim favors. Kim is nevertheless sympathetic to the idea that the mental types postulated by our current mental typology are multiply realizable relative to the physical types postulated by our current physical typologies. He argues, moreover, that R-disjunctive types cannot be natural kinds for reasons discussed in Section 3-a-iii-2. If those types are not natural kinds, however, then we have good reason to suppose that the mental types postulated by our current mental typology are not natural kinds either. Each of those mental types is necessarily coextensive with an R-disjunction, and no mental type can have causal powers beyond those of the individual disjuncts. If those disjuncts are causally dissimilar, then instances of the corresponding mental type must be causally dissimilar as well. Suppose, however, that causal similarity is necessary for natural kind status. In that case, it follows that the mental types postulated by our current mental typology cannot be natural kinds. Consequently, Kim favors the local reduction move discussed in Section 3-a-i. We need a new mental typology that postulates new narrow mental types that are correlated one-one with physical types.

5. References and Further Reading

  • Antony, Louise M. 1999. "Multiple Realizability, Projectibility and the Reality of Mental Properties." Philosophical Topics 26: 1-24.
  • Antony, Louise M. 2003. "Who’s Afraid of Disjunctive Properties?" Philosophical Issues 13: 1-21.
  • Antony, Louise and Levine, Joseph. 1997. "Reduction with Autonomy." In Tomberlin 1997.
  • Armstrong, D.M. 1968. A Materialist Theory of Mind. London: Routledge & Kegan Paul.
  • Armstrong, D.M. 1970. "The Nature of Mind." In The Mind/Brain Identity Theory. C.V. Borst, ed. London: Macmillan, 67-79. Reprinted in Block 1980a, 191-199.
  • Armstrong, D.M. 1978. A Theory of Universals: Universals and Scientific Realism, Vol. II. Cambridge University Press.
  • Bealer, George. 1994. "Mental Properties." Journal of Philosophy 91: 185-208.
  • Bechtel, William and Jennifer Mundale. 1999. "Multiple Realizability Revisited: Linking Cognitive and Neural States." Philosophy of Science, 66: 175-207.
  • Bickle, John. 1998. Psychoneural Reduction: The New Wave. Cambridge, MA: MIT Press.
  • Bickle, John. 2003. Philosophy and Neuroscience: A Ruthlessly Reductive Account. Dordrecht: Kluwer Academic Publishers.
  • Block, Ned. 1978. "Troubles With Functionalism." In Perception and Cognition: Issues in the Foundations of Psychology. Minnesota Studies in the Philosophy of Science, vol. 9. C.W. Savage, ed. Minneapolis: University of Minnesota Press, 261-325.
  • Block, Ned, ed. 1980a. Readings in Philosophy of Psychology, 2 vols. Cambridge, MA: Harvard University Press.
  • Block, Ned. 1980b. "What is Functionalism?" In Block 1980a: 171-84.
  • Block, Ned. 1997. "Anti-Reductionism Slaps Back." In Tomberlin 1997: 107-32.
  • Block, Ned and Jerry Fodor. 1972. "What Psychological States Are Not." Philosophical Review 81: 159-81. Reprinted with revisions by the authors in Block 1980a, 237-50.
  • Causey, Robert L. 1972. "Attribute-Identities in Microreduction." Journal of Philosophy 82: 8-28.
  • Causey, Robert L. 1977. Unity of Science. Dordrecht, Holland: D. Reidel Publishing Company.
  • Churchland, Paul M. 1989. A Neurocomputational Perspective. Cambridge, MA: MIT Press.
  • Churchland, Patricia S. 1986. Neurophilosophy. Cambridge, MA: MIT Press.
  • Clapp, Lenny. 2001. "Disjunctive Properties: Multiple Realizations." Journal of Philosophy 98: 111-36.
  • Couch, Mark. 2004. "Discussion: A Defense of Bechtel and Mundale." Philosophy of Science, 71: 198-204.
  • Enç, Berent. 1983. "In Defense of the Identity Theory." Journal of Philosophy 80: 279-298.
  • Endicott, Ronald P. 1991. "Macdonald on Type Reduction via Disjunction." Southern Journal of Philosophy 29: 209-14.
  • Endicott, Ronald P. 1993. "Species-Specific Properties and More Narrow Reductive Strategies." Erkenntnis 38: 303-21.
  • Endicott, Ronald P. 2007. "Reinforcing the Three 'R's: Reduction, Reception, and Replacement." In Schouten and Looren de Jong 2007, 146-171.
  • Feigl, Herbert. 1958. "The Mental and the Physical." Minnesota Studies in the Philosophy of Science, Vol. 2. Minneapolis: University of Minnesota Press, 370-497.
  • Fodor, Jerry. 1968a. Psychological Explanation: An Introduction to the Philosophy of Psychology. New York: Random House.
  • Fodor, Jerry. 1968b. "The Appeal to Tacit Knowledge in Psychological Explanation." Journal of Philosophy 65: 627-40.
  • Fodor, Jerry. 1974. "Special Sciences, or The Disunity of Science as a Working Hypothesis." Synthese 28: 97-115.
  • Fodor, Jerry. 1997. "Special Sciences: Still Autonomous After All These Years." In Tomberlin 1997, 149-64.
  • Gillett, Carl. 2003. "The Metaphysics of Realization, Multiple Realization and the Special Sciences." Journal of Philosophy 100: 591-603.
  • Hempel, Carl. 1965. Aspects of Scientific Explanation. New York: The Free Press.
  • Hill, Christopher S. 1991. Sensations: A Defense of Type Materialism. Cambridge University Press.
  • Hooker, Clifford. 1981. "Towards a General Theory of Reduction. Part III: Cross-Categorial Reductions." Dialogue 20: 496-529.
  • Jackson, Frank. 1982. "Epiphenomenal Qualia." Philosophical Quarterly 32: 127-136.
  • Jackson, Frank. 1986. "What Mary Didn’t Know." Journal of Philosophy 83: 291-95.
  • Jaworski, William. 2002. "Multiple-Realizability, Explanation, and the Disjunctive Move." Philosophical Studies 108: 298-308.
  • Kim, Jaegwon. 1972. "Phenomenal Properties, Psychophysical Laws, and the Identity Theory." Monist 56: 177-92. Selections reprinted in Block 1980a, 234-36.
  • Kim, Jaegwon. 1978. "Supervenience and Nomological Incommensurables." American Philosophical Quarterly 15: 149-56.
  • Kim, Jaegwon. 1984. "Concepts of Supervenience." Philosophy and Phenomenological Research 45: 153-76. Reprinted in Kim 1993, 53-78.
  • Kim, Jaegwon. 1989. "The Myth of Nonreductive Physicalism." Proceedings and Addresses of the American Philosophical Association 63. Reprinted in Kim 1993, 265-84.
  • Kim, Jaegwon. 1992. "Multiple Realization and the Metaphysics of Reduction." Philosophy and Phenomenological Research 52: 1-26. Reprinted in Kim 1993, 309-35.
  • Kim, Jaegwon. 1993. Supervenience and Mind. Cambridge University Press.
  • Kim, Jaegwon. 1998. Mind in a Physical World. Cambridge, MA: MIT Press/Bradford Books.
  • Kim, Sungsu. 2002. "Testing Multiple Realizability: A Discussion of Bechtel and Mundale." Philosophy of Science 69: 606-610.
  • Kolb, Bryan and Whishaw, Ian Q. 2003. Fundamentals of Human Neuropsychology. 5th Edition. New York, NY: Worth Publishers.
  • Kripke, Saul A. 1972. Naming and Necessity. Cambridge, MA: Harvard University Press.
  • Lewis, David. 1966. "An Argument for the Identity Theory." Journal of Philosophy 63: 17-25.
  • Lewis, David. 1969. "Review of Art, Mind, and Religion." Journal of Philosophy 66: 23-35. Reprinted in Block 1980a, 232-233.
  • Lewis, David. 1970. "How to Define Theoretical Terms." Journal of Philosophy 67: 427-446.
  • Lewis, David. 1972. "Psychophysical and Theoretical Identifications." Australasian Journal of Philosophy 50: 249-258. Reprinted in Block 1980a, 207-215.
  • Lewis, David. 1980. "Mad Pain and Martian Pain." In Block 1980a, 216-222.
  • Macdonald, Cynthia. 1989. Mind-Body Identity Theories. London: Routledge.
  • Melnyk, Andrew. 2003. A Physicalist Manifesto. Cambridge University Press.
  • Nagel, Ernest. 1961. The Structure of Science. Indianapolis, IN: Hackett.
  • Owens, David. 1989. "Disjunctive Laws?" Analysis 49: 197-202.
  • Pereboom, Derk. 2002. "Robust Nonreductive Materialism." Journal of Philosophy 99: 499-531.
  • Pereboom, Derk and Hilary Kornblith. 1991. "The Metaphysics of Irreducibility." Philosophical Studies 63: 125-45.
  • Polger, Thomas. 2004. Natural Minds. Cambridge, MA: MIT Press.
  • Putnam, Hilary. 1960. "Minds and Machines." In Dimensions of Mind. Sidney Hook, ed. (New York: New York University Press), 148-179. Reprinted in Putnam 1975a, 362-385.
  • Putnam, Hilary. 1963. "Brains and Behavior." In Analytical Philosophy Second Series. R.J. Butler, ed. (Oxford: Basil Blackwell & Mott), 1-19. Reprinted in Putnam 1975a, 325-341.
  • Putnam, Hilary. 1964. "Robots: Machines or Artificially Created Life?" Journal of Philosophy 61: 668-691. Reprinted in Putnam 1975a, 386-407.
  • Putnam, Hilary. 1967a. "Psychological Predicates." In Art, Mind, and Religion. Capitan, W.H. and Merrill, D.D., eds. (University of Pittsburgh Press), 37-48. Reprinted as ‘The Nature of Mental States’ in Putnam 1975a, 429-40.
  • Putnam, Hilary. 1967b. "The Mental Life of Some Machines." In Intentionality, Minds, and Perception. Hector-Neri Castañeda, ed. (Detroit: Wayne State University Press), 177-200. Reprinted in Putnam 1975a, 408-428.
  • Putnam, Hilary. 1970. "On Properties." In Essays in Honor of Carl G. Hempel. N. Rescher, et al., ed. Dordrecht, Holland: D. Reidel.
  • Putnam, Hilary. 1975a. Mind, Language, and Reality: Philosophical Papers, vol. 2. Cambridge University Press.
  • Putnam, Hilary. 1975b. "The Meaning of ‘Meaning’." In Language, Mind and Knowledge, Minnesota Studies in the Philosophy of Science, Vol. 7. Keith Gunderson, ed. (University of Minnesota Press). Reprinted in Putnam 1975a, 215-271.
  • Putnam, Hilary. 1975c. "Philosophy and Our Mental Life." In Putnam 1975a, 291-303.
  • Putnam, Hilary. 1981. "Reductionism and the Nature of Psychology." In Mind Design: Philosophy, Psychology, Artificial Intelligence, ed. John Haugeland, 205-19. Montgomery, VT: Bradford Books.
  • Pylyshyn, Zenon. 1984. Computation and Cognition. Cambridge, MA: MIT Press.
  • Richardson, Robert. 1979. "Functionalism and Reductionism." Philosophy of Science 46: 533-558.
  • Schaffner, K.F. 1967. "Approaches to Reduction." Philosophy of Science 34: 137-47.
  • Schouten, Maurice and Looren de Jong, Huib. 2007. The Matter of the Mind. Blackwell Publishing.
  • Seager, William. 1991. "Disjunctive Laws and Supervenience." Analysis 51: 93-8.
  • Searle, John. 1980. "Minds, Brains, and Programs." The Behavioral and Brain Sciences 3: 417-57.
  • Sellars, Wilfrid. 1956. "Empiricism and the Philosophy of Mind." In The Foundations of Science and the Concepts of Psychology and Psychoanalysis, Minnesota Studies in the Philosophy of Science, Vol. 1. H. Feigl and M. Scriven, eds. University of Minnesota Press. Reprinted in Sellars 1963, 127-196.
  • Sellars, Wilfrid. 1962. "Philosophy and the Scientific Image of Man." In Frontiers of Science and Philosophy. Robert Colodny, ed. Pittsburgh, PA: University of Pittsburgh Press. Reprinted in Sellars 1963, 1-40.
  • Sellars, Wilfrid. 1963. Science, Perception, and Reality. London: Routledge & Kegan Paul.
  • Shapiro, Lawrence. 2000. "Multiple Realizations." Journal of Philosophy 97: 635-654.
  • Shapiro, Lawrence. 2004. The Mind Incarnate. Cambridge, MA: MIT Press.
  • Shoemaker, Sydney. 1981. "Some Varieties of Functionalism." Philosophical Topics 12: 93-119.
  • Sklar, Lawrence. 1967. "Types of Intertheoretic Reduction." British Journal for the Philosophy of Science 18: 109-24.
  • Smart, J.J.C. 1962. "Sensations and Brain Processes." In The Philosophy of Mind, Chappell, V.C. ed. Englewood Cliffs: Prentice-Hall, Inc.
  • Sober, Elliott. 1999. "The Multiple Realizability Argument Against Reductionism." Philosophy of Science 66: 542-564.
  • Teller, Paul. 1983. "Comments on Kim’s Paper." Southern Journal of Philosophy 22 (Suppl.): 57-61.
  • Tomberlin, James, ed. 1997. Philosophical Perspectives 11, Mind, Causation, and World. Malden, MA: Blackwell.
  • Witmer, Gene. 2003. "Multiple Realizability and Psychological Laws: Evaluating Kim’s Challenge." In Physicalism and Mental Causation, S. Walter and H. Heckmann, eds. Charlottesville, VA: Imprint Academic, 59-84.
  • Zangwill, Nick. 1992. "Variable Reduction Not Proven." Philosophical Quarterly 42: 214-218.

Author Information

William Jaworski
Fordham University
U. S. A.


The term “resurrection” refers to the raising of someone from the dead. The resurrection of the dead brings to the forefront topics from the study of personal identity and philosophical anthropology. For example, some people think that we have souls and that souls play an important role in resurrection. Others claim that we do not have souls and that this is a reason to deny that there is any life after death. In addition, the study of resurrection has benefited from interaction with topics in contemporary metaphysics. There are many puzzles about how things survive change. Philosophers have taken insights and distinctions from those cases and used them in their discussion of resurrection.

The article begins with a brief overview of the doctrine of the resurrection. It touches on the essential parts of the Christian doctrine and points to some of the surrounding controversies. The most common objection to the Christian doctrine of the resurrection of the dead is that it cannot be made compatible with materialism, the claim that humans are material beings and have no non-physical parts. This article examines the supposed inconsistency and looks at four different attempts by philosophers to advance a coherent account of the doctrine of the resurrection. The conclusion is a brief look at immaterialist accounts of resurrection and a summary and criticism of John W. Cooper’s argument that the Christian belief in an intermediate state entails mind-body dualism.

Table of Contents

  1. The Christian Doctrine of Resurrection
  2. Objections to the Christian Doctrine of Resurrection
  3. Materialist Accounts of Resurrection
    1. The Simulacra Model
    2. The Constitution View
    3. The Falling Elevator Model
    4. Anti-Criterialism
  4. Immaterialist Accounts of Resurrection
    1. Augustine and Aquinas
    2. The Intermediate State
  5. References and Further Reading

1. The Christian Doctrine of Resurrection

Many different religions have accounts of life after death but the Christian doctrine of the resurrection of the dead has received the most attention by philosophers. This is in large part due to the centrality of the doctrine in the Western religious tradition. Because of the emphasis on Christian accounts of resurrection in the philosophical literature, this entry will focus on the debates about the Christian doctrine of resurrection. However, much of what is said can be applied to other religions and traditions. To see a contemporary non-Christian account of resurrection, see John Leslie’s Immortality Defended.

The raising of the dead plays a central role in Christian belief. To begin with, Christians believe that Jesus died and rose from the dead. Each of the four gospels contains testimony about the resurrection of Jesus (see Matthew 28:1-20, Mark 16:1-8, Luke 24:1-53, and John 20:1-21:25). Jesus’ resurrection is central to Christian belief because on it rests claims about Jesus’ divinity and various doctrines about salvation.

There is a fair amount of scholarly work done on the question of whether or not Jesus did rise from the dead. This debate falls outside the scope of the article but the interested reader will find The Son Rises: The Historical Evidence for the Resurrection of Jesus by William Craig and Did Jesus Rise From the Dead? The Resurrection Debate by Gary Habermas, Anthony Flew, and Terry Miethe to be good starting points.

Christians believe that Jesus’ resurrection serves as a model for the resurrection of some people (perhaps everyone) in the future. It is this belief that is known as the Christian doctrine of the resurrection of the dead (henceforth CDR). To be clear, this doctrine is one of bodily resurrection. It is not a claim about figurative or metaphorical resurrection. We will now look at various aspects of CDR.

First, one might wonder about the scope of CDR. Who, exactly, will be raised from the dead? By far, the majority of Christians (lay people, clergy, and scholars) have believed that both Christians and non-Christians will be resurrected. In addition, it has been believed that this resurrection is not the same for everyone. For example, some believe that Christians will be raised in a new spiritual body that will experience an eternity of blessing, while non-Christians will be raised so that they might undergo judgment and punishment.

Two doctrines that are compatible with a denial that both Christians and non-Christians will be resurrected are annihilationism and conditional immortality. Annihilationism is the view that non-Christians are not punished for eternity but rather are annihilated. Some versions of annihilationism hold that God will punish unrepentant sinners for a limited time in hell and then annihilate them (thus, endorsing some sort of afterlife) while others hold that sinners are not resurrected at all. Conditional immortality is the view that the soul is not inherently immortal and that it is only God’s gift that grants the soul eternal life. Both of these views are held by a small minority of evangelical Protestants and various Adventist churches.

Proponents of the resurrection of the godly and the ungodly point to scripture in support of their belief in a general resurrection. For example, in Acts 24:15 it is reported that Paul believed that “there shall certainly be a resurrection of both the righteous and the wicked” (all verses quoted are from the New American Standard Bible translation, NASB). In addition to the verse in Acts the reader can also look to Daniel 12:2 and Revelation 20:13-15 for support of the belief in a general resurrection. In any case, it must be acknowledged that historically and scripturally the bulk of attention is placed on the resurrection of the believer. Thus, while CDR’s scope may include the non-believer, it is primarily a doctrine about what happens to the believer in the afterlife.

Second, one might wonder about the timing of the resurrection in CDR. When will the dead be raised? This is a contentious issue among Christian theologians, and the timing of the resurrection (or resurrections) is largely determined by whether one is an amillennialist, postmillennialist, or premillennialist. Amillennialists believe that Jesus will return to earth and at that time the resurrection of the dead will take place along with the establishment of the New Heaven and the New Earth. Postmillennialists believe that there will be a “millennial age,” which need not be a thousand years long, characterized by Christianity becoming the dominant religion and the world turning towards God. At the end of this age, Christ will return and the resurrection of the dead will take place. Finally, premillennialists hold that the resurrection of believers will occur when Christ returns to earth. Following Christ’s return, there will be a millennial age in which Christ reigns on earth. At the end of this time, among other things, the resurrection of unbelievers will occur and the New Heaven and New Earth will be established. (This last characterization is a simplification. There are some versions of premillennialism in which more than two large-scale resurrections take place.)

Third, one might wonder about the nature of the resurrection in CDR. What will people be like once they are raised from the dead? After all, if someone was merely restored to his or her physical state right before death, then in many cases death would occur immediately afterwards. First, CDR teaches that the resurrection will be a physical or bodily resurrection. For example, Paul writes in Romans 8:11 that “He who raised Christ Jesus from the dead will also give life to your mortal bodies through His Spirit who dwells in you.” Additionally, Paul writes in 1 Corinthians 15:42-44:

So also is the resurrection of the dead. It is sown a perishable body, it is raised an imperishable body; it is sown in dishonor, it is raised in glory; it is sown in weakness, it is raised in power; it is sown a natural body, it is raised a spiritual body.

Also, Christians cite the example of Jesus after his resurrection. Jesus is depicted not as some ghostly figure but as an embodied person, able to eat, drink, and physically interact with others.

Second, the depictions of the resurrected Christ in the gospels and the scripture passages above indicate that the body that will be raised will be significantly different than the one that died. In Christ’s case people who knew him before he died had difficulty recognizing him after he died. However, they did recognize him after some prompting. (See John 20:11-18 for a case of this.) Additionally, while Paul contrasts the two bodies in the passage from Corinthians above, the New Testament also indicates that believers will be able to recognize one another. (See Matthew 8:11, 27:52-53 and Luke 9:30-33.)

We can now sum up what the core of CDR is. CDR is a doctrine that claims believers will be resurrected in bodily form when Christ returns to the earth. Christians disagree about the timing of Christ’s return, the particulars about the resurrected body, and the scope of the resurrection. However, the creeds have been consistent in affirming the essential parts of CDR. The Apostles Creed, written around the third or fourth century C.E., affirms “the resurrection of the body.” The Nicene Creed, C.E. 325, reads “I look for the resurrection of the dead, and the life of the world to come.” Additionally, various confessions and doctrinal statements have overwhelmingly endorsed CDR. For example, in the Westminster Confession of Faith, composed in 1643-46, there is a section on the resurrection of the dead which includes the claim that “all the dead shall be raised up, with the selfsame bodies, and none other (although with different qualities)....”

2. Objections to the Christian Doctrine of Resurrection

In this section of the article two objections to the Christian doctrine of resurrection (CDR) will be examined. First, the relationship between CDR and miracles will be discussed. Second, we will consider the claim that CDR is incompatible with materialism. The majority of this section will focus on the second objection because it is a) the most common objection to CDR and b) specific to CDR, unlike the first objection involving miracles, which applies to many other doctrines as well. Ultimately, it will be suggested that the difficulties CDR faces here are not peculiar to materialism: whether one is a dualist or a materialist supporter of CDR, one must account for how a material object can be numerically identical with a previous material object that was destroyed.

One objection to CDR is that it requires a miracle to take place. The objector presumably believes either that God would not perform such miraculous events or cannot perform such events. This sort of objection was more popular in the early to mid-20th century when many leading theologians and philosophers believed that the notion of a miracle was incoherent and that Christianity would be better off without a commitment to such overt supernatural events. Note that this sort of objection applies not only to CDR but to large parts of traditional Christian doctrine.

Defenders of CDR will admit that it would take a miracle for God to bring about the resurrection of the dead. However, the defenders of CDR do not see this as a problem. Rather, they embrace the coherence of the concept of a miracle, and argue that we are within our epistemic rights to believe in miracles. Recently, the position that Christianity has within it the resources to justify belief in miracles has become more popular among philosophers. If this position is true, then the defender of CDR is within her epistemic rights in believing that a supernatural act of God is required for a resurrection to occur. However, this does not mean that CDR is true. The opponent of CDR can still argue that CDR is false because it is committed to the existence of miracles. Of course, the opponent of CDR in raising this objection is also calling into question the greater theological scheme of which CDR is but a part. Therefore, any criticism of CDR’s commitment to miracles quickly escalates into a discussion about the truth of Christianity.

The most common objection to CDR is that it is incompatible with materialism. Since materialism is the predominant view of philosophers, this objection is taken to be a serious blow to both CDR and Christianity. In order to understand this objection, one must understand the distinction between qualitative and numerical identity.

Suppose one day that you hear the following comments: “Joe is wearing the same watch that he wore yesterday,” and “Joe is wearing the same watch that Amy is wearing.” Both of these comments make use of the phrase, “same watch,” but mean very different things. The first comment says that Joe is wearing a watch that is numerically identical to the watch he wore the day before. If Joe bought a warranty for the watch he was wearing yesterday, that warranty would apply to the watch he is wearing today. The first speaker is not talking of two different watches; he is talking of only one watch. The second speaker is not talking of one watch but of two. The speaker is claiming that the watch Joe is wearing is qualitatively identical to the watch that Amy is wearing. The two watches are such that they are of the same brand, have similar features, are of the same color, etc. If Joe were to purchase a warranty for the watch he is wearing, it would not apply to the watch that Amy is wearing. This case of watches generalizes to other objects. If object X is numerically identical to object Y, then there are not, in fact, two objects, but just one. For example, Superman is numerically identical to Clark Kent; there is just one person who happens to lead an interesting double life. If object X is qualitatively identical to object Y, then there are two objects that happen to be exactly alike in their various properties and qualities. For example, two electrons might be thought of as being qualitatively identical even though they are not numerically identical.

Note that very few pairs of things are qualitatively identical in a strict and philosophical sense. For example, we might speak of two desks as being “the same desk.” However, it is likely they have enough differences that they are not qualitatively identical. Rather, they are just very similar. They are qualitatively alike, and for almost any purpose one of the desks will do just as well as the other. Additionally, almost all numerically distinct objects are qualitatively distinct as well. For, given any two numerically distinct objects that do not occupy the very same space, we could say that one has the property of being in such and such a location and the other lacks that property.
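The distinction can be stated schematically. The following is a rough second-order rendering added here for illustration; the restriction of the property quantifier to purely qualitative properties (excluding locations) is an assumption of the sketch, not something the article itself formalizes:

```latex
% Numerical identity: there is just one object.
x = y
% Qualitative identity: x and y share all purely qualitative properties.
\forall F\,(Fx \leftrightarrow Fy)
% The two come apart in one direction only: numerical identity entails
% qualitative identity, but two electrons may satisfy the second schema
% while x = y fails.
```

As the desk example shows, once location properties are allowed into the range of F, any two spatially distinct objects come out qualitatively discernible, which is why the quantifier must be restricted for the notion to do any work.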

If CDR is true, then there will be many people in the far future that will be resurrected. We can ask of each of these people, is he or she the same person who died? In asking this question we are not asking if they are qualitatively the same person. As we saw above, CDR claims that those that are resurrected will have very different bodies than they had before death. Furthermore, this change is unproblematic. People can undergo a vast amount of qualitative change in their present life and still be the same person. For example, a person can be involved in a terrible accident that leaves him or her both physically and mentally very different. However, we would still consider that person to be the same person, numerically speaking, as the person who was in the accident, despite the change he or she endured. So, when we ask whether or not the resurrected persons are the same persons who died, we are asking if they are numerically identical to someone who lived in the past.

This question is problematic for the proponent of CDR. Suppose the answer is no; then it seems as if CDR is an empty hope for those who believe in it. For, the Christian does not merely believe that someone like her will be resurrected, but believes that she will be the one who is resurrected in the future. Thus, CDR is committed to the claim that there must be some way for resurrection to occur that allows for numerical identity between a person before death and after resurrection.

The dualist seems to have an easier time meeting this commitment. Under many dualist views, a person is identical to a soul or some sort of non-physical entity. During a person’s life, one soul is “attached” or associated with one particular body. When death occurs, the dualist thinks that the soul and the body become “detached.” Later, when the resurrection of the dead occurs, the soul becomes attached to a new body. This is unproblematic because a person is not identical to the body but to the soul. The newly resurrected person is identical to someone who existed before because the soul is identical to a soul that existed before.

It seems it is more difficult for a materialist to give an account of resurrection that accounts for the numerical identity of persons before and after death. To see this, we will first look at a case involving the destruction and recreation of an everyday object and then apply that case to the materialist believer of CDR. The following case is taken from Peter van Inwagen (p. 45). Consider an everyday material object, such as a book or a manuscript. Suppose that at some point in the past this manuscript was burned. Now, what would you think if someone told you that he or she was currently in possession of the very same manuscript that was burned in the past? Van Inwagen would find this incredible. He does not doubt that someone could possess an exact duplicate of the manuscript. He denies that anyone could possess a manuscript that was numerically identical to the one that was burned.

Suppose the owner of the manuscript tried to convince van Inwagen that it was possible for it to be the same one by describing a scenario in which God rebuilds the manuscript using the same atoms or other bits of matter that used to compose the manuscript. Van Inwagen claims that the manuscript God recreated is merely a duplicate. A duplicate is an object that is merely qualitatively identical to another object. Van Inwagen is not alone in thinking this. John Perry expresses this intuition in his work A Dialogue on Personal Identity and Immortality. In it, a character of his argues that Kleenex boxes cannot be rebuilt after being completely destroyed. Underlying these intuitions is the view that mere rebuilding of an object (even using the same parts) is not enough to ensure that the object after rebuilding is numerically identical to the object before rebuilding.

Applying this intuition to the materialist we can see why CDR seems to be in conflict with materialism. For, materialism holds that people are material objects like manuscripts and Kleenex boxes. Thus, if a person’s body is destroyed then a person is destroyed and God can no better rebuild a person’s body than he can a manuscript or any other material object.

In response to this argument, the defender of CDR may reject the intuition behind van Inwagen’s argument and claim that God can rebuild material objects as long as he is using the same parts that composed the object when it was destroyed. Under this picture, the reassembly view of resurrection, God would resurrect people by assembling together all the bits of matter that used to be a part of their bodies and bringing them together again to form healthy bodies. The reader may wonder what is meant by “parts” or “bits of matter” in this discussion. Specification of these terms will vary depending on the proponent of the reassembly view, but typically the parts under consideration are the basic micro-physical parts that we are made of. For example, it would be a poor reassembly view of resurrection that held that God resurrected people by gathering all the organs that composed people at a previous time. After all, our organs will decay and decompose in a similar way that our bodies will. The protons, neutrons, electrons, quarks, superstrings, or whatever subatomic particle you choose will not decay in the same way, and presumably would survive into the future so that God might eventually gather them and reassemble them.

There are objections to the view of resurrection as reassembly that go beyond the intuition that reassembly of a body is not enough to ensure that a reassembled person is numerically identical to someone in the past. First, it is not clear that all the parts that compose people now will exist later when the time for resurrection comes. It seems possible, if not plausible, that God would not be able to resurrect some people if the reassembly view were true. The defender of CDR would not be comfortable with such an outcome. Second, parts of people can become parts of other people. For example, when a cannibal bites into her latest victim, she digests and incorporates the parts of one person into her own person. God would not be able to rebuild everyone given the existence of cannibals and other mechanisms that allow parts of one person to become parts of another person after death.

For the reasons above, philosophers have tended to reject reassembly views. (For an account of the medieval debates about reassembly views and resurrection see Caroline Walker Bynum’s The Resurrection of the Body. Some of the defenses of reassembly views by medieval apologists are entertaining if not persuasive.) We are left with our original problem: how can a material object be rebuilt? If materialism is true, then how is resurrection possible? The remaining sections of this article explain several different ways in which philosophers have attempted to answer this question.

It should be noted that the argument against the materialist defender of CDR can be transformed slightly to apply to any defender of CDR. In the description of CDR the article left open the question of whether or not the resurrected body is numerically identical to the body pre-death. Many Christians think that it is true that a numerically identical body is resurrected. Trenton Merricks makes this case forcefully in his article “The Resurrection of the Body and the Life Everlasting.” There he argues that a) “the overwhelming majority of theologians and philosophers in the history of the church have endorsed the claim of numerical identity” (p. 268) and b) that scripture teaches this. In defense of his second point he points to 1 Corinthians 15 and the fact that Christ bore the scars of crucifixion. If Merricks is right, and numerical identity of the body is part of CDR, then a believer in CDR must defend the view that it is possible for God to resurrect a material object even if one is a dualist. If Merricks is not right, then the dualist has an easier time coming up with an account of resurrection than the materialist.

3. Materialist Accounts of Resurrection

a. The Simulacra Model

Peter van Inwagen has presented a model of resurrection that is compatible with materialism and the Christian doctrine of resurrection (CDR). The key problem for the defender of CDR is that once we die our bodies begin to disintegrate and eventually are destroyed by natural processes. Once this happens, it seems that even God cannot bring back that body because it is a logically impossible thing to do, given the intuition discussed above. Van Inwagen proposes solving this problem by giving an account of resurrection where our bodies do not in fact undergo decay. Under his account, “at the moment of each man’s death, God removes his corpse and replaces it with a simulacrum, which is what is burned or rots” (van Inwagen, p. 49). Later, at the time of the general resurrection, God will take the corpse that he has preserved and restore it to life.

One objection that van Inwagen addresses in his article is that there is no reason for God to replace genuine corpses with simulacra. If God does preserve our corpses, why does he not preserve them here on earth or remove them from the earth without a replacement? Van Inwagen’s brief answer is that if God did not provide a simulacrum, then there would be widespread irrefutable evidence of the supernatural. Suppose someone put a torch to a corpse. If God were preserving that corpse, then no amount of effort would allow the natural process of cremation to take place. Van Inwagen goes on to say that there are good reasons for God to have a policy of not providing regular evidence of the supernatural (though in the article he is not specific about what those reasons are).

Another objection to the simulacrum view is that it makes God out to be a great deceiver. We tend to think of the corpses that we bury or cremate as genuine corpses. Further, we have every reason to suspect that this is the case. If we are wrong, it is only due to God’s constant effort to deceive us. (See Hudson, p. 181, for a discussion of this point.)

Finally, it can be objected that the simulacrum view is incredible. Even though it is coherent, it requires us to adopt radically different beliefs from those we currently hold. Van Inwagen acknowledges this point and in a postscript to his original article writes:

I am inclined now to think of the description that I gave in ‘The Possibility of Resurrection’ of how an omnipotent being could accomplish the Resurrection of the Dead as a ‘just-so story’: Although it serves to establish a possibility, it probably isn’t true (p. 51).

He goes on to remark that while the theory itself might not be literally true, it is true in another way in that it shows us some important features about how God will accomplish the resurrection of the dead.

b. The Constitution View

In the other sections of the article, we have assumed that a materialist is someone who holds the view that not only is a person a material object but that a person is identical to a material object, namely her body. Some materialists deny this. Instead, they hold that a person is constituted by her body and that this relation is not one of identity.

By looking at a statue and the matter it is composed of we can better understand the constitution view. Consider a hunk of marble; let us call that hunk “Hunk”. Suppose Hunk is carved into a wonderful statue which we call “Statue.” Arguably, Statue and Hunk are not identical for Hunk has properties that Statue lacks. Hunk, for example, can survive being carved into a different statue while Statue cannot. Statue cannot exist without an artworld, while Hunk can, etc. Thus, by Leibniz’s Law, Statue and Hunk are not identical. However, we can say that Statue is constituted by Hunk. (Lynne Rudder Baker argues for this view in Persons and Bodies.)
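The Statue/Hunk argument is an instance of modus tollens on Leibniz’s Law (the indiscernibility of identicals). A schematic rendering, with the property letter chosen here purely for illustration:

```latex
% Leibniz's Law: identicals share all properties.
x = y \;\rightarrow\; \forall F\,(Fx \leftrightarrow Fy)
% Let C be the property of being able to survive being
% carved into a different statue. The argument runs:
%   1. C(\text{Hunk}) \wedge \neg C(\text{Statue})                       (premise)
%   2. \neg\forall F\,(F\,\text{Hunk} \leftrightarrow F\,\text{Statue})  (from 1)
%   3. \text{Hunk} \neq \text{Statue}                                    (2, Leibniz's Law, modus tollens)
```

The constitution theorist accepts the conclusion but adds that non-identity is compatible with Hunk constituting Statue, a relation weaker than identity.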

Given the constitution view of persons, we can construct an account of resurrection that purports to solve the problems of the reassembly view described earlier. In her paper “Need a Christian be a Mind/Body Dualist?”, Baker claims that at the general resurrection God will take some, though not all, of the atoms that used to constitute a person, let’s call him Smith, and recreate Smith’s body. The difference between this and the reassembly view is that what God recreates is not Smith but merely a body that constitutes Smith. Thus, while we are inclined to agree with van Inwagen that we do not have a numerically identical body here, Baker suggests that we should think we have the same person here. For, unlike in the case of the manuscript, God can “simply will (it seems to [Baker]) there to be a body that has the complexity to ‘subserve’ Smith’s characteristic states, and that is suitably related to Smith’s biological body, to constitute Smith” (Baker, 1995, p. 499).

One might raise several objections to this view. First, it seems that the constitutionalist has to concede that the body raised in glory is not the same one that is sown in weakness. One constitutionalist, Kevin Corcoran, argues that this consequence can be avoided by combining the view expressed above with the falling elevator account discussed in the following section.

Second, one might object that this view is merely a replay of the reassembly view. After all, what makes this new person Smith and not some replica? According to Baker, it is that “what makes Smith the person she is are her characteristic intentional states, including first-person reference to her body” (1995, p. 499). Unlike inanimate objects, such as manuscripts, persons can survive by having a material object constitute a mental life that has the suitable characteristics. The thing constituting a person does not need to have a particular origin, as in the case of van Inwagen’s manuscript.

One can follow up this reply by asking: What would happen if God were to reassemble several bodies, all of which are exactly like the body God created for Smith? It seems that Baker is committed to their all being identical to Smith, which is absurd. Baker responds to this objection by claiming that we can trust in God’s goodness not to bring this situation about.

Finally, some would object that this view commits us to a controversial metaphysics, namely the constitutionalist ontology. Exploring this objection in detail would go well beyond the scope of the present article. The reader should simply keep in mind that this model of resurrection requires one to adopt an ontology that many philosophers find disagreeable. (See Hudson for one metaphysician who has argued against constitutionalism.)

c. The Falling Elevator Model

One serious problem with the simulacra view is its commitment to mass deception by God. Recall that on this view none of the corpses we see here on Earth are genuine corpses; they are bodies that have never been alive and were not even around until God placed them, like movie props, on the earth. Dean Zimmerman, in his paper “The Compatibility of Materialism and Survival: The ‘Falling Elevator’ Model,” has offered the materialist (he is not one himself) an account of resurrection that avoids the problems of both the reassembly views and the simulacra view. The name “the falling elevator model” (or the “jumping animals account”) derives from the propensity of cartoon characters to avoid death in a falling elevator by jumping out at the last minute. In the same way, on the falling elevator model, bodies “jump” at the last second before death to avoid being destroyed.

According to the falling elevator model, at the point just before death, God enables a person to undergo fission. (An object undergoes fission when it splits, like an amoeba, into two objects, both of which bear a causal relationship to the original object.) One body resulting from this fission goes on to die and becomes a genuine corpse. The second body is transported by God into the far future, where it goes on to be resurrected. Both of these bodies have an immanent-causal connection to the body just before death, and it is this connection that supports both the claim that the resurrected person is identical with the person who died and the claim that the corpse is a genuine corpse and not a simulacrum.

The main objection to this view is that it is committed to denying the “only x and y principle.” This principle has many variants, but it basically states that the only things that matter when considering whether x is numerically identical to y are the intrinsic properties of x and y and the relationships between them. The falling elevator model violates this principle because it allows for cases of fission in which, at one time, there are two persons who are both alive and have an immanent-causal connection to a previous person. To see this, consider a case where this occurs and there are two people, “Joe” and “Fred,” who both have an immanent-causal connection to a previous person, “Mark.” Since the causal connection between Joe and Mark and the causal connection between Fred and Mark are both of the sort invoked by the proponent of the falling elevator model, the proponent seems forced to acknowledge that both Joe and Fred are numerically identical to Mark. But that cannot be: Joe and Fred are not numerically identical to one another, and the identity relation is transitive. Thus, the proponent of the falling elevator model will have to insist that some criteria outside Joe, Fred, and Mark be used to evaluate personal identity. For example, the proponent will likely say that an object x is numerically identical to a previous object y only if x is the closest continuer of y at that time. Thus, we have a violation of the only x and y principle.

Hudson adopts the falling elevator model but avoids the consequence of rejecting the “only x and y principle” by endorsing a perdurantist view of persons. According to the perdurantist, people are not wholly located at a particular time. Rather, they are spread out over time and are composed of temporal parts. In the case above, the perdurantist would not say that Joe and Fred are numerically identical to Mark. Instead, he would claim that the temporal parts of Joe and Fred are related to the temporal part of Mark in such a way that the object composed of Joe and Mark is a person and the object composed of Fred and Mark is a different person. Granted, these two persons overlap for the entirety of the temporal part Mark, but that is not an incoherent outcome.

Perdurantism is a controversial metaphysics. A full discussion of it falls outside the scope of this article. The reader should bear in mind that if one adopts Hudson’s view, one also has to adopt metaphysical theses that are criticized by a wide variety of philosophers.

d. Anti-Criterialism

In order to understand the motivations for anti-criterialism, it will help to look at a puzzle known as the Ship of Theseus. The Ship of Theseus is a story about a ship captain, named Theseus, who slowly replaces each of the parts of his ship with a new part. This change is gradual, and many are inclined to believe that at the end of the process the repaired ship (call it ship A) is numerically identical to the one he began with (see the distinction between numerical and qualitative identity in section 2). Suppose that someone were to reassemble the parts that were replaced and form a new ship (call it ship B). Would ship B also be numerically identical to the original ship? Again, many think so. But since identity is a transitive relation, it cannot be that both ship A and ship B are identical to the original ship. This poses a puzzle, as we have the intuitions both that ships can survive a replacement of their parts and that they can be disassembled and reassembled.

Faced with puzzles such as the Ship of Theseus, and the possibility of fission (a case where one object divides into two, such as an amoeba splitting into two amoebas), philosophers have tended to adopt criterialism. Criterialism is the claim that there are criteria for identity over time. One recent philosopher to deny this is Trenton Merricks. In this section of the article we will look at Merricks’ position and see how he applies it to the objections to the Christian doctrine of resurrection (CDR).

A criterion for identity over time is a criterion for a particular type of object that gives informative necessary and sufficient conditions for numerical identity over time. For example, if you possessed a criterion for identity over time for ships, then you would be able to say what it is about a ship at the present time that makes it identical to a ship that existed previously. Some philosophers think that such criteria are useful because having them would allow us to solve puzzles that involve questions regarding an object’s identity over time. For example, a criterion for ships would help us solve the Ship of Theseus paradox by allowing us to determine whether ship A or ship B is numerically identical to the original ship.

Let us now look at some models given for CDR. Van Inwagen, for example, believes that the criterion of identity over time for persons is that a person at a given time must be part of the same life as a person at a previous time. Hudson argues for what he calls a psychological criterion of personal identity. Given these criteria, each philosopher attempts to construct a model of resurrection that does not violate his criterion for personal identity. (It should be noted that Baker, a constitutionalist, does not think we can give a criterion of personal identity. This seems to be because the criterion is mysterious, and not because there is no criterion. While her model of resurrection appears under a different section in this article, the reader is encouraged to think about how an adoption of anti-criterialism might be used to defend a constitutionalist account of resurrection.)

The main objection to CDR was that there was no coherent account of resurrection in which the persons or bodies resurrected were numerically identical to persons or bodies before death. Note that there was very little argument behind this objection. Rather, the burden of proof was on the proponent of CDR to provide a “just-so” story that showed how it was possible for us to be resurrected. Underlying this assumption was the belief that there is some criterion of personal identity and the intuition that no story about resurrection can accommodate this criterion.

One might be able to shift the burden of proof away from the proponent of CDR by denying that there is any criterion of personal identity. Merricks does just this. He denies that there are any criteria of identity over time for any object. Further, he claims that he does not have an account of resurrection and that lacking such an account is no problem for the believer of CDR. It is now up to the opponent of CDR to say why CDR is impossible. Since there are no criteria of personal identity, this task will prove difficult if not impossible. Of course, the anti-criterialist might wish, along with the rest of us, that we knew how God will resurrect us. But this lack of knowledge merely shows that we are ignorant of how resurrection occurs, not that resurrection is impossible.

The main objection to this view of resurrection centers on the denial of criterialism. As in the case of constitutionalism and perdurantism, an account of the objections to this metaphysical thesis falls outside the scope of this article. However, the reader is encouraged to look at Dean Zimmerman’s paper “Criteria of Identity and the ‘Identity Mystics’” for one response to anti-criterialism.

4. Immaterialist Accounts of Resurrection

a. Augustine and Aquinas

Of course, not all Christians are materialists, and in this section we will look briefly at two types of immaterialist accounts of resurrection. Note that by an “immaterialist account,” we mean an account that entails that materialism is false. Aquinas, for example, is an immaterialist in this sense even though he did not think that we are identical to our souls or that we are essentially immaterial objects. Most of the contemporary literature on resurrection focuses on materialist accounts because a) many philosophers find the concept of an immaterial soul mysterious at best and b) the most common objection to the Christian doctrine of resurrection (CDR) involves its incompatibility with materialism. The reader should not take the current state of the literature to be a guide to the philosophical merits of materialist or immaterialist accounts, or to the proportion of Christians who hold each position.

One of the most popular forms of dualism held by Christians has been a dualism inspired by Plato and Descartes in which 1) the soul and body are separate substances, 2) the soul is immaterial, and 3) the soul is identical to or strongly connected to the mind. One of the early Christian adopters of this view was Augustine. He modified arguments from Plato’s Phaedo to show that the soul must be immortal. Additionally, he argued that the soul must be immortal because it desires perfect happiness. The desire for perfect happiness includes a desire for immortality because no happiness would be perfect if one feared losing it at death. This desire is a natural desire, and thus, Augustine claimed, the soul must naturally be immortal. Bonaventure later takes up this argument when he argues for the immortality of the soul. (See the Copleston reference for more details about Augustine, Bonaventure and Aquinas).

One contemporary philosopher who defends a dualism of mind and body in the Augustinian tradition is Richard Swinburne. Swinburne compares the soul to a light and the body to a light bulb. On his view, if our bodies are destroyed, the soul naturally ceases to function, just as a light naturally goes out when its bulb is destroyed. However, he thinks it is within God’s power to “fix the light bulb” and restore the functioning of the soul by providing a new body or by some other means. For example, God could, by a miraculous divine act, cause souls to function while disembodied. In any case, Swinburne emphasizes that the soul is not by nature immortal (here he departs from Augustine). Swinburne’s view is compatible with the doctrine of an intermediate state (see 4.b below), but he denies Merricks’ claim that we will have numerically the same body when we are resurrected. Swinburne himself thinks that there is no intermediate state.

Many contemporary Christian dualists are similar to Swinburne. They agree that a) the soul is not by nature immortal, b) the doctrine of the intermediate state is compatible with dualism, and c) we will receive new bodies at the time of the general resurrection and our souls will be “hooked up” to these bodies by a divine act. Disagreements among Cartesian dualist Christians tend to revolve around the origin of the soul and the way in which the soul interacts with the body. For example, William Hasker in his article “Emergentism” argues that the soul is generated by the body while Swinburne believes souls are created by God.

Some Christian immaterialists are not Platonic/Cartesian dualists but rather are dualists in the spirit of Thomas Aquinas. Aquinas held the hylomorphic view that persons are a composite substance of matter and form. The substantial form, that which makes someone a substance, is the rational soul. Among those who held to a hylomorphic view, there was a debate about whether or not the soul could survive death, and, if it could, whether or not this ensures a personal resurrection.

Unlike some hylomorphists (perhaps Aristotle), Aquinas argues that the human mind or soul can exist apart from the body. The mind is not wholly dependent on the body, because the soul’s manner of knowing depends upon its state: instead of ceasing to exist when it becomes disembodied, the soul merely comes to know the world in a different way. Additionally, Aquinas argues that we can look forward to a personal resurrection. While the various human souls are nearly identical, we can individuate them in virtue of the bodies they had on Earth and will have in the general resurrection.

b. The Intermediate State

A Christian belief that is related to the doctrine of resurrection is the belief in an intermediate state. Many Christians believe that between the time of death and the time of resurrection there is an intermediate state at which people will continue to exist. This section of the article will look at accounts of this intermediate state and examine an argument for dualism based on the intermediate state.

It should be pointed out that Protestants and Catholics differ significantly on the nature of the intermediate state. Traditional Catholic thought holds that some people go to purgatory when they die, as opposed to ceasing to exist or immediately going to exist in the presence of God. Purgatory is a place where souls go to be cleansed of sin before entrance to heaven. Believers are encouraged to pray for those souls that are in purgatory so that the souls might escape purgatory sooner. Catholics find support for the doctrine of purgatory in 2 Maccabees 12:42-45 and in church tradition. Protestants reject the doctrine of purgatory because they deny that 2 Maccabees is an authoritative source and because they claim the doctrine of purgatory contradicts scripture. Additionally, some Catholics have held to a belief in Limbus Patrum, a place where Old Testament saints went to await the death and resurrection of Christ, and Limbus Infantum, a place where unbaptized infants go after death.

In addition to the above controversies, Christians debate the fate of believers after death. Many think that believers retain consciousness and go into the presence of God. Proponents of the intermediate state point to passages in the New Testament in support of the view. For example, 2 Corinthians 5:6-8 reads:

Therefore, being of good courage, and knowing that while we are at home in the body we are absent from the Lord…we are of good courage, I say, and prefer rather to be absent from the body and to be at home with the Lord.

Additionally, Jesus says to the thief in Luke 23:43, “Truly I say to you, today you shall be with Me in Paradise.” Some other verses that theologians cite are Hebrews 12:23 and Philippians 1:23.

Most Christians have thought that the doctrine of an intermediate state is taught by scripture. Occasionally, however, some thinkers have proposed the doctrine of soul sleep, which is incompatible with the doctrine of an intermediate state. The doctrine of soul sleep is the claim that when a person dies he or she is unconscious until he or she is resurrected. This contradicts the doctrine of an intermediate state, which holds that the believer is aware and mentally active during the time between death and the receiving of the resurrection body.

The philosophical upshot of the doctrine of an intermediate state is that some philosophers think it entails mind-body dualism. This is one of the major arguments of John W. Cooper’s Body, Soul & Life Everlasting. In the book he argues that there are only three options given in the New Testament. The first is the view that there is an intermediate state (which, according to Cooper, implies dualism). The second is the view that resurrection does not happen at any future time, and thus that when it does happen (say, outside our normal dimension of time) it is “instantaneous.” The third is the view that resurrection occurs after a passage of time here on earth.

Cooper accepts the theological arguments for the claim that there is an intermediate state. Why does he think that an intermediate state entails dualism? It seems to be because he thinks that an intermediate state is necessarily a disembodied state and thus is, by definition, one in which the person exists and is a non-physical entity. If this is the case then mind-body dualism does follow. However, not all scholars accept his contention that a person existing in an intermediate state is disembodied. For example, Baker claims “there is no reason to suppose that the intermediate state (if there is one) is one of disembodiment” (Baker, 1995, p. 498). Cooper, of course, would reject this claim. The reasons he cites mirror the claims made by the proponent of the incompatibility of materialism and CDR. In short, Cooper thinks that there is no coherent way for a material object to be resurrected which is numerically identical to one that previously existed, whether this resurrection occurs in an intermediate state or at the general resurrection.

5. References and Further Reading

  • Baker, L.R. “Need a Christian be a Mind/Body Dualist?” Faith and Philosophy 12 (1995): 489-504.
    • An article which presents the constitution view of persons and which argues that constitutionalism is compatible with the doctrine of the resurrection of the dead.
  • Baker, L.R. Persons and Bodies. New York: Cambridge University Press, 2000.
    • A major work in defense of constitutionalism.
  • Baker, L.R. “Persons and the metaphysics of resurrection.” Religious Studies, 43 (2007): 333–48.
    • An article which defends the constitution view of resurrection and touches on many of the other views discussed in this entry.
  • Bynum, C.W. The Resurrection of the Body in Western Christianity, 200-1336. New York: Columbia University Press, 1995.
    • A study of the doctrine of the resurrection of the dead in the early and medieval church.
  • Cooper, J.W. Body, Soul, & Life Everlasting. Grand Rapids, Michigan: Eerdmans Publishing Company, 1989.
    • A book that argues for mind-body dualism based on the doctrine of the intermediate state. It includes a detailed study of the Old and New Testament accounts of the mind-body distinction and the doctrine of the resurrection.
  • Copleston, F. A History of Philosophy, Volume II: Medieval Philosophy. New York: Doubleday, 1993.
    • A good historical overview of medieval philosophy which includes details about Augustine, Bonaventure, and Aquinas and their views on resurrection and the relationship between the body and the soul.
  • Corcoran, Kevin J. “Persons and Bodies.” Faith and Philosophy 15 (1998): 324-340.
    • An article that combines constitutionalism and the falling elevator model.
  • Craig, W. L. The Son Rises: The Historical Evidence for the Resurrection of Jesus. Chicago: Moody, 1981.
    • An apologetic work in favor of the thesis that Jesus rose from the dead.
  • Grudem, W. Systematic Theology: An Introduction to Biblical Doctrine. Grand Rapids, Michigan: Zondervan Publishing House, 1994. 810-839, 1109-1139.
    • A well organized systematic theology that contains references to many different religious traditions and creeds. Grudem is a conservative theologian and gives a clear, if not exhaustive, argument for traditional doctrines.
  • Habermas, G., Flew, A., and Miethe, T. Did Jesus Rise From the Dead? The Resurrection Debate. New York: Harper and Row, 1987.
    • Perspectives on whether or not Jesus did rise from the dead for a non-technical reader.
  • Hasker, W. “Emergentism.” Religious Studies 18 (1982): 473-488.
    • A defense of emergentism. Additionally, Hasker argues that the doctrine of resurrection makes dualism more attractive than materialism.
  • Hick, J. Philosophy of Religion. Englewood Cliffs, New Jersey: Prentice-Hall, 1973. 97-117.
    • Hick arguably defends the replica model of resurrection. Additionally, there is a chapter on non-Christian accounts of life after death.
  • Hudson, H. A Materialist Metaphysics of the Human Person. Ithaca: Cornell University Press, 2001.
    • A defense of perdurantism and the falling elevator model of resurrection.
  • Leslie, John. Immortality Defended. Malden, Massachusetts: Blackwell Publishing, 2007.
    • A book that defends a theistic (not Christian) view of resurrection that is notable for its use of modern physics and incorporation of eastern philosophy.
  • Merricks, T. “There are No Criteria of Identity Over Time.” Noûs 32 (1998): 106-124.
    • A technical defense of anti-criterialism.
  • Merricks, T. “The Resurrection of the Body and the Life Everlasting.” Reason for the Hope Within, ed. Michael Murray. Grand Rapids, Michigan: Eerdmans Publishing Company, 1999. 261-286.
    • A discussion of different accounts of resurrection and an argument for the claim that the doctrine of the resurrection provides support for materialism.
  • Perry, J. A Dialogue on Personal Identity and Immortality. Indianapolis: Hackett, 1978.
    • A good introduction to the philosophical problems surrounding resurrection. Written in dialogue form.
  • Plato, Phaedo. Translated by G.M.A Grube. Indianapolis: Hackett, 1977.
    • A fine translation of Plato’s work on the immortality of the soul.
  • Swinburne, R. The Evolution of the Soul. New York: Oxford, 1986.
    • A defense of Cartesian dualism that has a chapter on the future of the soul.
  • Van Inwagen, P. “The Possibility of Resurrection.” The Possibility of Resurrection and Other Essays in Christian Apologetics. Boulder, Colorado: Westview Press, 1998. 45-52.
    • A reprint of van Inwagen’s older article which defends the simulacra view. This version contains a significant postscript.
  • Zimmerman, D. “The Compatibility of Materialism and Survival: The ‘Falling Elevator’ Model.” Faith and Philosophy 16 (1999): 194-212.
    • The origins of the falling elevator model of resurrection.
  • Zimmerman, D. “Criteria of Identity and the ‘Identity Mystics’.” Erkenntnis 48 (1998): 281-301.
    • A discussion of criterialism.

Author Information

Jeff Green
Houston Baptist University
U. S. A.


In philosophy and theology, the word "hell," in its most general sense, refers to some kind of bad post-mortem state. The English word is apparently derived from an Indo-European word meaning "to cover," which is associated with burial and, by extension, with a "place of the dead." Accounts of hell’s nature address three dimensions:

  • The duration of hell: is it temporary or permanent?
  • The felt quality of hell: is it a state of consciousness, or lack of consciousness? If the former, what is it like to be in hell?
  • The purpose of hell: why do some people go there?

Some Eastern religions teach that after death, people suffer conscious punishment for their sins before eventually being reincarnated. However, this ‘temporary hell’ plays a relatively peripheral role in these religions, which aim primarily at escaping the cycle of rebirth altogether. Therefore, this article concentrates on philosophical issues surrounding the doctrine of hell as it has arisen in the theistic religions of Judaism, Christianity, and Islam. In these, hell is central to traditional eschatological teachings about a last judgment. This is the culminating event of history, in which God bodily resurrects the dead and separates the righteous or saved (those with love for or faith in God) from the wicked, admitting the saved to some kind of heaven or paradise, and damning the wicked to a permanent hell.

Section One explains several alternative understandings of what hell is like. On the traditional Christian model of hell, articulated by some of the West’s most historically significant philosophers and theologians, hell involves permanent, conscious suffering for the purpose of punishing human sin. According to annihilationism, the damned ultimately cease to exist and so are not conscious. According to the free will view of hell, the purpose of hell is to respect the choice of the damned not to be with God in heaven. Finally, according to universalism, there is either no hell at all, or only a temporary hell. Section Two considers the ‘problem of hell’ (which is a particular form of the general philosophical problem of evil): if, as theistic religions traditionally have taught, God is all-powerful, all-knowing and completely good, it seems morally and logically impossible that God would allow anyone to be utterly and ineradicably ruined, as the damned in hell would seem to be. Advocates of the traditional view normally respond to this problem by claiming that hell is a function of impartial divine justice; this line of response is explored in Section Three. Finally, Section Four explains how the free will view deals with the problem of hell.

Table of Contents

  1. The Nature of Hell
    1. The Traditional View
      1. The Literal View
      2. Psychological Views
        1. Harsh Psychological View
        2. Mild Psychological View
    2. Annihilationism
    3. Free Will View
    4. Universalism
  2. The Problem of Hell
  3. Hell and Justice
  4. Hell and Freedom
  5. References and Further Reading

1. The Nature of Hell

a. The Traditional View

The Tanakh/Bible contains various images of the last judgment. One striking picture in Hebrew scripture occurs at the end of Isaiah (66:22-24). [Quotations from the Bible are from the New Revised Standard Version.] Faithful Jews, who will “remain before” God in a prosperous “new heavens and new earth,” “shall go out and look at the dead bodies of the people who have rebelled against [God], for their worm shall not die, their fire shall not be quenched, and they shall be an abhorrence to all flesh.” In the Gospel of Mark (9:48), Jesus appropriates this imagery in describing hell as a place “where their worm never dies, and the fire is never quenched.” In the Gospel of Matthew (25:31-46), Jesus teaches that at the last judgment, those who failed to care for “the least of my family” will “go away into eternal punishment,” which is “the eternal fire prepared for the devil and his angels.” Elsewhere in Matthew (8:12, 22:13, 24:51, and 25:30), Jesus invokes a rather different image, suggesting that hell is “outer darkness” (that is, outside heaven) “where there will be weeping and gnashing of teeth.” He teaches that many will seek to enter heaven but be shut out (Luke 13:22-30), suggesting that there is no way to escape from hell once there. Finally, the Christian Bible’s closing book (Revelation 20:7-15) describes the devil, along with Death, Hades, and “anyone whose name was not found written in the book of life,” being cast “into the lake of fire and sulfur . . . and they will be tormented day and night forever and ever.” The Qur’an teaches that hell is “a prison-house” (17:8) in which “those who disbelieve and act unjustly . . . shall remain forever” (4:168) to receive “a sufficient recompense” (9:68) for their sins. There they will “…burn in hellfire. No sooner will their skins be consumed than [God] shall give them other skins, so that they may truly taste” divine wrath (4:55). [Quotations from the Qur’an are from the translation by N. J. Dawood (Penguin Books, 1974).]

Reflection on these scriptural images has given rise to the traditional view of hell. The passage from Isaiah, in which the residents of hell are dead bodies, suggests that hell is a state of unconscious existence, or perhaps even non-existence. While some of the Gospel passages may fit with this view, the ones about weeping and gnashing of teeth seem to suggest instead that the residents of hell are conscious of their bad condition. Furthermore, the passages from Revelation and the Qur’an suggest that the denizens of hell experience torment (extreme conscious suffering). So, on the traditional view, the felt quality of hell is suffering (this implies that the damned exist and are conscious), and its purpose is to punish those who have failed to live faithfully in this life. With respect to duration, the traditional view teaches that the suffering of hell is not only permanent but necessarily permanent: there is no possible way for the damned to escape hell once there, since damnation is an irreversible consequence of their sins. Different versions of the traditional view spring from different understandings of the suffering involved in hell.

i. The Literal View

In the harshest version – which takes much of the scriptural imagery literally – hell involves extreme forms of both mental and physical suffering. On the Day of Judgment, the dead will all be physically resurrected, and the bodies of the damned will be consigned to a literal lake of fire. According to Augustine, this fire will cause a physical agony of burning, but will not consume the flesh of the damned, so that their agony will never end. The damned will also suffer psychologically: their most powerful desire will be to escape from hell, but they will realize that escape is impossible, and so will experience not only frustration, but despair. Moreover, as Augustine puts it, they will be “tortured with a fruitless repentance” (Book 9). Realizing that their own actions have placed them in this miserable position, they will be filled with regret and self-loathing.

ii. Psychological Views

Some traditionalists object that the literal view of hell, as a place of physical torment, presents God as sadistic. They prefer to see the scriptural images of fire, darkness, and so forth as potent symbols or metaphors for the psychological suffering of hell. Because humans were made for God, their most fundamental desire (whether they consciously acknowledge it or not) is to enjoy eternal union with God. As a state of eternal separation from God, hell would frustrate this central human desire. Therefore, even if the damned felt physical pleasure, they would still experience psychological suffering: frustration, despair, regret, and self-loathing. This ‘psychological suffering only’ view of hell can be further subdivided into harsher and milder views concerning the extent to which the damned suffer.

1) Harsh Psychological View

On the harsher view of psychological suffering, the torments of hell will cause the damned to see clearly, perhaps for the first time, that they truly desire union with God. Although this epiphany will bring them to genuine repentance and willingness to obey God, it will be ‘too late’ for them to enter heaven, for hell is necessarily an eternal state, from which there is no escape. On the harsh model, the damned really want to leave hell, but cannot.

Although this view fits with some scriptural imagery noted above (in which people try to enter heaven and are turned away), it is difficult to reconcile with the idea that God loves all people, including the damned. It would seem that a truly repentant denizen of hell would have attained the very same psychological state of love for God that the blessed in heaven enjoy. Therefore, it is hard to imagine that a loving God would want to keep such a person in hell (and to suggest that God might want to admit such people to heaven, but be unable to do so, would be to deny God’s omnipotence).

Against this objection, some may argue that God does not in fact love all people, but only the elect, who are predestined for salvation. Others might point out that heaven is a reward for loving God in the unclear conditions of mortal life; those repenting only after God has made things clear to them would fail to merit heaven in the same way as the blessed (however, this argument would be difficult for many Christians to make, given their stress on the importance of divine grace, rather than individual merit, in the process of salvation).

2) Mild Psychological View

In the milder view of psychological suffering, while the damned may have a desire to leave hell and enter heaven, they would also wish to remain as they are: self-obsessed, morally vicious, etc. This view contends that the damned continually act on their desire to remain the same, and so are unwilling to repent and submit to God. If they seek to enter into heaven, it is only on their own terms. This is a ‘mild’ version of hell because, though the damned suffer in hell, they do not suffer badly enough to want (all things considered) to leave. In their vicious state, they could not enjoy union with God, and so prefer hell.

The mild view is easier to reconcile with the idea that God loves even the damned; if a denizen of hell were to genuinely repent, God would admit such a person to heaven. Thus, hell will be a permanent state for the damned only because they will never repent. There are two ways to explain why the damned will refuse to repent.

First, they may be unable to repent, because they have lost their freedom to choose what is truly good. In this case, hell is necessarily eternal; it is not possible for the damned to escape from hell once they arrive there. Second, the damned may be able to repent, but remain eternally unwilling to do so. That is, while the damned will actually remain in hell for all eternity, it is possible for their stay in hell to be temporary, since they could repent and be admitted to heaven.

This second explanation of eternal damnation is actually a departure from the traditional view of hell. As noted above, the traditional view teaches that the duration of hell is necessarily eternal because it is not possible for the damned to escape. This second view, according to which hell is eternal, but not necessarily eternal, is discussed here only because it is so close to the traditional view and does not have a widely accepted label. [The closest thing to an established label comes from Kvanvig (1993), which uses the term “second chance theory of hell” for any view denying that it is impossible for the damned to escape hell. See pages 71-73.]

It could be objected that on either version of the mild view, hell is not a form of punishment because it is not imposed on the damned against their will. However, it does not seem that all punishment must be contrary to the will of its recipient. It seems rather that punishment is a negative consequence demanded by justice, regardless of whether or not the one punished wishes to be punished. For example, if justice demands that God remove the ability of the damned to repent, then this removal would seem to be a form of punishment (one which shapes, rather than opposes, the wills of the damned).

b. Annihilationism

Annihilationism (also known as the ‘conditional immortality’ view) teaches that ultimately the damned cease to exist, and so are not conscious for all eternity. Whereas the traditional view is comprehensive in the sense that it specifies the purpose, duration, and the felt quality of hell, annihilationism is a thesis only about the last of these categories. Therefore, it is possible for annihilationists to take different positions on the overall nature of hell. They normally assume that once God annihilates a person, she will never again come into existence; annihilation is a permanent state. However, annihilationists disagree about God’s reason for annihilating the damned. Many see annihilation as retributive punishment for sin, while others think that God annihilates the damned out of love for them (this will be discussed further in section four).

According to annihilationism, the ultimate fate of the damned does not involve suffering (because it is a state of non-existence). However, it is open to annihilationists to assert that God puts the damned through a period of conscious suffering (enough, perhaps, to ‘pay them back’ for their sins) before finally snuffing out their existence. Descriptions of this temporary conscious suffering could vary in harshness along the lines described above for the traditional view.

The best argument for annihilationism derives from the traditional theistic doctrine of divine conservation: all things depend on God to conserve their existence from moment to moment, and so exist only so long as they are connected to God in some way. But if hell is complete and utter separation or disconnection from God, then hell would be a state of non-existence. Against annihilationism, some would object that it is contrary to God’s creative nature to annihilate anything (this will be discussed further in section four).

c. Free Will View

The free will view is primarily a thesis about the purpose of hell. It teaches that God places the damned in hell not to punish them, but to honor the choices they have freely made. On this view, hell originates not so much from divine justice as from divine love.

According to the free will view, one of God’s purposes in creation is to establish genuine love-based relationships between God and humans, and within the human community. But love is a relation that can exist only between people who are genuinely free. Therefore, God gives people freedom in this life to decide for themselves whether or not they will reciprocate God’s love by becoming the people God created them to be. People freely choose how they act, and through these choices they shape their moral character (a collection of stable tendencies to think, feel, and act, in certain ways). Those who develop a vicious character suffer psychologically, both in this life and in the life to come, for in the afterlife, people will keep the character they have developed in this life. So the suffering of hell consists (at the least) in living with one’s own bad character.

The question may arise: Why does God not simply alter the character of vicious people after they die, so that they become virtuous and God-loving denizens of heaven? Some would argue that such alteration would be too radical to preserve personal identity over time: the person admitted to heaven, though in many ways similar to the original vicious person, would not be numerically the same person because of serious differences in moral character; in altering the vicious person, God would be, in effect, annihilating her and replacing her with a numerically distinct virtuous counterpart. Against this argument, it could be claimed that even if instantaneous transformation would undermine personal identity, an omnipotent God could surely transform vicious people through a more gradual process that preserves personal identity. But even if it is possible, adherents of the free will view would consider such divinely-engineered transformation deeply inconsistent with the divine plan. For if God remade vicious people into saints, the humans’ new attitude toward God would not be truly their own, thus removing the genuineness of the love relationship between God and creature.

The free will view’s emphasis on character formation leads quite naturally to the Roman Catholic doctrine of purgatory. Because of their bad character, vicious people cannot have an afterlife entirely devoid of suffering. Those in purgatory, though initially vicious, are able and willing to repent, freely receiving a good character from God; therefore their suffering is temporary and they eventually enter into heaven. Those in hell, on the other hand, are either unable or unwilling to repent; the only afterlife God can give such people is an afterlife of self-inflicted suffering.

One pressing question for the free will view is why God gives the damned an afterlife at all, rather than simply letting them cease to exist at death (a version of annihilationism). This, and other objections to the free will view, will be discussed in section four.

Like annihilationism, the free will view is not a comprehensive view of hell, and so is subject to variation. It can be combined with either the claim that the damned suffer consciously for all eternity, or the claim that they are (eventually) annihilated. Another point of variation concerns post-mortem freedom: some teach that the damned have the ability after death to continue freely choosing and shaping their character, while others claim that the damned are locked into their vicious characters, unable to change.

d. Universalism

Strictly speaking, universalism is not a view of what hell is like, but it is nevertheless an important view relevant to any discussion of hell. Universalism teaches that all people will ultimately be with God in heaven. There are two main versions of the view. According to necessary universalism, it is not possible for anyone to be eternally separated from God; necessarily, all are saved. According to contingent universalism, while it is possible that people could use their free will to reject God forever, no one will actually do this; eventually, everyone will say yes to God’s love. While it would be consistent with the basic universalist thesis to say that all people go immediately to heaven upon death, most universalists (in an effort to incorporate scriptural warnings about hell) insist that many people will undergo a temporary period of post-mortem suffering before entering heaven. This period of suffering, which could be seen as a temporary hell or as a kind of purgatory, could be motivated either by divine justice, as in the traditional view of hell, or by divine love, as in the free will view.

2. The Problem of Hell

Atheists have leveled two different ‘arguments from evil’ against the existence of God (see Evil, Evidential Problem of, and Evil, Logical Problem of). According to the evidential argument from evil, we would not expect a world created by a necessarily omnipotent, omniscient, morally perfect being (that is, an ‘omniperfect’ God) to contain suffering of the kinds and amounts that we actually experience; therefore, though the suffering (i.e. evil) we see does not logically imply the non-existence of an omniperfect God, it does count as evidence against God’s existence. According to the logical argument from evil, it is not even logically possible for an omniperfect God to coexist with evil. Given the evident existence of evil, it is impossible for there to be an omniperfect God. Furthermore, since religious belief systems normally assert the existence of both God and evil, they are internally incoherent.

The problem of hell is a version of the logical problem of evil, and can be stated thus:

(1) An omniperfect God would not damn anyone to hell without having a morally sufficient reason (that is, a very good reason based on moral considerations) to do so.

(2) It is not possible for God to have a morally sufficient reason to damn anyone.

(3) Therefore, it is not possible for God to damn anyone to hell.

This argument concludes that if there is an omniperfect God—one that necessarily has the perfection of Goodness—then no one will be damned. Therefore traditional theological systems, which insist on both damnation and God’s omniperfection, are incoherent and must be revised. Theologians must give up either the doctrine of damnation or the traditional understanding of God as omniperfect.

In light of the above argument, those who retain their belief in God’s omniperfection have two options: embrace necessary universalism, or challenge the soundness of the argument. The argument is valid, so those who wish to reject it must deny one of its premises.

The argument’s first premise seems to follow from the nature of the relevant divine attributes. To say that a being is morally perfect is (in part) to say that such a being would not want any suffering to occur unless there were a morally sufficient reason for it to occur. God’s omnipotence and omniscience imply that God has knowledge and power sufficient to ensure that things happen only if God wants them to happen. So it seems that a perfectly good, omnipotent, and omniscient being would not allow suffering – particularly of the extreme sort associated with damnation – unless there were a very good moral justification for allowing it.

The second premise of the argument is much more controversial, however. Anti-universalists (i.e. those who affirm both divine omniperfection and damnation) have denied the premise in two different ways. The first is simply to deny that, given our finite minds, we can be sure that (2) is true. Is it not at least possible for God to have a morally sufficient reason for allowing damnation? Perhaps there is some great good (which we cannot now, and perhaps never will, grasp) that God cannot realize without the damnation of souls. Leibniz (c. 1672) suggests one possible example of such a good: the overall perfection of the universe. It may be that God brings about the damnation of some because preventing their damnation would have made the overall story of the universe less good. While a view such as Leibniz’s may be appealing to moral utilitarians, people with more Kantian moral intuitions will object that a God who pursues the perfection of the universe (or any other unseen good) at the expense of the damned is not morally perfect at all, but is instead using the damned as a mere means to divine ends (see Kant’s Ethics).

Second, anti-universalists can claim that (2) is certainly false because we know of a morally sufficient reason for God to allow damnation. They have proposed two such reasons. The first, and historically the most popular, is justice: if God failed to damn the wicked, God would be acting unjustly—acting in collusion with the wicked—and so would be morally imperfect. The second, more popular in the last century, is freedom: if God necessitated the salvation of everyone, then God would be removing human freedom to say “no” to God in an ultimate way, and consequently the value of saying “yes” to God would be significantly diminished.

3. Hell and Justice

Many defenders of the traditional view of hell claim that though God is loving, God is also just, and justice demands the eternal punishment of those who sin against God. However, others often object that far from demanding damnation, justice would prohibit it, since there would be a discrepancy between the temporary, finite crimes committed by the sinner and the everlasting, infinite punishment inflicted by God. Some see such reasoning as favoring annihilationism: if hell is punishment, then it must involve (at most) a finite amount of conscious suffering followed by annihilation. On the other hand, capital punishment (the earthly analogue of annihilation) is usually considered a more serious punishment than life imprisonment without parole (which could be considered analogous to eternal conscious punishment).

The following ‘infinite seriousness’ argument aims to show that justice not only permits God to damn some (contra the objection above), but actually demands it.

(4) Other things being equal, the seriousness of a crime increases as the status (the degree of importance or value) of its victim increases.

(5) God has an infinitely high status.

(6) Therefore, crimes against God are infinitely serious (from (4) and (5)).

(7) All sin is a crime against God.

(8) Therefore, all sin is infinitely serious (from (6) and (7)).

(9) The more serious a crime is, the more serious its punishment should be.

(10) Therefore, all sin should receive an infinitely serious punishment (from (8) and (9)).

Premise (9) is relatively uncontroversial, because it seems to be just cashing out part of what we mean when we talk about the “seriousness” of a crime. To say that a crime is not serious is (in part) to say that it does not merit a serious punishment; to say that a crime is moderately serious is to say that it deserves a moderately severe penalty, and so on. Premise (5) is also uncontroversial, since an infinitely perfect being would seem to have infinite value and importance. However, some of the other premises of the infinite seriousness argument are subject to dispute.

At first glance, (7) may seem false: how can Smith’s theft of Jones’ wallet wrong God, especially if Smith is unaware of God’s existence and so cannot intend the theft to be directed against God? However, many believe that when one person is sufficiently precious to, and dependent upon, another, a wrong committed against the first person automatically wrongs the second. For example, harm done to an infant is arguably also harm done to the infant’s mother. But if all things depend on God for their continued existence, and all people are precious to God, then by the same principle it would seem that God is wronged by all sin, even if the sinner does not intend to wrong God.

Premise (4), which claims that the seriousness of a crime is a function not only of the nature of the crime itself and the harm it causes, but also of the status of the victim(s) wronged by the crime, seems to fit with some widely shared moral intuitions. For example, other things being equal, killing a human (a higher status victim) seems to be a much more serious crime than killing a neighbor’s dog (a lower status victim). However, when the harm against a victim is indirect (e.g., by means of harming someone precious to the victim), it is not clear that the victim’s status is relevant to the seriousness of the crime. Other things being equal, killing a saint’s best friend seems no worse than killing a criminal’s, even though the saint would arguably enjoy a higher status. On the other hand, this may not be a genuine counterexample to premise (4), because saints and criminals are both of the same natural kind (humanity); perhaps all the infinite seriousness argument needs is a principle according to which harms against beings of more ontologically perfect kinds are more serious than harms against beings of less perfect kinds.

Finally, as Jonathan Kvanvig (1993) notes, factors such as the criminal’s intentions are relevant to determining the appropriate degree of punishment for a crime. For example, premeditated murder is normally considered more serious than murder committed in a fit of passion. Therefore, it seems that not all sin deserves the same degree of punishment, even if all sin is against God. Insofar as damnation would inflict the same punishment (eternal separation from God) for all sin, it would be fundamentally unjust. This objection would seem to vitiate even annihilationist conceptions of hell, if they see annihilation as punishment. In response, it could be suggested that although all the damned are given an infinitely lengthy punishment, more serious criminals are placed in harsher conditions. Or perhaps it could be claimed that although not all sins deserve infinite punishment, everyone commits at least one infinitely serious sin at some point in life, and so would deserve infinite punishment.

Even if the infinite seriousness argument is sound, the idea of divine mercy creates difficulties for a defense of the traditional view of damnation, as follows. Suppose that every person deserves damnation. Theistic religions teach that God is willing to forgive the sins of the faithful, so that they will not receive their just punishment. But if God is able and willing to forgo the punishment in one case, why not in all cases? There are two main (seemingly incompatible) responses to this question. Some claim that if God were to forgive everyone, this would display God’s mercy, but not God’s justice. Therefore, because God seeks to reveal all the divine attributes, God cannot will the salvation of all. Others insist that although God is willing to forgive everybody, not everyone is willing to ask for, or accept, God’s forgiveness, resulting in self-inflicted retribution.

4. Hell and Freedom

Because the traditional view of hell understands the purpose of damnation to be retribution for sin, it would seem to stand or fall with the infinite seriousness argument. As discussed at the end of section one, however, those who see hell as an expression of divine love have proposed an entirely different morally sufficient reason for God to allow damnation: respect for freedom. On the free will view, damnation is the only possible way for God to honor the freedom of the damned. To force sinners into heaven against their wills would not, on this view, be an act of divine love. Instead, God respects human autonomy by allowing us to shape our character through our own free choices, and by refusing to unilaterally change the character we have chosen; if in this life we freely develop into morally vicious and miserable people, then that is how God will allow us to remain for eternity.

But if the only possible eternity open to the damned is one of fundamental ruin and despair, why would God give them a never-ending afterlife? Would it not be more loving of God to let the damned cease to exist at death (or, if justice demands it, after a temporary post-mortem period of punishment)? The two main versions of the free will view require different lines of response to this question. Those who deny post-mortem freedom might insist that only the guaranteed existence of an eternal afterlife (good or bad) can render our ante-mortem choices truly momentous. Therefore, to guarantee the importance of our earthly freedom, God must give an afterlife to everyone. For those who affirm post-mortem freedom, God gives the damned a never-ending afterlife (at least in part) so that they can continue to choose whether to accept or reject God’s love. Indeed, some who defend the free will view suggest that because our earthly freedom and knowledge with respect to God are often very limited (indeed, because God’s very existence is not evident to many), no one would be in a position to make a truly decisive choice for or against God until the afterlife, in a situation where the agent had a clearer understanding of what was at stake. The subsequent discussion will focus on versions of the free will view that posit post-mortem choice.

The free will view assumes an incompatibilist account of free will, according to which a person is genuinely free with respect to her choices only if she (or an event involving her) is the ultimate causal determinant of those choices. Therefore, if God causally determined the denizens of hell to repent, then God—rather than the humans—would be the ultimate determining cause of the repentance, and the humans would not be the agents of their own repentance. Those who hold the compatibilist view concerning free will and determinism claim that free actions can be causally predetermined, as long as the chain of causes runs through the will and intellect of the free agent in an appropriate way. If compatibilism is correct, then God could determine everyone to enter heaven freely, by first causing them to desire heaven enough to repent. Therefore, in claiming that God cannot both (1) give creatures genuine freedom and (2) guarantee that all will be saved, the free will view relies on incompatibilism, which is a very controversial view. For more on the compatibilist/incompatibilist controversy, see the entry on Free Will.

Even if an incompatibilist notion of freedom is taken for granted, it is not clear that the desire to honor human free choices would provide God with a morally sufficient reason to allow damnation. To see why, consider an analogous human situation. Perhaps parents should, out of respect for their children’s freedom, allow them to harm themselves in relatively insignificant ways. But as the degree of self-harm increases, it becomes less and less clear that non-intervention is the loving parental policy. Could it ever be truly loving to allow one’s child to, say, commit suicide? If the child were very young, or did not clearly understand the nature or consequences of her choice, then it would seem clearly wrong for the parent not to do everything in her power to stop the suicide. But if the child is both fully mature and fully cognizant of her choice and its ramifications, then some would consider parental intervention a violation of the child’s rightful autonomy. Insofar as the free will view appeals to God’s respect for the freedom and autonomy of the damned, it seems to conceive of the damned as related to God in something like the way an adult child is related to a parent. Those who see humans as more like infants in relation to God – because of the vast gap between divine and human power – will probably not be persuaded by the free will view.

Another possible objection to the free will view concerns the relationship between freedom and rationality. Free choices, if they are to have any real value, must be more than simply random or uncaused events—they must be explicable in terms of reasons. Free action must be a species of rational action. But there seems to be no reason to choose eternal suffering (or non-existence) over an eternity of bliss. The choice to remain in hell would be utterly irrational, and so could not count as a genuinely free choice. Defenders of the free will view would likely counter this objection by distinguishing between objective and subjective reasons. If people amass enough false beliefs, then what is in fact bad or harmful can seem good or beneficial to them. So perhaps the choice to remain in hell, while admittedly not objectively rational, could be motivated by the damned person’s subjective reasons (that is, by how things seem to him or her). Even if this line of defense is successful, it leaves open questions about the value of freedom in such cases: is it really a good thing for agents to have the power to act in ways that bring about their own objective ruin?

Although the freedom view does not rule out the traditional picture of hell as eternal existence apart from God, some would argue that it requires openness to other possibilities as well. What would happen, for example, if the damned hated God to such an extent that they would prefer non-existence to retaining even the slightest dependence on God? It would seem that God as depicted in the free will view would (out of respect for the freedom of the damned) give them what they wished for, unless there were a good reason not to. Thus, in the freedom view it would seem possible that the damned may end in annihilation. Hell would then be disjunctive: it could involve eternal conscious suffering or annihilation. Advocates of the free will view who favor a more traditional conception of hell can respond to the foregoing argument by positing some reason for God not to honor a damned person’s choice for annihilation. Here are four possible responses.

First, some suggest that souls, once created, are intrinsically immortal, and cannot be destroyed even by God. Most theists would not find this suggestion plausible, however, because it seems to do away with divine omnipotence.

Second, perhaps annihilating the damned would violate God’s moral principles. According to Stump (1986), Aquinas believed that being and goodness are convertible, and so considered morality to require that God never destroy a being unless doing so would promote an even greater level of being/goodness. Since annihilating a damned soul would decrease being without a compensating increase in being elsewhere in the universe, God is morally bound not to do it. This view could be criticized (as was Leibniz’s view above) for giving insufficient weight to the idea that God is first and foremost good to individuals, and only secondarily concerned with abstract issues like the amount of being in the universe.

Third, God might refuse to annihilate the damned because it is better for them (regardless of global considerations) to go on existing, because existence itself is a significant good for those who enjoy it. On the other hand, in using phrases like “a fate worse than death,” people seem to presuppose that the goodness of existence can be outweighed by negative features of existence. Therefore, if the sufferings of hell are serious enough, they could make continued existence there even worse for the damned than non-existence. So whether we consider this third suggestion (that eternal conscious separation from God is better for the damned than annihilation) to be plausible will depend on how bad we consider non-existence to be, and how bad we consider the felt quality of hell to be.

Fourth, God might refuse to annihilate the damned out of hope. This claim could be endorsed even by those who believe that an eternity of conscious separation from God would be worse than non-existence. We would think it right to interfere in the attempted suicide of a young person with temporary depression, because of her hope for a brighter future. Similarly, it would seem right for God to keep the damned in existence (even if this existence is temporarily worse than non-existence for them) if there were some hope that they might repent. Out of respect for freedom, God would not unilaterally alter the character of the damned so as to cause their repentance, but out of love and hope God would refuse to allow the damned to extinguish the possibility of reconciliation. If God allows the damned to continue in their suffering only out of hope that they may repent, then no one (not even God) can be certain that the damned will go on suffering eternally. For if God knew (through middle knowledge) that the damned would never freely repent, then God would have no reason to prolong their suffering.

For those who favor the fourth explanation over the first three, the freedom view faces a dilemma regarding the eternity of hell. On the one hand, if there is no hope that the damned will repent, God would seem to have no reason not to honor their (possible) choice for annihilation, thus rendering hell (understood as a state of conscious suffering) possibly temporary. On the other hand, if there is hope that a person in hell will repent, then while God would not honor a choice for annihilation, there is still the possibility for hell to be temporary, since a person who fully repented would eventually go to heaven. On this latter, hopeful, scenario, hell becomes not a place of everlasting retributive punishment, but a place of indefinitely long therapeutic punishment, aimed at the ultimate reconciliation of sinners with God. While it remains possible that some people will in fact hold out against God forever, on the freedom view the functional role of hell is very similar to that of purgatory in Roman Catholic theology: a state of being aimed at leading a person to heaven, through the removal of character flaws that would prevent her from enjoying beatific intimacy with God. The main difference is that the inhabitants of purgatory are certainly destined to join with God in heaven, while the inhabitants of hell face an uncertain future.

5. References and Further Reading

  • Adams, Marilyn M. (1993) ‘The Problem of Hell: A Problem of Evil for Christians’, in E. Stump (ed.) Reasoned Faith, A Festschrift for Norman Kretzmann, Ithaca, NY: Cornell University Press, 301–27.
    • An explanation of the problem of hell, advocating for universalism.
  • Augustine, City of God, Book 21.
    • Articulates and defends a literal version of the traditional Christian view of hell.
  • Crockett, William, ed. (1997) Four Views on Hell. Grand Rapids: Eerdmans Publishing Co.
    • Advocates of the literal view, the psychological view, annihilationism, and purgatory take turns explaining their own views and responding to the views of the others.
  • Kvanvig, Jonathan L. (1993) The Problem of Hell. New York: Oxford University Press.
    • An extremely thorough study of philosophical issues surrounding the problem of hell; argues at length against a retributive model of hell and in favor of love as the divine motivation for hell.
  • Leibniz, G. W. (c. 1672) The Philosopher’s Confession.
    • Proposes a ‘best possible world’ defense of damnation.
  • Lewis, C.S. (1946) The Great Divorce. London: MacMillan.
    • A psychologically astute fictional story about heaven and hell; it assumes something like the free will view.
  • Stump, Eleonore (1986) ‘Dante's Hell, Aquinas's Moral Theory, and the Love of God’, Canadian Journal of Philosophy, 16:181-196.
    • Attributes a version of the free will view to Dante and shows that it can be defended on Aquinas’ moral principles.
  • Swinburne, Richard (1983) ‘A Theodicy of Heaven and Hell’, in A. J. Freddoso (ed.) The Existence and Nature of God, Notre Dame: University of Notre Dame Press, 37-54.
    • An articulation and defense of the free will view highlighting the importance of character formation; considers annihilation as well as eternal existence as possibilities for the damned.
  • Talbott, Thomas B. (1999) The Inescapable Love of God. Universal Publishers.
    • An extended argument for universalism.
  • Walls, Jerry (1992) Hell: The Logic of Damnation. Notre Dame: University of Notre Dame Press.
    • A defense of the free will view, emphasizing the need for postmortem choice.

Author Information

C. P. Ragland
Saint Louis University
U. S. A.


Autism, or the Autistic Spectrum Disorder, is a developmental psychological disorder that begins in the early stages of infancy and affects a child’s ability to develop social skills and engage in social activities. Three current psychological/philosophical theories attempt to explain autism as the result of certain cognitive deficiencies. Each theory takes a different approach to the autistic disorder and theorizes different causes. While no theory is without its difficulties, each different approach to the autistic disorder has played an important role in developing the philosophical understanding of social cognition.

Autism is roughly four times more prevalent in males than in females. As a disorder, it has existed as a recognized clinical entity for only about sixty years, and recent research indicates that it is more widespread in the population than is currently appreciated. Persons with autism show various difficulties in social skills, cognitive processing and other co-occurring behavioral and physical problems. The latter include repetitive movements such as hand-waving or rocking, self-injurious behavior (in cases of extreme autism) and problems with digestion. Autism has become a nationwide issue, with numerous support groups, websites and research programs, and it has also become influential in many discussions within philosophical psychology.

Autism has played a strong ancillary role in many debates concerning social cognition, its development and its structure. Because persons with autism lack basic abilities to think about others, understanding autism may give us a window into much or all of social cognition. Analogous to the role that lesion studies and other neuropsychological disorders play in our understanding of cognition, brain structure and function, and neural organization, the study of autism, with its specific constellation of behavioral and cognitive deficits, may highlight the structure, development and nature of social cognition in general.

This article begins with the clinical definition of autism from the DSM-IV, then discusses the role autism has played in three main theories of cognition: Theory of Mind (hereafter ToM), Simulation Theory and the Executive Control or Metacognitive theory. Finally, there is a brief discussion of the role autism still plays in understanding social cognition.

Table of Contents

  1. The Clinical Properties of Autism
  2. Autism and Theory of Mind
  3. Executive Control/Metacognitive Approaches to Autism
  4. Autism and Simulation Theory
  5. Conclusion
  6. References and Further Reading

1. The Clinical Properties of Autism

Persons with autism show severely diminished or abnormal social interaction and communication, as well as a restricted repertoire of activities and interests (DSM-IV, p. 66). These symptoms can be mild, such as impairment in certain nonverbal behaviors like eye-to-eye gaze and gestures, or more serious, extending to a lack of all reciprocal social interaction and large impairments in language development and use. The autistic child may lack close social ties or the ability to form normal friendships with other children, and may prefer to play alone rather than with others.

The DSM-IV provides the following checklist as a guide to diagnosing autism:

A. A total of six (or more) items from (1), (2), and (3), with at least two from (1), and one each from (2) and (3):

  1. qualitative impairment in social interaction, as manifested by at least two of the following:

    (a) marked impairment in the use of multiple nonverbal behaviors such as eye-to-eye gaze, facial expression, body postures, and gestures to regulate social interaction

    (b) failure to develop peer relationships appropriate to developmental level

    (c) a lack of spontaneous seeking to share enjoyment, interests, or achievements with other people (e.g., lack of showing, bringing, or pointing out objects of interest)

    (d) lack of social reciprocity

  2. qualitative impairments in communication as manifested in at least one of the following:

    (a) delay in, or total lack of, the development of spoken language (not accompanied by an attempt to compensate through alternative modes of communication such as gesture or mime)

    (b) in individuals with adequate speech, marked impairment in the ability to initiate or sustain a conversation with others

    (c) stereotyped and repetitive use of language or idiosyncratic language

    (d) lack of varied, spontaneous make-believe play or social imitative play appropriate to developmental level

  3. restricted repetitive and stereotyped patterns of behavior, interests, and activities, as manifested by at least one of the following:

    (a) encompassing preoccupation with one or more stereotyped and restricted patterns of interest that is abnormal in either intensity or focus

    (b) apparently inflexible adherence to specific, nonfunctional routines or rituals

    (c) stereotyped and repetitive motor mannerisms (e.g., hand or finger flapping or twisting, or complex body movements)

    (d) persistent preoccupation with parts or objects

B. Delays or abnormal functioning in at least one of the following areas, with onset prior to age three years: (1) social interaction, (2) language as used in social communication, or (3) symbolic or imaginative play.

C. The disturbance is not better accounted for by Rett’s Disorder or Childhood Disintegrative Disorder.

These guidelines intentionally lack specificity in order to accommodate the wide variety of symptoms and severity found in cases of autism. One of the better-known cases of autism is that of Temple Grandin, who holds a PhD in animal science and teaches at Colorado State University. Professor Grandin teaches classes and runs her own business; these are not the kinds of accomplishments expected of a person diagnosed with autism. The more stereotypical case is the child who neither communicates with others nor seems to want to leave his or her solitary world. Indeed, autism derives its name from the intense impression of “aloneness” the autistic person gives. Even a brief survey of the literature on autism suffices to show that people diagnosed with the disorder have varying degrees of impairment.

The clinical and diagnostic features of autism are presented here to give the philosophical reader a more direct sense of how clinicians view the disorder. While such details are not typically germane to philosophical discussions, they are important for understanding the disorder.

2. Autism and Theory of Mind

Autism has played an important role in theories of cognition in philosophical psychology. The first approach with which we will deal is the Theory of Mind [ToM] approach to development and its treatment of autism. The phrase “ToM approach” is used here as a general marker for the family of theories that takes our knowledge of other minds to be innate and basic (see Baron-Cohen, 1995; Carruthers, 1996; and Botterill & Carruthers, 1999 for related ToM views on development and autism). Further, the ToM approach often holds that ToM cognition is subtended by modules of a sort. The work of Simon Baron-Cohen is seminal here and is generally taken to be the locus classicus of these approaches.

The following example will help us to better understand the type of socio-cognitive knowledge many theories of social cognition attempt to explain. Imagine two close friends have just come back from a night of trick-or-treating one Halloween and have commenced surveying the candy they received. Sam, being an aficionado of hard candy, begins to gather all those pieces into a pile. Sam’s compatriot Alice, on the other hand, is a connoisseur of chocolate, and he is reminded of this when he sees her collecting all the chocolates into a pile. As Sam separates his candies, he mentions to Alice that he would be willing to trade his chocolates for her hard candies.

This interaction depends upon one person representing to himself the preferences of another. This is precisely the sort of knowledge that ToM research studies. Sam knows that Alice likes chocolate; Alice knows that Sam has chocolates and might be willing to trade. As this example shows, understanding and recognizing the preferences, desires and beliefs of others plays an important role in our interactions.

Baron-Cohen (1995) believes that our ability to mindread, or understand the beliefs and desires of others and how they influence subsequent behavior, is the result of four separate modules/mechanisms working together to produce beliefs about what others know. The mindreading system is broken down into the following four modules: ID (the Intentionality Detector), EDD (the Eye-Direction Detector), SAM (the Shared Attention Mechanism), and ToMM (the Theory of Mind Module/Mechanism). Each of these four mechanisms lines up, roughly, with a class of properties in the world: volition (desires), perception, shared attention, and epistemic states (knowledge and belief).

The first mechanism Baron-Cohen describes is the Intentionality Detector (ID) (Baron-Cohen, 1995, p. 32). The ID is a perceptual device that interprets the motion of objects in terms of primitive volitional mental states like goal and desire. A more general rendering of this sort of interpretation would be “Object wants/desires x.” Humans use this because it makes sense of basic animal behaviors like approach and avoidance. To interpret motion in this way, one needs only two conceptual states: want and goal. The ID is activated whenever there is perceptual input that might be identified as an agent, and we also interpret certain stimuli in touch, sound, and other modalities in an intentional fashion (Baron-Cohen, 1995, p. 36). If we back into something, we may take it to be a person and say “pardon me”; only after verifying that it is not a person do we look around to make sure no one was watching us talk to no one in particular.

The second device is the Eye-Direction Detector (EDD) (Baron-Cohen, 1995, p. 38). The EDD works only through the visual modality. It has three functions: detecting the presence of eyes or eye-like stimuli; computing which direction the eyes are pointing; and inferring that if another organism’s eyes are directed toward a thing, then it sees that thing. It is important on Baron-Cohen’s view that the third function be seen as giving the organism with the EDD the ability to posit mental states about the organism it is viewing: a new mental state, one of knowing or believing that some other creature may have visual access to something, is added to the child’s basic/primitive mental states. The second and third functions are especially important for Baron-Cohen, who holds that it is highly adaptive to be able to make a judgment about another being’s knowledge, such as when a tiger has prey in its sights (see Baron-Cohen, 1995, pp. 32-36). If one calculates that the tiger has its eyes trained on a friend, and one knows that eyes are used to see (an extrapolation from one’s own case via the third function of the EDD), then one should realize that the tiger sees one’s friend and will probably want to attack. This is called a dyadic representation: Agent sees X. The ID and EDD together can form dyadic representations, which are relations between two objects or agents: with the ID one interprets the tiger as an agent, and if the agent sees one’s friend, and eating is among the tiger’s desires, then one might realize that one’s friend is in danger.

The third mechanism we will deal with is the Shared Attention Mechanism, or SAM (Baron-Cohen, 1995, pp. 44-50). The SAM’s sole function is building triadic representations. The triadic representation expresses a relation between object, Self, and agent; put generally: [I-see-(tiger-sees my friend)]. The SAM compares input from the ID and the EDD and forms these triadic representations. Continuing the tiger example, with a slight modification, will help: if you see the tiger prowling (ID), see your friend some yards away, and see that the tiger is in a position to see your friend (EDD), the SAM can now represent that both you and the tiger see your friend. Furthermore, if you know that tigers like to hunt humans, you might then warn your friend of his impending lunch date.

In this scenario the SAM makes the ID’s inference that the tiger has a goal (which one interprets through experience) available to the EDD, which then reads the tiger’s eye direction in terms of its inferred goals. With this information you might surmise that the tiger would, more than likely, eat your buddy, and you may then yell to warn your friend of the danger. With all of this in place we can see that this use of primitive representations could be very adaptive in navigating a world that contains agents engaged in goal-directed activity.

The final mechanism in Baron-Cohen’s architecture is the Theory of Mind Module/Mechanism (ToMM) (Baron-Cohen, 1995, pp. 50-55). The ToMM is a cognitive system that allows the human to posit a wide range of mental states from observed behavior; that is, to employ a theory of mind in parsing the behavior of others. We learn that upon seeing a desired item, ceteris paribus, people will likely try to get that item. We also learn that people can misrepresent the world, and that these false beliefs may lead to behaviors explainable only in terms of the false belief. The ToMM is the mechanism/module we utilize to understand and codify what we learn about mental/epistemic states, and it gives us the ability to represent epistemic states such as believing, pretending, and dreaming. Its final responsibility is to put the various epistemic states together, allowing us to understand how these pieces work together in mental life. The ToMM has a grand job, according to Baron-Cohen: “It has the dual function of representing the set of epistemic mental states and turning all this mentalistic knowledge into a useful theory” (Baron-Cohen, 1995, p. 51).

The ToMM first processes representations of propositional attitudes of the form [Agent-Attitude-“Proposition”], for example, “Selma believes that it is wintry.” This is a different ability from having a mental representation of “It is wintry today,” because one’s belief about Selma is a representation of what one takes her to believe about the world. Having such representations is crucial to the ability to represent epistemic mental states. The ToMM also allows us to infer that a person will attempt to obtain what she desires if she believes she is likely to succeed.
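The [Agent-Attitude-“Proposition”] format lends itself to a simple nested data structure. The sketch below is purely illustrative (the class and field names are mine, not Baron-Cohen’s); the point is that a metarepresentation is just a representation whose content slot holds another representation rather than a plain proposition:

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Attitude:
    agent: str                       # who holds the attitude
    attitude: str                    # e.g., "believes", "sees", "pretends"
    content: Union[str, "Attitude"]  # a plain proposition, or another Attitude

# A first-order representation of the world: just a proposition.
fact = "it is wintry today"

# A propositional-attitude representation: [Agent-Attitude-"Proposition"].
selma = Attitude("Selma", "believes", fact)

# A nested (triadic) representation: [I-see-(tiger-sees-my-friend)].
triadic = Attitude("I", "see", Attitude("tiger", "sees", "my friend"))

print(selma.attitude, "-", selma.content)  # Selma's attitude toward the proposition
print(triadic.content.agent)               # the embedded agent: the tiger
```

On this way of modeling things, the false-belief task amounts to keeping two such structures apart: one’s own representation of the world and another person’s attitude whose embedded content no longer matches it.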

For many ToM researchers, the difficulty persons with autism show with a variety of ToM tasks is evidence for the innate basis of our cognitions about other minds. For example, persons with autism do poorly on the false-belief task. Persons with autism typically use less mental-state attribution in their speech than normally functioning persons and IQ-matched developmentally delayed children, and they also fail to recognize surprise-based emotions in others (Harris, 1989). However, persons with autism show preserved cognitive function in areas as diverse as mathematics, music and mnemonic capacities. These preserved abilities support a dissociation, which furthers the case that ToM knowledge is separate from, and thus likely etiologically different from, other cognitions.

The ToM approach generally takes socio-cognitive knowledge to be innate and highly structured. It is not without its problems, however. Some argue (Fodor, 1998) that the modularity relied upon as a basis for the explanation is not plausible given the nature of modules. Further, persons with autism show a wide range of socio-cognitive abilities (there are both high- and low-functioning persons with autism), which seems to be further evidence against the modular nature of social cognition. As a result, some argue that other theories provide better explanations of the autistic disorder.

3. Executive Control/Metacognitive Approaches to Autism

An alternative to the ToM view of knowledge and development is known as the Executive Control or Metacognitive theory. Executive Control theorists propose that our ability to understand the mental states of others is the result of the development and use of more general cognitive and metacognitive processes such as metarepresentation, the self-monitoring of cognitive activity, and problem solving. Metarepresentation is the mind’s ability to represent a representation, or to have beliefs about beliefs. So, on the Executive Control theory, when one represents to oneself a belief state of someone else (e.g., “I believe my friend sees that my chocolate is in the bowl”), one does so with the understanding that one is representing the belief state of another. According to the Executive Control view, these highly complex cognitions require cognitive resources that develop over time and with practice. Furthermore, the ability to represent the mental states of others is not innate: the metarepresentation of another’s epistemic state is the result of applying general cognitive strategies and abilities within a specific domain.

On the Executive Control approach, the mind is a domain-general information processor able to utilize a wide variety of cognitive resources across a number of domains in solving problems. Executive Control models of cognition and cognitive development hold that most of our upper-level cognitive abilities are subtended by the same basic sets of cognitive resources. Our abilities to pretend, to solve problems, and to anticipate the actions of others based on the thoughts we infer them to have all stem from basic general cognitive abilities. We use the same sets of cognitive resources to solve problems in mathematics, in the social arena, and in learning our own phone number. Understanding others’ behavior in a social setting is one particular problem that humans must face, and to solve it we simply apply these general cognitive skills within the social domain.

Executive Control models rely on a traditional psychological division of labor in the mind that separates memory into long-term memory (LTM) and short-term or working memory (STM). We also have certain cognitive abilities, such as the development and use of problem-solving strategies and the ability to metarepresent. In addition to the strategies one uses to solve problems, one must be able to generate a plan or method of solution that one can implement. As such, the mind is generally able to organize and reorganize activities as a person solves a problem. “Executive function is defined as the ability to maintain appropriate behaviors such as planning, impulse control, inhibition of prepotent but irrelevant responses, set maintenance, organized search, and flexibility of thought and action” (Ozonoff, et al., 1991, p. 1083). For example, since Alice (a teacher) knows that she wants to be home by 3:00 this afternoon, she realizes that she must finish the writing she has scheduled for today. She must also meet with students. If she realizes that student meetings tap her energy, leaving her too drained to write, she must plan to write before her meetings if she wants to accomplish her goals.

According to the Executive Control model, in certain problem-solving situations we are able to monitor our strategies for effectiveness and economy and make changes with these goals in mind. In the above case, Alice might simply schedule meetings on days that she does not intend to write, so that she might write more effectively on the other days. We can also monitor our performance in reaching certain goals. If it turns out that the division-of-academic-labor plan is not working, Alice may alter that plan. She might even inhibit her tendency to allow other aspects of her job to take time away from writing, and if she stumbles onto a procedure that works well in getting her “primed” to write, she might adopt it. There are many tests used to evaluate our executive control abilities, but the problem confronting experimentalists is that it is often hard to develop a task that reliably taps one set of skills or abilities. However, there are some direct tests, one of the more famous of which is the Tower of Hanoi puzzle, which researchers rely on to test executive abilities.

In the Tower of Hanoi test, participants follow certain rules in order to move a stack of discs from one peg to another. Imagine that you are presented with three poles, the rightmost of which holds three discs of differing sizes: the largest disc on the bottom, the next smallest on top of it, and the smallest on top of that. The goal is to move this configuration of discs to the leftmost pole. You are told that you can move only one disc at a time, that you cannot place a larger disc onto a smaller one, and that you must accomplish the transfer in the fewest moves possible. As you might imagine, initial solutions usually involve mistakes and many more moves than necessary. Persons with poor executive control (children, patients with certain frontal lobe problems, persons with autism, and so forth) typically perform poorly on the Tower of Hanoi task. The reason for these failures is clear, according to the Executive Control theorist.

Performing well on the tower task requires the ability to plan a solution and to remember all the rules that constrain one’s choices. The task also measures the inhibition of prepotent responses, the first of which is simply to start moving discs onto the leftmost pole; unfortunately, that is not necessarily the wisest first move. If persons with autism typically do more poorly on this task, this suggests that they have poor executive control abilities. Some early research showed persons with autism doing poorly on executive control tasks (Ozonoff, Pennington, & Rogers, 1991), but more recent research has begun to weaken this conclusion (Ozonoff & Strayer, 2001).
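The planning demands of the task come from its recursive structure, which the following sketch makes explicit (this is an illustration of the puzzle itself, not anything drawn from the psychological literature); the minimal solution for n discs takes 2^n - 1 moves:

```python
def hanoi(n, source, target, spare, moves):
    """Move n discs from source to target using spare, obeying the rules:
    one disc at a time, never a larger disc on a smaller one."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # clear the n-1 smaller discs out of the way
    moves.append((source, target))              # move the largest remaining disc
    hanoi(n - 1, spare, target, source, moves)  # restack the smaller discs on top of it

moves = []
hanoi(3, "rightmost", "leftmost", "middle", moves)
print(len(moves))  # 7 moves: the minimum for three discs (2**3 - 1)
```

Notably, whether the intuitive first move (shifting the top disc straight onto the goal pole) happens to be correct depends on the number of discs: it is correct for three discs but not for four, which is part of why impulsive, unplanned responses tend to fail on this task.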

Other tests of Executive Control function include a variety of card-sorting tasks that require the participant to sort cards based on color, shape, category, and so on. Participants are not told the rule for sorting that will be used during the test; they must figure it out from the experimenter’s affirmation or denial of each response. For example, a set of cards may show animals and artifacts that are colored either red or blue. If the experimenter’s rule is based on color, the participant, provided there are no conditions preventing the learning of the rule, will figure out that the proper rule is “like-colored cards with like-colored cards.” However, at a certain point during the test, after the participant has shown that she is using the proper rule, the rule changes, now requiring a sort according to object type (artifact or natural object). To succeed, the participant must become aware of this rule change and alter her responses accordingly. The test thus measures strategy, perseveration, the inhibition of prepotent responses, and flexibility of action. As with the Tower of Hanoi puzzle, persons with poor overall executive control do poorly on such tasks. While the abilities tested in the Tower of Hanoi and card-sorting tasks are certainly necessary for the development of our understanding of other minds, they do not represent the full complement of skills required for awareness of the thoughts of others; still other abilities are necessary.

On the Executive Control theory, social knowledge comes from our ability to pretend, which allows us to metarepresent. Pretence, for many Executive Control theorists, is critically important to the development of metarepresentation (Jarrold et al., 1993). The skills involved in pretence are exactly the skills required when we begin to think about other minds. When we engage in pretence, we are able to divorce the representation of an object from the object itself: the representation becomes decoupled. This decoupling gives children the crucial realization that the representation of an object is different from the object itself. Upon realizing that the mind represents, and can have representations about the world that are not tied directly to the world (e.g., pretending the hall runner is a parking lot for toy cars), children are then able to metarepresent a variety of epistemic states.

In order to represent to themselves the belief state of another, children must understand that they themselves hold representations of the world, and that others stand in the same types of relations to the world with their thoughts. Children can then create a metarepresentation of a person who has some sort of perceptual contact with the world and, based on that metarepresentation, predict what that person would do in a given situation. For instance, if Sam knows that Alice saw him hide his candy in the box under his bed, then he can suspect that she might go to the hiding spot if she wants some chocolate. Such metarepresentational abilities also allow us to recognize the so-called “false-belief” states of others. Sam must recognize that Alice saw him put the chocolate in the box under his bed, know that he changed the hiding spot unbeknownst to her, and realize that she would not know the hiding spot had changed, since she never saw him move the chocolate. She would have a false belief, based on a particular epistemic relation to the world, that Sam realizes to be inaccurate. Understanding that someone has a false belief also requires that one have cognitive control over the contents of one’s own mind, so that one does not confuse one’s own beliefs about the world with what one takes others to believe. Only after these ancillary abilities are developed can the child succeed in recognizing the false beliefs of others. Note that these complex chains of thought require a large working memory span, one that tracks not only Sam’s wants (to keep the chocolate for himself) but also the desires and beliefs of another (Alice wants the chocolate and believes it is where Sam first hid it).

This view of cognition, development and metarepresentation yields a markedly different explanation of autism than the ToM approach. Instead of taking the root problem of autism to be the failure of some mechanism/module dedicated to processing certain social stimuli, the metacognitive approach holds that autism results from an inadequate working memory, the capacity that underwrites metarepresentation (Keenan, 2000). The autistic disorder is the result of failures of the Executive Control mechanisms responsible for inhibiting certain responses, of problems in working memory and recall, and of inflexible and perseverative problem-solving strategies (Ozonoff, et al., 1991). On this view, the failure of persons with autism on typical false-belief tasks results from an inability to differentiate their own views from another’s during recall (Hughes, 2002). They might also adopt the improper strategy of relying on their own personal beliefs, either by confusing which set of beliefs belongs with whom or simply forgetting which beliefs are theirs, when answering questions about others’ beliefs. The problem facing persons with autism, and causing their suite of behavioral problems, is thus a general inability to accurately store and recall information rather than a specific focal deficit in understanding mental states.

4. Autism and Simulation Theory

Simulation Theory (ST) is usually offered in contrast to the other approaches and is supported more by philosophers than by psychologists. While ST traditionally received less critical notice than competing approaches, recently a variety of researchers have ardently and eloquently defended it (among them Alvin Goldman, Robert Gordon, Gregory Currie, Paul Harris and Ian Ravenscroft). ST may be better placed to explain socio-cognitive abilities, since it is not laden with the theoretical commitments of ToM and utilizes some of the strengths of the Executive Control theory.

Simulation Theory holds that one’s knowledge of other minds rests on some sort of capacity to imagine or simulate the beliefs, desires and intentions of another and to predict what the other would do on the basis of the simulated propositional attitudes. For Currie and Ravenscroft (2002, p. 52) each person is able to imaginatively project themselves into the place of another person and “generate within ourselves states of imagining that have as their counterparts the beliefs and desires of someone whose behavior we want to predict.” For Goldman (2006) mindreading begins with a basic “like-me” judgment grounded in low-level, face-based emotion recognition abilities. Using this basic “like-me” judgment, we can sense how others are feeling from their facial displays. Seeing someone display the disgust face activates in our brains the same motor pathways as are active when we ourselves experience disgust. Through the use of special mirror neurons, the brain is wired to fire the same motor pathways it observes in others.

A main point of contention between the “theory”-theorists and the simulation theorists resides in what exactly the “like-me” judgment consists in. For the former, the judgment relies on theoretical assumptions, thus vindicating a theoretical component to social cognition; for the latter, it is the result of basic processes, neural or otherwise. The “like-me” judgment is at the heart of Goldman’s (2006) claim that simulation is the basic method through which we understand others. Regardless of what the “like-me” judgment is or requires, the evidence for neonatal mimicry relies on studies that have proven difficult to replicate.

For both Currie and Ravenscroft (2002) and Goldman (2006), simulative abilities are fueled by a very basic perceptual ability to recognize emotions in others. In order to recognize how others are feeling, the infant must be able to cue into social stimuli. Once the infant can see these cues, it can begin to mimic certain features of the emotional expression. Once it begins to mimic the expression, it begins to generate the affective states involved in the mimicked display. According to Currie and Ravenscroft, once these feats are accomplished the infant can assume that if the perceived creature is in a state, and the infant knows what that state feels like, then whatever the infant feels is also felt by the other. The infant makes a very basic “like-me” judgment and, from that judgment, an understanding of others begins. As children begin to track eye-gaze and use proto-declarative pointing, they develop more sophisticated ways of understanding that aid them in understanding and predicting the behavior of others.

There is an important difference in focus between Goldman’s and Currie and Ravenscroft’s versions of ST. For Goldman, prediction of behavior does not require feeding propositional attitudes or mental states into one’s own cognitive system. In understanding another’s mental states, one mirrors those behaviors or facial expressions, and in so doing comes to an unmediated understanding of how the other feels. For ST theorists like Currie and Ravenscroft, one places the pretend mental states into imagination and then allows the cognitive system to run “offline” and generate predictions. This difference is important for theorists like Goldman who base simulation on certain neural functions like mirroring.

Our ability to predict others’ behavior requires an act of imagination to run the simulation. Imagination provides the mental arena in which we can simulate the role that an entertained person’s beliefs would play in certain inferential practices. If one imagines that another is hungry, then one might believe that the other will go get lunch; one predicts this because when one is hungry oneself, one goes to get lunch. One plugs in the supposed beliefs and desires and then runs a simulation of what these states would cause the other to do in that situation. Goldman (2006) allows that something like the above process occurs when we attempt to understand others’ mental states, but he thinks that this is an upper-level cognitive process and should be seen as importantly different from the lower-level “like-me” judgment. These higher-level processes require the lower-level mirroring abilities.

In order to properly predict another’s behavior based on a simulation of the other’s thoughts or behavior, certain assumptions must be made. When one simply thinks “What would I do in this situation?”, then for the proper inferential chain to go through, one must assume that oneself and the target are roughly equivalent in a number of important respects. If one lacks basic assumptions about others, or for some other reason believes that the target differs in important respects, one must augment the simulation with this information in order to predict the other’s behavior accurately. One must disregard or replace certain basic assumptions that one would entertain in a normal case. Thus, the type of simulation one must perform becomes more complex.

In a typical case, one would predict that a friend who one knows is hungry will likely go get lunch if the opportunity presents itself. One can make this judgment based on the fact that one would do the same thing in that situation. One plugs in the relevant information and runs a simulation. However, if one knows that the friend is on a diet, one has to take that into account when simulating the friend’s behavior. One cannot simply run the simulation using one’s own particular beliefs, as one is not on a diet. Details of this sort are crucial in understanding and predicting behavior.

On Currie and Ravenscroft’s version of ST, autism is the result of an inability to properly use imagination in the problem-solving process, specifically the process of placing oneself, imaginatively, in the place of another. However, the problem facing persons with autism is not a complete inability to place themselves imaginatively in the situation of another. Rather, it is a difficulty in developing the skills necessary to practice this imaginative projection.

Placing oneself in another’s position, as detailed above, requires allowing certain belief or desire states that one does not have to become active. One must set aside one’s own “mental economy” and allow the entertained propositional states to guide one’s beliefs about what that person might do. As with the earlier example of eating when hungry: since one is not on a diet oneself, one must set aside one’s own responses and think “as if” one were, and thus predict that the other would choose not to eat in the face of hunger. Part of the difficulty persons with autism face is that they are simply unable to make the proper adjustments to their own mental economy to allow the imagined belief states to play the proper role in simulating another’s beliefs. Persons with autism simply find it too difficult to simulate another person’s belief or desire states. Currie and Ravenscroft claim that persons with autism cannot simulate others because they were never able to develop the abilities that allow for complex simulations to occur.

The reason persons with autism lack the development and use of ToM abilities is that they lack the “quasi-perceptual capacity for emotion recognition” (Currie and Ravenscroft, 2002, p. 159). Currie and Ravenscroft take the ability to recognize emotions to be something that is innate or that surfaces early in development. Since persons with autism do not pick up on the basic emotional cues, they lack one of the primary inputs that allow simulation to occur. According to the authors, a young child perceives another’s emotional state, mimics those facial or bodily expressions and, based on how the mimicked facial expression feels to perform, thereby knows what it feels like to be in that state. Since a person with autism does not even cue into these basic emotional states, he is never in a position to make the proper “like-me” judgment and never begins the basic mimicry that sets the whole simulative process into motion. The effects of this simple inability to recognize and simulate others’ emotional states are far-reaching.

Thus, autism, for Currie and Ravenscroft (2002), is an imaginative disorder. There are Executive Control problems like those the Executive Control models describe, but these problems come after, and as a result of, the inability to pick up on the basic perceptual content that cues us in to the mental states of others.

For a simulation theorist like Goldman (2006), the root of the autistic disorder is to be found in basic mirror-neuron dysfunction. Goldman bases his view on studies showing that persons with autism are less apt at imitation than typical persons. He cites further evidence that seems to indicate that the mirror neurons that allow simulation to occur are not functioning (Goldman, 2006, p. 206). The evidence for mirror-neuron dysfunction is tentative, and Goldman notes this. But ST theorists find that recent research into mirror-neuron function, and the role these neurons play in a host of social behaviors such as mimicry and thinking about others’ thoughts and actions, is an important sign that the theory is better supported than the rival “theory”-theory approach.

5. Conclusion

Autism remains an intriguing disorder that is only partially understood. No theory can claim to be the most widely accepted, and each has its own difficulties. “Theory”-theory needs to find ways to deal with much of the new research on where and how certain tasks are performed in the brain. Some of this research, as Goldman (2006) notes, seems to violate the modularity basis that “theory”-theory requires. Further, “theory”-theorists like Baron-Cohen have retreated from their theoretical commitments and offered alternative views of the autistic disorder (Baron-Cohen, 2002). Simulation theory and Executive Control theory often rely on the claim that executive control abilities are dysfunctional in persons with autism, and some recent research calls this into question (Ozonoff and Strayer, 2001; Hughes, 2002).

Some recent research has tried to blend together the theoretical tenets of all of these approaches (Cundall, 2006; Keenan, 2000), forming hybrid versions of the theories, and a détente between “theory”-theory and simulation theory can often be found. Researchers like Goldman think theoretical reasoning about others’ mental states is likely, but not the basic form of socio-cognitive thought. “Theory”-theorists often note that something like simulation is used, but only as a later developmental ability in social cognition. Other researchers (Rittscher et al., 2003) avoid some of the more theoretical disputes and have simply begun to investigate how socio-cognitive information is processed in the brain. Autism still presents any researcher interested in explaining socio-cognitive development with an interesting challenge, and any theory that purports to explain socio-cognitive structure and development will need to offer an explanation of the disorder.

6. References and Further Reading

  • Barkow, J., Cosmides, L., Tooby, J. (1992). The Adapted Mind. New York. Oxford University Press.
  • Baron-Cohen, S., (1995). Mindblindness. Cambridge, Mass: The MIT Press.
  • Baron-Cohen, S., (2003). The Essential Difference. New York: Basic Books.
  • Bechtel, W., and Richardson, R. (1992). Discovering Complexity. Princeton, NJ. Princeton University Press.
  • Bickle, J., (2003). Philosophy and Neurosciences: A Ruthlessly Reductive Account. Dordrecht, The Netherlands: Kluwer Academic Publishers.
  • Blake, R., Turner, L., Smoski, M, Pozdol, S., and Stone, W. (2003). Visual Recognition of Biological Motion is Impaired in children with Autism. Psychological Science, Vol. 14, 151-158.
  • Bloom, P., and German, T. (2000). Two reasons to abandon the false-belief task as a test of theory of mind. Cognition, 77: B25-B31.
  • Botterill, G., and Carruthers, P. (1999). Philosophy of Psychology. Cambridge: Cambridge University Press.
  • Carruthers, P., (2003). Review of Currie and Ravenscroft’s Recreative Minds. Retrieved October 25, 2004.
  • Carruthers, P., and Smith, P. (1996). Theories of Theories of Mind. Cambridge: Cambridge University Press.
  • Castelloe, P., and Dawson, G. (1993). Subclassification of Children with Autism and Pervasive Developmental Disorders: A Questionnaire Based on the Wing and Gould Subgrouping Scheme. Journal of Autism and Developmental Disorders. Vol. 33: 229-241.
  • Ceponiene, R., Lepisto, T., Shestakova, A., Vanhala, R., Alku, P., Naatanen, R. and Yaguchi, K. (2003). Speech-sound-selective auditory impairment in children with autism: They can perceive but do not attend. Proceedings of the National Academy of Sciences. Vol. 100: 5567-5572.
  • Cundall, M., (2006). Autism’s Role in Understanding Social Cognition. Journal of Humanities & Social Sciences, Vol. 1, 1.
  • Currie, G., and Ravenscroft, I., (2002). Recreative Minds. Oxford: Oxford University Press.
  • Currie, G., and Sterelny, K. (2000). How to Think about the Modularity of Mindreading. The Philosophical Quarterly, Vol. 50: 145-162.
  • Dawson, G., Klinger, L., Panagiotides, H., Lewy, A., and Castelloe, P. (1995). Subgroups of Autistic Children Based on Social Behavior Display Distinct Patterns of Brain Activity. Journal of Abnormal Child Psychology. Vol. 23: 569-583.
  • Fodor, J., (1980). Special Sciences, or the Disunity of Science as a Working Hypothesis. In Readings in the Philosophy of Psychology Vol. I. Ned Block Ed. Cambridge, MA. Harvard Publishers.
  • Fodor, J., (2000). The Mind Doesn’t Work That Way. Cambridge, Mass: The MIT Press.
  • Gerrans, P., (2002). The Theory of Mind Module in Evolutionary Psychology. Biology and Philosophy. Vol. 17: 305-321.
  • Goldman, A., (2006). Simulating Minds. New York. Oxford University Press.
  • Gopnik, A., and Meltzoff, A., (1998). Words, Thoughts and Theories. Cambridge, Mass: The MIT Press.
  • Harris, P. (1989). Children and Emotion. Malden, MA. Blackwell Publishers.
  • Hughes, C. (2002). Executive Functions and Development: Emerging Themes. Infant and Child Development. Vol 11: 201-209.
  • Jarrold, C., Boucher, J., and Smith, P. (1993). Symbolic Play in Autism: a review. Journal of Autism and Developmental Disorders, 23: 281-387.
  • Jarrold, C., Boucher, J., and Smith, P. (1994). Executive Function Deficits and the Pretend Play of Children with Autism. Journal of Child Psychology and Psychiatry. Vol. 35: 1473-1482.
  • Karmiloff-Smith, Annette, (1992). Beyond Modularity. Cambridge, MA: The MIT Press.
  • Keenan, T., (2000). Mind, Memory, and Metacognition. In Minds in the Making: Essays in Honor of David R. Olson. Astington Eds. Malden MA, Blackwell Publishers.
  • Leekam, S., and Prior, M., (1994). Can Autistic Children Distinguish Lies from Jokes? A Second Look at Second Order Belief Attribution. Journal of Child Psychology and Psychiatry. Vol. 35: 901-915.
  • Leslie, A. M. (1992). Autism and the ‘theory of mind’ module. Current Directions in Psychological Science, 1: 18-21.
  • Malle, B., Moses, L., and Baldwin, D. (2001). Intentions and Intentionality: Foundations of Social Cognition. Cambridge, MA: The MIT Press.
  • Olson, D., (1993). The Development of mental representations: the origins of mental life. Canadian Psychology, 30, 293-306.
  • Ozonoff, S., Pennington, B., and Rogers, S. (1991). Executive Function Deficits in High Functioning Autistic Individuals: Relationship to Theory of Mind. Journal of Child Psychology and Psychiatry, 32: 1081-1105.
  • Ozonoff, S., and Strayer, D. (2001). Further Evidence of Intact Working Memory in Autism. Journal of Autism and Developmental Disorders, Vol. 31: 257-263.
  • Pierce, K., Muller, R., Ambrose, J., Allen, G., and Courchesne, E. (2001). Face processing occurs outside the fusiform ‘face area’ in autists: evidence from functional MRI. Brain, 124: 2059-73.
  • Puce, A., and Perrett, D., (2003). Electrophysiology and brain imaging of biological motion. In Decoding, imitating and influencing the actions of others: the mechanisms of social interaction. Philosophical Transactions of the Royal Society, 358: 435-445.
  • Provine, R., (2000). Laughter: A Scientific Investigation. New York, Penguin Publishers.
  • Rittscher, J., Blake, A., Hoogs, A., Stein, G., (2003). Mathematical modeling of animate and intentional motion. In Decoding, imitating and influencing the actions of others: the mechanisms of social interaction. Philosophical Transactions of the Royal Society, 358: 475-490.
  • Ruffman, T., (2000). Nonverbal Theory of Mind. In Minds in the Making: Essays in Honor of David R. Olson. Astington Eds. Malden MA, Blackwell Publishers.
  • Schultz, R. T., Gauthier, I., Klin, A., Fulbright, R., Anderson, A.W., Volkmar, F., Skudlarski, P., Lacadie, C., Cohen, D. J., and Gore, J. C. (2000) Abnormal ventral temporal cortical activity among individuals with autism and Asperger syndrome during face recognition. Archives of General Psychiatry, 37: 331-340.
  • Sterelny, K., (2003). Thought in a Hostile World: The Evolution of Human Cognition. Malden, MA: Blackwell Publishers.
  • Volkmar, F., Klin, A., Schultz, R., Chawarska, K., and Jones, W. (2003). The Social Brain in Autism. In Brune, Ribbert and Schiefenhovel (Eds.), The Social Brain: Evolution and Pathology. Hoboken, NJ: Wiley and Sons Ltd.
  • Wellman, H. M. (1991). The Child’s Theory of Mind. Cambridge, MA: The MIT Press.

Author Information

Michael Cundall
Arkansas State University
U. S. A.

Internalism and Externalism in Epistemology

The internalism-externalism (I-E) debate lies near the center of contemporary discussion about epistemology. The basic idea of internalism is that justification is solely determined by factors that are internal to a person. Externalists deny this, asserting that justification depends on additional factors that are external to a person. A significant aspect of the I-E debate involves setting out exactly what counts as internal to a person.

The rise of the I-E debate coincides with the rebirth of epistemology after Edmund Gettier’s famous 1963 paper, “Is Justified True Belief Knowledge?” In that paper, Gettier presented several cases to show that knowledge is not identical to justified true belief. Cases of this type are referred to as “Gettier cases,” and they illustrate “the Gettier problem.” Standard Gettier cases show that one can have internally adequate justification without knowledge. The introduction of the Gettier problem to epistemology required rethinking the connection between true belief and knowledge, and the subsequent discussion generated what became the I-E debate over the nature of justification in an account of knowledge. Internalists maintained that knowledge requires justification and that the nature of this justification is completely determined by a subject’s internal states or reasons. Externalists denied at least one of these commitments: either knowledge does not require justification or the nature of justification is not completely determined by internal factors alone. On the latter view, externalists maintained that the facts that determine a belief’s justification include external facts such as whether the belief is caused by the state of affairs that makes the belief true, whether the belief is counterfactually dependent on the state of affairs that makes it true, whether the belief is produced by a reliable belief-producing process, or whether the belief is objectively likely to be true. The I-E discussion engages a wide range of epistemological issues involving the nature of rationality, the ethics of belief, and skepticism.

Table of Contents

  1. The Logic of the I-E Debate
    1. Knowledge and Justification
    2. Justification and Well-foundedness
    3. The Meaning of ‘Internal’
    4. Taking Stock
  2. Reasons for Internalism
    1. The Socratic/Cartesian project
    2. Deontology (The Ethics of Belief)
    3. Natural Judgment about Cases
      1. BonJour’s Norman case
      2. The New Evil Demon Problem
  3. Reasons for Externalism
    1. The Truth Connection
    2. Grandma, Timmy and Lassie
    3. The Scandal of Skepticism
  4. The Significance of the I-E Debate
    1. Disagreement over the Significance of the Thermometer Model
    2. Disagreement over the Guiding Conception of Justification
    3. Disagreement over Naturalism in Epistemology
  5. Conclusion
  6. References and Further Reading

1. The Logic of the I-E Debate

The simple conception of the I-E debate as a dispute over whether the facts that determine justification are all internal to a person is complicated by several factors. First, some epistemologists understand externalism as a view that knowledge does not require justification while others think it should be understood as an externalist view of justification. Second, there is an important distinction between having good reasons for one’s belief (that is, propositional justification) and basing one’s belief on the good reasons one possesses (that is, doxastic justification). This distinction matters to the nature of the internalist thesis and consequently the I-E debate itself. Third, there are two different and prominent ways of understanding what is internal to a person. This bears on the nature of the internalist thesis and externalist arguments against internalism. This section explores these complications.

a. Knowledge and Justification

The traditional analysis of knowledge is that knowledge is justified true belief. As Socrates avers in the Meno, knowledge is more than true belief. Superstitious beliefs that just turn out to be true are not instances of knowledge. In the Theaetetus Socrates proposes that knowledge is true belief tied down by an account. Socrates’ proposal is the beginning of what epistemologists refer to as the justified true belief (JTB) account of knowledge. A true belief tied down by an account can be understood as a true belief for which one has adequate reasons. On the JTB account having adequate reasons turns a true belief into knowledge.

The JTB account was demolished by Gettier’s famous 1963 article. As explained in the introduction Gettier cases demonstrate that knowledge is more than justified true belief. Suppose that Smith possesses a good deal of evidence for the belief that someone in his office owns a Ford. Smith’s evidence includes such things as that Smith sees Jones drive a Ford to work every day and that Jones talks about the joys of owning a Ford. It turns out, however, that (unbeknownst to Smith) Jones is deceiving his coworkers into believing he owns a Ford. At the same time, though, someone else in Smith’s office, Brown, does own a Ford. So, Smith’s belief that someone in his office owns a Ford is both justified and true. Yet it seems to most people that Smith’s belief is not an instance of knowledge.

The Gettier problem led epistemologists to rethink the connection between knowledge and true belief. An externalist position developed that focused on causal relations or, more generally, dependency relations between one’s belief and the facts as providing the key to turning true belief into knowledge (see Armstrong 1973). It is unclear from this move alone whether externalism should be understood as the view that knowledge does not require justification or as the view that justification should be understood externally. Some externalists advocate the view that knowledge doesn’t require justification but that nonetheless justification is epistemically important (see Sosa 1991b). Other externalists hold that knowledge does require justification but that the nature of the justification is amenable to an externalist analysis (see Bergmann 2006).

A significant aspect of the issue of how one should understand externalism is whether the term ‘justification’ is a term of logic or merely a place-holder for a necessary condition for knowledge. If ‘justification’ is a term of logic then it invokes notions of consistency, inconsistency, implication, and coherence. On this conception of justification an externalist analysis of the nature of justification is implausible. However, if ‘justification’ is merely a place-holder for a condition in an account of knowledge then the nature of justification might be amenable to an externalist analysis. Externalists have defended both views. Some argue that ‘justification’ is a term of logic and so their position is best understood as the view that justification is not required for knowledge. However, other externalists have argued that ‘justification’ is not a term of logic but a term that occurs in connection with knowledge talk and so is amenable to an externalist account. Many internalists, by contrast, claim that justification is necessary for knowledge and that the notion of justification may be (partially) explicated by the concepts of consistency, implication, and coherence.

b. Justification and Well-foundedness

There is a significant difference between merely having good reasons for one’s belief that the Bears will win the Super Bowl and basing one’s belief on those reasons. Mike Ditka may have excellent reasons for believing the Bears will win; they have a superior defense and an excellent running back. Nevertheless Ditka may believe that the Bears will win based on wishful thinking. In this case it’s natural to make a distinction in one’s epistemic evaluation of Ditka’s belief. Ditka’s belief is justified because he has good reasons for it. But Ditka’s believing the claim as he does is not justified because he bases his belief on wishful thinking and not the good reasons he has. This marks the distinction between propositional and doxastic justification. Other epistemologists refer to the same distinction as that between justification and well-foundedness (see Conee & Feldman 2004).

This leads to a second area of complication in the I-E debate. Internalists claim that every condition that determines a belief’s justification is internal, but causal relations are typically not internal. Since basing one’s belief on reasons is a causal relation between one’s belief and one’s reasons, internalists should not claim that every factor that determines doxastic justification is internal (see 1c below for further discussion of this). Accordingly, internalism should be understood as a view about propositional justification. Moreover, given that one cannot know unless one bases one’s belief on good reasons this implies that internalists will understand the justification condition in an account of knowledge as composed of two parts: propositional justification and some causal condition (typically referred to as “the basing relation”). This considerably complicates the I-E debate because there’s not a straightforward disagreement between internalist and externalist views of doxastic justification, since externalists typically avoid dissecting the justification condition. Common forms of externalism build in a causal requirement to justification, for example, one’s belief that p is produced by a reliable method. Nevertheless it is important to get the nature of the internalist thesis straight and only then determine the nature of the externalist objections.

c. The Meaning of ‘Internal’

The distinction between propositional and doxastic justification allows us to bring into focus different notions of internal states. Internalism is best understood as the thesis that propositional justification, not doxastic justification, is completely determined by one’s internal states. But what are one’s internal states? One’s internal states could be one’s bodily states, one’s brain states, one’s mental states (if these are different than brain states), or one’s reflectively accessible states. The two most common ways of understanding internalism have been to take internal states as either reflectively accessible states or mental states. The former view is known as accessibilism and it has been championed by Roderick Chisholm and Laurence BonJour (see also Matthias Steup (1999)). The latter view is known as mentalism and it has been defended by Richard Feldman and Earl Conee.

On an accessibilist view every factor that determines whether one’s belief is propositionally justified is reflectively accessible. Since the causal origins of one’s beliefs are not, in general, reflectively accessible they do not determine whether one’s belief is propositionally justified. But whether or not one’s belief that p and one’s belief that q are contradictory is reflectively accessible. Since contradictory beliefs cannot both be justified one can ascertain by reflection alone whether pairs of beliefs lack this devastating epistemic property.

One should note that the above claim that the causal origins of one’s beliefs are not, in general, reflectively accessible is an anti-Cartesian claim. Arguably, Descartes thought that one could always discover the causal origins of one’s beliefs. On the Cartesian view causal relations that hold between experiences and beliefs, and between beliefs and other beliefs, are reflectively accessible. Many scholars, however, believe this view is false. Stemming from Freud’s work, many now think that one does not have the kind of access Descartes thought one had to the causal origins of one’s beliefs. Given this, an accessibilist view about doxastic justification—that is, propositional justification + the causal origins of one’s belief—is not feasible. Accessibilists should only require that every factor that determines whether one’s belief is propositionally justified is reflectively accessible.

There are varieties of accessibilist views depending on how one unpacks what states count as reflectively accessible. Are these states that one is able to reflectively access now, or states that one may access given some time? If accessibilism is not restricted to current mental states then it needs to explain where the cutoff is between states that count towards determining justificatory status and those that don’t count. Richard Feldman has a helpful article on this topic in which he defends the strong thesis that it is only one’s current mental states that determine justificatory status (Feldman 2004b).

Another dimension of accessibilism concerns whether the justificatory status of one’s belief needs to be accessible as well. If it does, then one’s inability to determine whether or not one’s belief that p is justified demonstrates that p is not justified for one. BonJour (1985, chapter 2), for instance, is commonly cited as defending this strong kind of accessibilism. This strong version of accessibilism is often taken to be the purest form of internalism, since internalism is not uncommonly associated with a commitment to higher-order principles such as: one knows that p only if one knows that one knows that p. Robert Nozick (1981, p. 281) takes internalism to be the thesis that knowledge implies knowledge of all the preconditions of knowing.

The other prominent view of internal states is that they are mental states. This view is known as mentalism (see Conee & Feldman 2004b). Mentalism, like accessibilism, is a view about propositional justification, not doxastic justification. One’s mental states completely determine the justificatory status of one’s beliefs. Mentalism is connected to accessibilism since according to the Cartesian tradition one can determine which mental states one is in by reflection alone. To the extent that mentalism is distinct from accessibilism it allows that some non-reflectively accessible mental states can determine whether one’s belief is propositionally justified.

A defender of a mentalist view needs to explain which mental states determine justificatory status. Do all mental states—hopes, fears, longings—determine propositional justification or just some mental states, such as beliefs and experiences? Moreover, a defender of mentalism needs to clarify whether both current and non-current mental states can determine justificatory status. A non-current mental state is a mental state that you do not currently host. For instance, you believed a moment ago that 186 is greater than 86 but currently you are not thinking about this.

One of the advantages of mentalism is that it upholds a clear internalist thesis—justification is determined by one’s mental states—without appealing to the problematic notion of access. Many understand the notion of access to be a thinly disguised epistemic term (see, for instance, Fumerton (1995), p. 64). To have access to some fact is just to know whether or not that fact obtains. This is problematic for an accessibilist because he analyzes justification in terms of access and then uses the notion of justification to partially explicate knowledge. In short, if ‘access’ is an epistemic term then any analysis of knowledge that rests upon facts about access will be circular. The mentalist escapes this problem: one’s mental states determine justification, and one does not explicate what one’s mental states are by appeal to the problematic notion of access. However, mentalism does face the objection that, since it eschews the notion of access, it is not a genuine form of internalism (see Bergmann 2006 for a further examination of this issue).

d. Taking Stock

Before we press on to other issues in the I-E debate let us take stock of what has been considered. Internalism is the view that all the factors that determine propositional justification are either reflectively accessible states (that is, accessibilism) or mental states (that is, mentalism). Internalists also hold that doxastic justification, which is propositional justification and a basing requirement, is necessary for knowledge. We can think of internalism as the view that all the factors that determine justification apart from a basing requirement are internal. Let us call these justification determining factors, minus the basing requirement, the J-factors. Externalists about justification deny that the J-factors are all internal. If, however, we view externalism merely as a negative thesis then we lose sight of its distinctly philosophical motivation. Externalists’ positive views are grounded in the intuition that the natural relations between one’s beliefs and the environment matter to a belief’s justification. If, for example, a subject’s belief that there is a tiger behind the tall grass is caused by the fact that there is a tiger there this fact seems significant to determining the justificatory status of that belief, even though this fact may not be reflectively accessible to one. At a certain level of generality, externalism is best viewed as stressing the justificatory significance of dependency relations between one’s belief and the environment.

2. Reasons for Internalism

This section examines prominent reasons for internalism. I will discuss three motivations for internalism: the appeal to the Socratic/Cartesian project; the appeal to deontology; and the appeal to natural judgment about cases. These three motivations are conspicuous in arguments for internalism. After giving each reason I shall consider externalist responses.

a. The Socratic/Cartesian project

One common strategy internalists employ is to emphasize that epistemic justification requires having good reasons for one’s beliefs. As both Socrates and Descartes stressed, it’s not rational to believe p without possessing a good reason for believing p. Suppose I believe that Telecom’s stock will drastically fall tomorrow. It’s reasonable to ask why I think that’s true. Clearly it won’t do to repeat myself, saying “I believe that’s true because it is true.” So it seems I must have a reason, distinct from my original belief, for thinking that Telecom’s stock will fall. Nor can I appeal to the causal origins of that belief or to the reliability of the specific belief-forming process; those sorts of facts are beyond my ken. Whatever I can appeal to will be something I am aware of. Moreover, I can’t merely appeal to another belief, for example, that Karen told me Telecom’s stock will fall. I need a good reason for thinking that Karen is a good indicator about these sorts of things. Apart from that supporting belief it’s not rational to believe that Telecom’s stock will fall. So rationality requires good reasons that indicate a belief is true. The appeal to the Socratic/Cartesian project is a way to motivate the claim that it is a basic fact that rationality requires good reasons.

This requirement conflicts with externalism since externalism allows for the possibility that one’s belief is justified even though one has no reasons for that belief. To draw out this commitment let us expand on the above example. Suppose that my belief that Telecom’s stock will fall is based on my beliefs that Karen told me so and that Karen is a reliable indicator of these things. But not every belief of mine is supported by other beliefs I have. Beliefs of this kind are called basic beliefs: beliefs that are not supported by other beliefs. Consider your belief that there’s a cube on the table. What reason do you have for this belief? It might be difficult to say. Yet internalism requires that you have some reason (typically, the content of one’s experience) that supports this belief if that belief is rational. Externalists think that that is too tall an order. In fact one of the early motivations for externalism was to handle the justification of basic beliefs (see Armstrong 1973). In general, externalists think that basic beliefs can be justified merely by the belief meeting some external condition. One complication with this, though, is that some externalists think a basic belief requires reasons but that these reasons should be understood in an externalist fashion (see Alston (1988)). I shall ignore this complication because on Alston’s analysis justification still depends on factors outside one’s ken. So, to the extent that one is moved by the internalist intuition, one will think that externalism fails: it allows for justification without good reasons. One should also note that this appeal to the Socratic/Cartesian project supports accessibilism.

A related argument used to support internalism appeals to the inadequacy of externalism to answer philosophical curiosity (see Fumerton 2006). If we take up the Socratic project then we are interested in determining whether our most basic beliefs about reality are likely to be true. It seems entirely unsatisfactory to note that if one’s beliefs meet some specified external condition then the beliefs are justified; for the natural question is whether one’s beliefs have met that external condition. This suggests that, to the extent that we are interested in whether our beliefs are epistemically justified, internalism is the only game in town.

Externalist Response

One early externalist response was to note that internalists focus on the conditions they use to determine justificatory status, but that these are conceptually distinct from the conditions that actually do determine justificatory status. An adequate definition of albinos may be entirely useless for finding actual albinos (see Armstrong 1973, p. 191). In a similar manner it’s at least conceptually possible that one’s analysis of the nature of justification is not a useful tool for determining whether or not one’s beliefs are justified. What this shows is that internalists need an additional argument from the premise that we can appeal only to internal factors to determine justification to the conclusion that only internal factors determine justification.

Another early response to this internalist tactic is to argue that internalism fails to meet its own demands. Alvin Goldman (1980) presents an argument of this kind, claiming that there is no definite and acceptable set of internalistic conditions that determine what cognitive attitude a subject should have given her evidence. Goldman argues for this conclusion by supposing that there is some such set of internalistic conditions and then contending that there is no acceptable way to accommodate this set of conditions within the constraints laid down by internalists. For instance, Goldman reasons that one internalistic constraint is that the correctness of these conditions be reflectively accessible. But if the correctness of this procedure depends on its ability to get one to the truth more often than not, then since that property isn’t reflectively accessible, internalists shouldn’t understand the correctness of the procedure to consist in its being a good guide to the truth. Goldman then argues that other accounts of the correctness of this procedure likewise fail. So it is not possible for internalism to meet its own severe restrictions. For a similar argument see Richard Foley (1993).

b. Deontology (The Ethics of Belief)

A prominent source of support for internalism is the allegedly deontological character of justification (see Plantinga (1993), chapter 1; this section relies heavily on Plantinga’s discussion). The language of ‘justified’ and ‘unjustified’ invokes concepts like rightness and wrongness, blamelessness and blameworthiness, and dutifulness and neglect. Facts about justification are set in the larger context of one’s duties, obligations, and requirements. Descartes, for instance, explains that false belief arises from the improper use of one’s own will. There is a two-fold implication to this. First, if one governs one’s believing as one ought then one is justified in one’s believings. Second, if one maintains proper doxastic attitudes one will have (by and large) true beliefs. Locke, like Descartes, connects justification with duty fulfillment. Locke maintains that though one may miss truth, if one governs one’s doxastic attitudes in accord with duty then one will not miss the reward of truth (Essay, IV, xvii, 24).

The argument from the deontological character of justification to internalism proceeds as follows. Justification is a matter of fulfilling one’s intellectual duties but whether or not one has fulfilled one’s intellectual duties is entirely an internal matter. One fulfills one’s intellectual duties when one has properly taken into account the evidence one possesses. If Smith scrupulously analyzes all the relevant information about Telecom’s stock prices and draws the conclusion that Telecom’s prices will soar then Smith’s belief is justified. If it later comes to light that the information was misleading this doesn’t impugn our judgment about Smith’s belief at that time. Smith was intellectually virtuous in his believing and drew the appropriate conclusion given the evidence he possessed. In contrast if Jones is an epistemically reckless stock broker who does not study the market before he makes his judgments, but happens to hit on the true belief that Telecom’s stock prices will fall then we do not count his belief as justified since he ignored all the relevant evidence. Jones should have believed otherwise.

The cases of Smith and Jones support the claim that fulfilling one’s intellectual duty is entirely a matter of what one is able to determine by reflection alone. Both Smith and Jones are able to determine that their evidence indicates Telecom’s stock will soar. Smith appropriately believes this and Jones does not. Since externalists would require some further, non-reflectively accessible condition, externalism is wrong. One should note that this argument supports accessibilism, not mentalism.

Externalist Response

Externalists have responded to this line of argument in two ways. First, some externalists deny that facts about duties, rights, or blameworthiness are relevant to the sense of justification necessary for knowledge. Second, other externalists deny that the deontological character of justification supports accessibilism. Arguments of the first kind fall into two groups: (a) arguments that a necessary condition for rights, duties, or blameworthiness is not met with respect to belief, and (b) arguments that facts about deontology are not relevant to determining epistemic facts. The most common argument for (a) is that beliefs are outside of an individual’s control, and so it does not make sense to consider an individual blameworthy for a belief. This is the issue of doxastic voluntarism. Sosa (2003) and Plantinga (1993) present arguments for (b). The basic idea in these cases is that an individual may be deeply epistemically flawed but nonetheless perfectly blameless in his or her belief. An individual may, for instance, be “hardwired” to accept instances of affirming the consequent as valid; nonetheless, that person’s belief in A on the basis of ‘if A then B’ and ‘B’ is not justified.

Michael Bergmann (2006, chapter 4) presents an argument of the second type that the deontological character of justification does not support accessibilism. The basic idea of Bergmann’s argument is that an appeal to the deontological character of justification only supports the requirement that the person not be aware of any reasons against the belief. It does not support the stronger requirement that the person be aware of positive reasons for the belief. Bergmann then argues the weaker requirement is consistent with externalism.

c. Natural Judgment about Cases

A different strategy to support internalism is to appeal to natural judgment about cases. I shall consider two famous thought experiments designed to elicit internalist intuitions: BonJour’s clairvoyant cases, specifically the case of Norman (BonJour 1980), and the new evil demon problem (Lehrer & Cohen 1983; Cohen 1984). I shall present the two cases and then offer an externalist response. As Sosa (1991a) explains, the two cases are related in that each is the mirror image of the other. In the Norman case there is reliability without internal evidence, while in the new evil demon problem there is internal evidence without reliability.

i. BonJour’s Norman case

In his 1980 article BonJour presents four clairvoyant cases to illustrate what he takes to be the fundamental problem with externalism. Subsequent discussion has focused mainly on the case of Norman. BonJour describes the Norman case as follows:

Norman, under certain conditions that usually obtain, is a completely reliable clairvoyant with respect to certain kinds of subject matter. He possesses no evidence or reasons of any kind for or against the general possibility of such a cognitive power, or for or against the thesis that he possesses it. One day Norman comes to believe that the President is in New York City, though he has no evidence either for or against his belief. In fact the belief is true and results from his clairvoyant power, under circumstances in which it is completely reliable. (p. 21)

Intuitively it seems that Norman’s belief is not justified. Norman doesn’t have any reasons for thinking that the President is in New York City. Norman just finds himself believing that. Were Norman to reflect on his belief he would come to see that that belief is unsupported. Yet in the situation imagined Norman’s belief is the product of a reliable process. Norman is not aware of this fact. But nonetheless on some externalist analyses Norman’s belief is justified because it is produced by a reliable process.

The Norman case is used to illustrate a general problem with externalism. Externalists hold that the justification of basic beliefs requires only that the specified external condition is met (excluding the complication with Alston’s view, mentioned above). Yet where the subject lacks any internally accessible reason for thinking the belief is true it seems irrational for the subject to maintain that belief. Rationality requires good reasons.

ii. The New Evil Demon Problem

The original evil demon problem comes from Descartes. In the Meditations Descartes entertains the possibility that he is deceived by a powerful demon into believing that (for example) he has hands. Descartes concludes that he needs to rule out this possibility by providing good reasons for thinking that he is not deceived in this way and that he can take the evidence of his senses at face value. Most epistemologists think Descartes concedes too much by requiring that he rule out this possibility in order to know, on the basis of the evidence he possesses, that he has hands.

The new evil demon problem is different from Descartes’ evil demon problem. This problem does not require that one rule out the possibility of massive deception in order to have knowledge. Rather the problem is intended to illustrate the inadequacy of externalism. The new evil demon problem was originally developed against reliabilism, the view that a belief’s justification consists in the reliability of the process that produced it. The problem is that there are possible individuals with the same evidence as we possess but whose evidence is not truth indicative. For instance we can conceive of individuals that have been placed in Matrix scenarios in which their brains are stimulated to have all the same experiences we have. When we seem to see a tree, normally a tree is present. However, when these individuals in a Matrix scenario seem to see a tree, there is no tree present. Their experiences are systematically misleading. Nevertheless since they possess just the same evidence that we have, the justificatory status of their beliefs is exactly the same as ours. If our beliefs are justified then so are their beliefs, and if their beliefs are not justified then our beliefs aren’t justified. This intuition reflects the key internalist claim that two individuals that are alike mentally are alike with respect to justification. There’s no difference in justification unless there’s some relevant mental difference. Externalists are committed to denying this symmetry. Since the individuals in the Matrix world fail to meet the relevant external condition their beliefs are unjustified, but since our beliefs meet the external condition our beliefs are justified.

The Externalist Response

Both the Norman case and the new evil demon problem have led to significant modifications to externalism. At a very general level the basic externalist move is that relative to our world Norman’s belief is unjustified and an individual’s belief in the Matrix world is justified. In our world clairvoyance is not a reliable belief-forming method. A clairvoyant’s belief that, for example, today is their lucky day is not caused by the relevant fact. Furthermore, a clairvoyant’s belief is not objectively likely to be true. The externalist thinks that justification tracks these actual facts and so accordingly our judgment of Norman’s belief is that it is unjustified.

Similarly in the new evil demon problem justification tracks the actual facts. Since our perceptual beliefs meet the external condition they are justified. When we consider possible individuals with the same perceptual evidence that we have, we rightly consider their beliefs justified. Granted, their beliefs do not meet the external condition in that world; but in our world such beliefs do meet the external condition.

Alvin Goldman (1993) develops this externalist response to the Norman case. Goldman argues that Norman’s belief is not justified because relative to our list of epistemic virtues and vices clairvoyant beliefs are unjustified. Goldman argues that justification is relative to actual intellectual virtues, where the virtues are understood in a reliabilist fashion. This is a departure from Goldman’s earlier view, on which the reliability of a belief-forming process in a world determined the justificatory status of the belief. On that view Goldman is saddled with the consequence that Norman’s belief is justified and the beliefs of the people in the Matrix world are unjustified. On his (1993) view a belief’s justification is determined by the reliability of processes in our world. Goldman is thus not saddled with those counterintuitive results but can instead accommodate the internalist’s intuition without surrendering externalism. For other instances of this relativization move see Sosa (1991a) and Bergmann (2006).

3. Reasons for Externalism

The following is an examination of three prominent reasons for externalism—the argument from the truth connection, the argument from ordinary knowledge ascriptions, and the argument from the implausibility of radical skepticism. Also included are the main internalist responses.

a. The Truth Connection

A very powerful argument for externalism is that epistemic justification is essentially connected to truth. Epistemic justification differs from prudential or moral justification. One can be prudentially justified in believing that one’s close friend is a good chap even while possessing good epistemic reasons for withholding that belief; in that case one is not epistemically justified in believing one’s close friend is a good fellow. How should we account for this difference between prudential and epistemic justification? The natural response is to hold that epistemic justification implies that one’s belief is objectively likely to be true whereas prudential justification (or other non-epistemic forms of justification) does not. However, whether one’s belief is objectively likely to be true is not determined by one’s mental states or one’s reflectively accessible states. The objective likelihood of a belief given a body of evidence is a matter of the strength of correlation in the actual world between the truth of the belief and that body of evidence. If one applies some liquid to a litmus paper and the paper turns red then the objective likelihood that the liquid is acidic is very high. But the strong correlation between red litmus paper and acidity is not reflectively accessible. So, if epistemic justification implies that one’s belief is objectively likely to be true then justification is not determined entirely by one’s internal states.

Internalist Response

Internalists argue that the problem of the truth connection is a problem for everyone. Epistemic justification is essentially connected to the truth in a way that distinguishes it from, say, prudential justification. But it is exceedingly difficult to say exactly what this connection consists in. Internalists stress that the proposed externalist solution, that epistemic justification raises a belief’s objective likelihood of truth, isn’t as straightforward as it first appears. The intuition in the new evil demon problem illustrates that epistemic justification does not imply that one’s belief is objectively likely to be true. So to generate an argument against internalism from the truth connection one needs to do more than appeal to the intuition of a strong connection between justification and truth. The problem of the truth connection for internalism is an active area of research. See Lehrer & Cohen (1983) for the original discussion of this problem.

b. Grandma, Timmy and Lassie

One of the most powerful motivations for externalism is that we correctly attribute knowledge to unsophisticated persons, children, and some animals. These individuals, though, lack internalist justification. So either knowledge doesn’t require justification or justification should be understood externally. Grandma knows that she has hands even though she cannot rehearse an argument for that conclusion and cannot even think of anything else to offer in defense of the claim that she has hands. Timmy knows that it’s a sunny day and Lassie knows that there’s water in the bowl. In each case it appears that the subject is justified but lacks any internally accessible reason for the belief. Reflection on these cases, and many others like them, supports the externalists’ central contention that internalism is too strong. Persons can know without possessing internalistic justification.

The main problem with the appeal to cases like Grandma, Timmy, and Lassie is that the details of such cases are open to interpretation. Internalists argue that when the cases are properly unpacked either these are not cases of justification or there is internalist justification (see “Internalist Response” immediately below). In an attempt to strengthen the argument for externalism some externalists appeal to non-standard cases. One non-standard case is the chicken-sexer case. Chicken-sexers are individuals who possess the unique ability to reliably sort male from female chickens. As the case is described, chicken-sexers do not know how they sort the chickens; they report not being able to offer the criteria they use to sort them. Nonetheless they are very good at sorting chickens, and their beliefs that this is a male, this is a female, and so on, are justified even though they lack internalist justification.

Another non-standard case is the case of quiz-show knowledge. The case envisions a contestant on a popular quiz show, call her Sally, who gets all the answers right. When a clue is offered Sally rings in with the correct answer. She’s quite good at this. Intuitively Sally knows the answers to the clues; yet from Sally’s perspective the answers just pop into her head. Moreover, Sally may believe that she does not know the answer.

What should we say about this case? Sally is very reliable; her answers are objectively likely to be true. We can fill out the case by stipulating that her answers are caused in part by the relevant facts. She learned each answer either by direct experience with the relevant fact—she was in Tiananmen Square during the famous protests of 1989—or through a reliable informant. Yet Sally lacks the internal phenomenology usually associated with remembering an answer. The answers just seem to come out of the blue. Moreover, Sally doesn’t take herself to know the answers. Yet given her excellent track record it certainly seems right to say that Sally knows the answers. This is a problematic case for internalists because it appears that no relevant internal condition is present.

Internalist Response

The argument advanced by externalists above is a conjunction of two claims: (i) these individuals have knowledge, and (ii) no internalist justification is present. In the cases of Grandma, Timmy, and Lassie one response is to deny that these individuals have knowledge, but that strikes many as incredibly implausible and too concessive to skeptical worries. A much more plausible response is to argue that an internalist justification is present. In the case of Grandma, for instance, she has experiences and memories which attest that she has hands. Though she doesn’t cite these as reasons, they are nonetheless good reasons for her to believe that she has hands. Similar points can be made with respect to Timmy and Lassie. To the extent that our judgments that Timmy and Lassie have knowledge are resilient, we can find appropriate experiences that indicate the truth of their beliefs.

In the chicken-sexer case internalists respond either by denying that the subject has knowledge or by claiming that there are features of the chicken-sexer’s experience that indicate the sex of the chicken. The quiz-show case is more interesting. Given the description of the case it’s difficult to find a reason available to Sally that meets internalist strictures. The options for internalists seem limited. Since it’s not plausible that there’s a relevant internalist justification present, internalists are saddled with the result that Sally lacks knowledge. How plausible is this result? Richard Feldman (2005a) argues that it’s not apparent from the case that Sally even believes the answer. Sally is encouraged to answer and she goes with whatever pops into her head. Moreover, Feldman observes, the contestant seems to lack any stable belief-forming mechanism. Since knowledge entails belief, it appears that Sally lacks knowledge because she lacks belief. Furthermore, as another option, since Sally may take herself not to know the answer she possesses a reason that undermines her knowledge (see Feldman (2005a) on the role of higher-order knowledge in defeating object-knowledge). The upshot is that the case of quiz-show knowledge is indecisive against internalism: either Sally lacks the relevant belief or she possesses a reason that defeats her knowledge.

c. The Scandal of Skepticism

Another main motivation for externalism is its alleged virtue in handling skepticism, in at least some of its varieties. One powerful skeptical argument begins with the premise that we lack direct access to facts about the external world. For any experiential justification we have for believing some fact about the external world—for example, that there’s a magnolia tree—it’s possible to have that same justification even though there’s no such fact. The experience one has is caused by a state of one’s brain, and it is possible that science could develop a method to induce in one that brain state even though there are no magnolia trees for hundreds of miles. The skeptic continues by arguing that since we lack direct access to facts about the external world we lack non-inferential knowledge (or justification) for believing those facts. The final step of the skeptic’s argument is that we also lack sufficient evidence for inferential knowledge (or inferential justification) of those facts. Here the skeptic argues that the evidence we possess for external world beliefs does not adequately favor commonsense over a skeptical thesis. Any appeal to experiential evidence will not decide the case against the skeptic, and the skeptic is happy to enter the fray over whether commonsense beats skepticism with regard to the theoretical virtues, for example, coherence and simplicity. Berkeley, for instance, argued that commonsense decidedly lost the contest against a kind of skeptical thesis (Berkeley, Three Dialogues between Hylas and Philonous).

Internalists find this kind of argument very difficult to rebut. Internalists tend to focus on the final step and argue that even though experience does not imply that skepticism is false it nevertheless makes skepticism much less probable than commonsense. This response is intuitive but it brings with it a number of controversial commitments. The ensuing debate is too complex to summarize here. The upshot though is that it is no easy task to maintain this intuitive response. Consequently externalists think they have a distinct advantage over internalism. Externalists tend to think internalism lands in skepticism but that we have good reason to suspect skepticism is false. Externalists eagerly point out that their view can handle the skeptical challenge.

Externalists typically address the skeptic’s argument by denying that lack of direct access with a fact implies lack of non-inferential knowledge (or justification). In terms of an early version of externalism—D.M. Armstrong’s causal theory (Armstrong 1973)—if one’s perceptual belief that p is caused by the fact that makes it true then one knows that p. Other externalists unpack the externalist condition differently (for example, reliability or truth-tracking), but the core idea is that a lack of direct access doesn’t preclude non-inferential knowledge. Externalists press this virtue against internalist views that are saddled with the claim that lack of direct access implies no non-inferential knowledge (or justification). Assuming that the first and final steps of the skeptical argument are good (a very controversial assumption), internalism would imply that we lack knowledge. Externalists thus see their analysis of knowledge as aligning with commonsense (and against the skeptic) that we possess lots of knowledge.

Internalist Response

One internalist response to this reason for favoring externalism is to challenge the claim that internalism lands in skepticism. Some internalists develop views that imply one does have direct access to external world facts (see the entry on direct realism). Another internalist move is the abductivist response, which challenges the claim that we lack inferential knowledge or justification for believing commonsense. The abductivist response gets its name from Charles Sanders Peirce’s description of abduction as a good form of inductive reasoning that differs from standard inductive inference (for example, enumerative induction: this swan is white, so is the next one, so is this one as well, …, so, the general rule that all swans are white). The abductivist argues, to put it very roughly, that commonsense is the best explanation of the available data that we possess. Accordingly, we do possess inferential justification for believing that skepticism is false.

A different response to this alleged virtue of externalism is to argue that externalism yields only a conditional response to skepticism. If externalists maintain that some external condition, E, is sufficient for non-inferential knowledge or justification, then we get the result that if E obtains, then one has non-inferential knowledge. If, for example, perception is reliable, then we have perceptual knowledge. But, the internalist argues, we are not able to derive the unconditional claim that we have perceptual knowledge. In order to conclude that, we would have to know that E obtains, but it seems all the externalist can do is appeal to some other external condition, E1, and argue that if E1 obtains, then we know that E obtains. This strategy looks unpromising (see Stroud 1989).

4. The Significance of the I-E Debate

What is the I-E debate all about? Why has the debate garnered so much attention? This section considers several proposals about the significance of the I-E debate. Nearly everyone sees the I-E debate as metaepistemological: it concerns fundamental questions about the nature and goals of epistemological theorizing. The three proposals I examine in this section need not be exclusive; each reflects a facet of the I-E debate.

a. Disagreement over the Significance of the Thermometer Model

D.M. Armstrong introduced the “thermometer model” in epistemology as a way of illustrating his externalist theory (see Armstrong 1973). The “thermometer model” compares non-inferential knowledge to a good thermometer. A good thermometer reliably indicates the temperature, that is, its readings reliably indicate the actual temperature. In a similar manner, non-inferential knowledge is a matter of a belief being reliably true. On the thermometer model a belief that is reliably true need not meet any internalist conditions; if the belief stands in the right relation to the truth of what is believed, then the belief is an item of knowledge.

The significance of the thermometer model lies in the question whether one should understand non-inferential knowledge purely in terms of external conditions. The driving motivation behind this model is that non-inferential knowledge should be understood in just the same naturalistic sense in which one understands a good thermometer. The model aims to remove questions about non-inferential knowledge from what might be called a rationalist framework, in which all forms of knowledge are explicated in terms of reasons. Given the rationalist approach to noninferential knowledge, one looks for some fact, different from the original belief, that one is aware of and that makes probable (or certain) the truth of one’s belief. The thermometer model strikes at the heart of this rationalistic project.

It is not at all surprising that the thermometer model met heavy resistance. Laurence BonJour argued that stress on the thermometer model would imply that Norman, a reliable clairvoyant who has no evidence for his clairvoyance, knows that the president is in New York. BonJour observes that the thermometer model has us view epistemic agents merely as “cognitive thermometers”: if they reliably record the facts, then they have noninferential knowledge, even though from their own perspective their beliefs have little by way of positive support.

The metaepistemological issue about what to make of the thermometer model is closely related to the issue of what to make of ordinary knowledge ascriptions. It is a common practice to ascribe knowledge to individuals who are in many respects like reliable thermometers. The significant question is what to make of this fact. Do such individuals meet internalistic conditions? Are our ascriptions of knowledge correct in cases in which individuals do not meet any internalistic conditions? These are areas of ongoing research. The issues here are discussed in the contextualism literature.

b. Disagreement over the Guiding Conception of Justification

Another way to view the I-E debate is a disagreement over the guiding conception of justification. Alvin Goldman (1980) distinguishes between the regulative and theoretical conceptions of justification. The regulative conception of justification takes as its aim to offer practical advice to cognizers in order to improve their stock of beliefs. This epistemological aim, Goldman notes, is prominent in Descartes. The theoretical conception, by contrast, aims to offer a correct analysis of justification, that is, to specify the features of beliefs that confer epistemic status. Goldman sees our interest in a theory of justification as driven by these two different conceptions.

One way of explaining the significance of the I-E debate is over the role of regulative considerations in an account of justification. The access internalist can be seen as stressing the significance of some regulative conditions for a correct account of justification. This is most clearly seen in the stress on the ethics of belief. If a subject’s belief is justified then, in some sense, the subject has regulated her doxastic conduct appropriately. Externalists, by contrast, want to draw a sharp distinction between regulative and theoretical considerations to get the result that regulative considerations do not enter into one’s account of the nature of justification.

c. Disagreement over Naturalism in Epistemology

Another proposal about the significance of the I-E debate is that it is over the issue of whether to “naturalize” epistemology (see, for instance, Fumerton 1995, p. 66). As we saw above with the “thermometer model,” a thread that runs through externalist analyses is the idea that epistemic concepts—justification, evidence, and knowledge—can be understood in terms of nomological concepts. Armstrong’s account of noninferential knowledge invokes the idea of a natural relation that holds between a belief and the true state of affairs believed. When a belief stands in this natural relation to the true state of affairs believed, then the belief is an instance of noninferential knowledge. Moreover, this natural relation is similar to the relation between a thermometer reading and the actual temperature in a good thermometer. Other externalist analyses invoke different nomological concepts: Goldman’s (1979) account makes use of the idea of reliability; Robert Nozick’s (1981) account appeals to the idea of truth-tracking, which he unpacks in terms of causal concepts; and Fred Dretske’s (1981) account makes use of a naturalistic concept of information processing.

It is important to stress the context in which these externalist accounts arose. As we have seen, the recognition that the traditional justified true belief (JTB) account of knowledge failed led epistemologists to rethink the connection between true belief and knowledge. It is widely recognized that the traditional JTB account was largely explicated within a rationalist understanding of justification. Justification, on this tradition, invoked concepts such as implication, consistency, coherence, and, more broadly, reasons of which the subject was aware. The introduction of the Gettier problem led epistemologists to question whether this traditional assumption was correct. Externalist analyses attempted to explain how natural relations like causation and reliability could provide the key to understanding noninferential knowledge.

Internalists, by contrast, stress the significance of mental concepts to understanding noninferential knowledge or basic justification. These concepts need not be irreducible to physical concepts. But the key idea for internalism is that mere external facts of which a subject lacks awareness are not sufficient for analyzing epistemic concepts. As Fumerton stresses (1995, p. 67), the key epistemic concepts for internalists are concepts like Descartes’ clarity and distinctness, Russell’s notion of direct acquaintance, or, more elusively, Chisholm’s basic notion of being more reasonable than.

There are wide ranging issues with respect to naturalism in epistemology. One main issue is whether the evidential relation is contingent or necessary. Internalism can be understood as the view that the most basic evidential relation is necessary and consequently the theory of evidence is an a priori matter. Externalism, by contrast, can be understood as affirming that evidential relations are contingent (see, for example, Nozick (1981) Chapter 3 section III).

Another issue with respect to naturalism in epistemology is its connection to naturalism in the philosophy of mind. The naturalist aims to understand the mind as a physical system. Since physical systems can be explained without invoking mental concepts, a naturalist in epistemology is wary of using questionable mental concepts to elucidate the nature of epistemic concepts. Internalism in epistemology is not necessarily at odds with naturalism as a metaphysical view, but the internalist’s preferred concepts tend to come from commonsense psychology rather than the natural sciences. Externalists, by contrast, tend to stress natural concepts like causation, reliability, and tracking because these fit better with a naturalist view in the philosophy of mind.

5. Conclusion

The I-E debate develops out of the ruins of the traditional justified true belief account of knowledge. As Edmund Gettier famously illustrated, knowledge is more than justified true belief. Attempts to answer the Gettier problem generated the I-E debate. This debate centers on a diverse group of issues: the significance of ordinary knowledge attributions, the nature of rationality, the ethics of belief, and the role of naturalism in epistemology.

See also "Internalism and Externalism in Mind and Language."

6. References and Further Reading

  • Alston, W. 1983. “What’s Wrong with Immediate Knowledge?” Synthese 55, 73-95.
  • Alston, W. 1986. “Internalism and Externalism in Epistemology.” Philosophical Topics 14, 179-221.
  • Alston, W. 1988. “An Internalist Externalism.” Synthese 74, 265-283.
  • Alston, W. 1995. “How to think about Reliability” Philosophical Topics 23, 1-29.
  • Alston, W. 2005. Beyond “Justification”: Dimensions of Epistemic Evaluation. Ithaca, NY: Cornell University Press.
  • Armstrong, D.M. 1973. Belief, Truth and Knowledge. New York: Cambridge.
  • Bergmann, M. 2006. Justification without Awareness. New York: Oxford.
  • BonJour, L. 1980. “Externalist Theories of Empirical Knowledge.” Midwest Studies in Philosophy 5, 53-73. Reprinted in Kornblith 2001; page references are to the Kornblith reprint.
  • BonJour, L. 1985. The Structure of Empirical Knowledge. Cambridge, MA: Harvard University Press.
  • Chisholm, R. 1988. “The Indispensability of Internal Justification.” Synthese 74:3, 285-296.
  • Cohen, S. 1984. “Justification and Truth.” Philosophical Studies 46, 279-295.
  • Conee, E., and R. Feldman. 2004a. Evidentialism: Essays in Epistemology. New York: Oxford.
  • Conee, E., and R. Feldman. 2004b. “Internalism Defended” in Evidentialism: Essays in Epistemology. New York: Oxford, 53-82.
  • Dretske, F. 1971. “Conclusive Reasons,” Australasian Journal of Philosophy, 49, 1-22.
  • Dretske, F. 1981. Knowledge and the Flow of Information. Cambridge, MA: MIT Press.
  • Feldman, R. 2004a. “In Search of Internalism and Externalism.” The Externalist Challenge, ed. Richard Schantz. New York: Walter de Gruyter. pp. 143-156.
  • Feldman, R. 2004b. “Having Evidence.” in Conee & Feldman, Evidentialism: Essays in Epistemology. New York: Oxford, 219-241.
  • Feldman, R. 2005a. “Respecting the Evidence.” Philosophical Perspectives 19, 95-119.
  • Feldman, R. 2005b. “Justification is Internal.” Contemporary Debates in Epistemology. eds. Matthias Steup and Ernest Sosa. Malden, MA: Blackwell. pp. 270-284.
  • Foley, R. 1993. “What Am I to Believe?” in S. Wagner and R. Warner, eds. Naturalism: A Critical Appraisal. University of Notre Dame Press, 147-162.
  • Fumerton, R. 1988. “The Internalism/Externalism Controversy.” Philosophical Perspectives 2, 443-459.
  • Fumerton, R. 1995. Metaepistemology and Skepticism. Lanham, MD: Rowman & Littlefield.
  • Fumerton, R. 2004. “Inferential Internalism and the Presuppositions of Skeptical Arguments.” in The Externalist Challenge, ed. Richard Schantz. New York: Walter de Gruyter. pp.157-167.
  • Fumerton, R. 2006. “Epistemic Internalism, Philosophical Assurance and the Skeptical Predicament.” in Knowledge and Reality: Essays in honor of Alvin Plantinga, pp. 179-191.
  • Gettier, E. 1963. “Is Justified True Belief Knowledge?” Analysis 23, 121-123.
  • Goldman, A. 1967. “A Causal Theory of Knowing.” The Journal of Philosophy 64, 357-372.
  • Goldman, A. 1979. “What is Justified Belief?” in Justification and Knowledge ed. G.S. Pappas. Dordrecht: D. Reidel. 1-23.
  • Goldman, A. 1980. “The Internalist Conception of Justification,” Midwest Studies in Philosophy 5, 27-51.
  • Goldman, A. 1993. “Epistemic Folkways and Scientific Epistemology,” Philosophical Issues 3, 271-285.
  • Goldman, A. 1999. “Internalism Exposed.” Journal of Philosophy 96, 271-93.
  • Kornblith, H. 1988. “How Internal Can You Get?” Synthese 74, 313-27.
  • Kornblith, H. (Ed.) 2001. Epistemology: Internalism and Externalism. Malden, MA: Blackwell.
  • Lehrer, K. and S. Cohen. 1983. “Justification, Truth, and Coherence.” Synthese 55, 191-207.
  • Nozick, R. 1981. Philosophical Explanations. Cambridge, MA: Belknap Press.
  • Plantinga, A. 1993. Warrant: The Current Debate. New York: Oxford.
  • Sosa, E. 1991a. “Reliabilism and intellectual virtue” in Knowledge in Perspective: Selected Essays in Epistemology. New York: Cambridge University Press, 131-145.
  • Sosa, E. 1991b. “Knowledge and intellectual virtue” in Knowledge in Perspective: Selected Essays in Epistemology. New York: Cambridge University Press, 225-244.
  • Sosa, E., and L. BonJour. 2003. Epistemic Justification: Internalism vs. Externalism, Foundations vs. Virtues. Malden, MA: Blackwell.
  • Steup, M. 1999. “A Defense of Internalism.” in The Theory of Knowledge: Classical and Contemporary Readings, 2nd ed. Belmont, CA: Wadsworth, 373-84.
  • Steup, M. 2001. “Epistemic Duty, Evidence, and Internality.” in Knowledge, Truth, and Duty. ed. M. Steup. New York: Oxford.
  • Stroud, B. 1989. “Understanding Human Knowledge in General,” in M. Clay and K. Lehrer, eds., Knowledge and Skepticism. Boulder: Westview Press.
  • Stroud, B. 1994. “Scepticism, ‘Externalism’, and the Goal of Epistemology,” Proceedings of the Aristotelian Society, Supplementary Volume 68: 291-307.

Author Information

Ted Poston
Email: poston "at" jaguar1 "dot" usouthal "dot" edu
University of South Alabama
U. S. A.

Gottfried Leibniz: Causation

The views of Leibniz (1646-1716) on causation must stand as some of the more interesting in the history of philosophy, for he consistently denied that there is any genuine causal interaction between finite substances. And yet from another perspective, he sought to integrate both old and new causal taxonomies: On the one hand, Leibniz put forth a theory of causation that would accommodate the Scientific Revolution’s increasing mathematization of nature, one according to which efficient causes played a dominant role. On the other hand, Leibniz also sought to integrate certain aspects of traditional Aristotelian causation into his philosophy. In particular, while many of Leibniz’s contemporaries were rejecting Aristotelian final causes, Leibniz insisted that the pursuit of final causes was worthwhile. Indeed, they played a crucial role in his philosophical system. The result is that Leibniz produced a system with a complex integration of both old and new––of both final and efficient causes––while simultaneously denying there was any real causal interaction between substances at the most basic level. The resulting metaphysics is sufficient to secure him a significant place in the history of the philosophy of causation, one worthy of serious attention.

In introducing his views on causation, Leibniz nearly always pivoted his theory against what he saw as its main rivals, occasionalism and physical influx theory (influxus physicus). He thought both were unacceptable, and that his own theory was the only viable option. In presenting Leibniz’s own theory, the famous "preestablished harmony," this article follows his lead by considering, in the first section, why Leibniz deemed the competitors unacceptable. The article then discusses the details of Leibniz’s positive views on causation.

Table of Contents

  1. The Negative Stance: Leibniz against Physical Influx and Occasionalism
    1. Against Physical Influx
    2. Against Occasionalism
  2. The Positive Stance: Leibniz’s Preestablished Harmony
  3. Efficient and Final Causation
  4. Divine Conservation and Concurrence
  5. References and Further Reading
    1. Primary Sources
    2. Secondary Sources

1. The Negative Stance: Leibniz against Physical Influx and Occasionalism

When it came to introducing his theory of causation, preestablished harmony, Leibniz was fond of presenting it via an argument by elimination: he would set the theory up against its main competitors, reasoning that neither of them was acceptable and so each must be false. Consequently, since the preestablished harmony is entirely intelligible according to Leibniz, and more worthy of a divine creator, it must be the true theory of causation. The following passage from 1698, written with particular attention to mind–body causation, is typical of Leibniz’s presentation:

I have pointed out that we can imagine three systems to explain the intercourse which we find between body and soul, namely, (1) the system of mutual influence of one upon the other, which when taken in the popular sense is that of the Scholastics, and which I consider impossible, as do the Cartesians; (2) that of a perpetual supervisor who represents in the one everything which happens in the other, a little as if a man were charged with constantly synchronizing two bad clocks which are in themselves incapable of agreement –– this is the system of occasional causes; and (3) that of the natural agreement of two substances such as would exist between two very exact clocks. I find this last view fully as possible as that of a supervisor and more worthy of the author of these substances, clocks or automata. (GP IV, 520 [L 494])

This highly metaphorical passage presents Leibniz’s own view, the last of the three options, as both “possible,” and “more worthy” than its competitors of being the product of divine invention. The first view, which Leibniz refers to as the “system of mutual influence,” is also labeled by him “the theory of physical influence” (A VI, 6, 135 [NE 135]), and “the hypothesis of influx” (C 521 [L 269]), among other labels. Leibniz’s claim about this theory of causation is that it is simply impossible. The other main competitor, occasionalism (or “the system of occasional causes”) is possible according to Leibniz, but it is not worthy, and so it is at least implausible. Why did Leibniz consistently make such claims about the rival theories of causation?

a. Against Physical Influx

While the history of the influx theory is complex and often unclear, it seems to have originated in the Neoplatonic tradition and was put to work by a number of medieval Scholastic philosophers (see O’Neill, 1993). The details of the history and various formulations of the influx model need not concern us here, however, for what is important is that Leibniz rejects any model of causation that involves a transmission of parts between substances, that is, a passing on of something from one substance (the cause) to another (the effect). And Leibniz uses the terminology “influx” or “influence” to refer to any model of causation that involves passing properties, or “accidents,” from one substance to another, or from one “monad”––the term for Leibnizian substances––to another. The best-known passage containing Leibniz’s rejection of this model is from Monadology 7:

There is, furthermore, no way to explain how a monad could be altered or changed in its inner make-up by some other created being. For one can transpose nothing in it, nor conceive in it any internal motion that could be excited, directed, increased, or diminished within it, as can happen in composites where there is change among the parts. Monads have no windows through which something can enter into or depart from them. Accidents cannot be detached, nor wander about outside of substances, as the sensible species of the Scholastics formerly did. And so, neither substance nor accident can enter a monad from without. (GP VI, 607f. [AG 213f.])

The Scholastic model of causation involved properties of things (“species”) leaving one substance and entering another. Consider what happens when one looks at a red wall: one’s sensory apparatus is causally acted upon. According to the target of this passage, this involves a sensible property of the wall (a “sensible species”) entering into the mind’s sensorium. According to Leibniz, “nothing ever enters into our mind naturally from the outside” (GP IV, 607 [AG 214]). Leibniz’s message is clear enough: since substances as he conceives of them are “windowless”––that is, indivisible, partless, immaterial, soul-like entities––there is no place for anything to enter into them or leave them. As a result, one cannot conceive of a property or part of something entering a monad and transposing its parts, for monads have no parts and thus no portals through which anything could enter or exit. Given that monads have no parts or windows, it is, as we have seen Leibniz claim, impossible for this theory to be true. Hence, it is not true, according to Leibniz.

b. Against Occasionalism

It is clear that Leibniz viewed occasionalism––Malebranche’s theory of causation––as the leading contender, for he addressed it in a number of published and unpublished writings spanning the course of decades. According to occasionalism, God is the only truly causally efficacious being in the universe. According to Leibniz, Malebranche’s “strongest argument for why God alone acts” (ML 412) is roughly as follows. A true cause, for Malebranche, is one that bears a necessary connection to its effect. Since bodies cannot move themselves, it must be minds that move bodies. But since there is no necessary connection between the will of a finite mind and what it wills, it follows that the only true cause is the will of God, that is, the only will that bears a necessary connection to what it wills (its effects). Hence, what appear to be causally efficacious acts of will by finite beings are mere occasions for God––the only true cause––to exercise his efficacious will.

Leibniz used three arguments against occasionalism. First, he argued that occasionalism consistently violates “the great principle of physics that a body never receives a change in motion except through another body in motion that pushes it.” According to Leibniz, this principle has “been violated by all those who accept souls or other immaterial principles, including here even all of the Cartesians [such as Malebranche]” (GP VI, 541 [L 587]). In other words, Leibniz believed that occasionalism, by claiming that a material object can be put into motion by something other than another material object, namely, the occasional cause of a finite will and the true cause of the divine will, violated a fundamental principle of physics. As we shall see, Leibniz believed the preestablished harmony did not do so, since every non-initial state of a body in motion has, as a real cause, some state of a body in motion.

Second, Leibniz often argued that occasionalism involved “perpetual miracles.” Consider the following from a letter to Antoine Arnauld of 30 April 1687:

[I]f I understand clearly the opinions of the authors of occasional causes, they introduce a miracle which is no less one for being continual. For it seems to me that the concept of the miracle does not consist of rarity. … I admit that the authors of occasional causes may be able to give another definition of the term, but it seems that according to usage a miracle differs intrinsically through the substance of the act from a common action, and not by an external accident of frequent repetition, and that strictly speaking God performs a miracle whenever he exceeds the forces he has given to creatures and maintains in them. (GP II, 92f. [LA 116])

Notice that Leibniz’s objection is not simply that occasionalism is miraculous because God is constantly acting in the course of nature. Rather, his objection is that according to occasionalism, there is nothing in the nature of objects to explain how bodies behave. All change on Malebranche’s system is explained by appeal to God, and not by the natures or intrinsic forces of created things. Finite bodies on this view are merely extended hunks of matter with no nature by appeal to which one can explain motion. Thus, there is no natural explanation for natural change (no naturally inner cause of motion), and hence such change is supernatural, that is, miraculous.

Finally, this second argument is closely connected with a third argument. Throughout all of his later years, Leibniz sought to distance himself from Spinoza. His primary way of doing so was to insist that there are genuine finite substances, a claim at odds with Spinoza’s monism. According to Leibniz, the very nature of a substance consists in force, or its ability to act, for if it has no such ability, then it is a mere modification of God, the only other substance who could act. Leibniz believed that occasionalism was in danger of collapsing into the view of Spinoza, a doctrine inconsistent with traditional theology and, in any event, according to Leibniz, one at odds with the common sense view that creatures are genuine individuals:

I have many other arguments to present and several of them serve to show that according to the view which completely robs created things of all power and action, God would be the only substance, and created things would be only accidents or modifications of God. So those who are of this opinion will, in spite of themselves, fall into that of Spinoza, who seems to me to have taken furthest the consequences of the Cartesian doctrine of occasional causes. (GP IV, 590 [WF 164])

Because occasionalism makes God the principle of activity in created substances, it makes God the very nature of created substances. Hence, there is only one substance (God), and created individuals are modifications of God. So, Leibniz argued, occasionalism has the dangerous consequence of collapsing into Spinozism. (For considerations of Leibniz’s treatments of occasionalism, see Rutherford, 1993; Sleigh, 1990.)

2. The Positive Stance: Leibniz’s Preestablished Harmony

Leibniz maintained that created substances were real causes, that God was not the only causally efficacious being (that is, that occasionalism was false), and that intersubstantial causation could not be understood in terms of a physical influx. So, what was Leibniz’s account of causation?

Leibniz's account of causation was given in terms of his famous doctrine of the preestablished harmony. This doctrine contains three main ingredients:

(1) No state of a created substance has as a real cause some state of another created substance (that is, a denial of intersubstantial causality).

(2) Every non-initial, non-miraculous, state of a created substance has as a real cause some previous state of that very substance (that is, an affirmation of intrasubstantial causality).

(3) Each created substance is programmed at creation such that all its natural states and actions are carried out in conformity with––in preestablished harmony with––all the natural states and actions of every other created substance.

Consider the above claims in application to the mind-body relation. Leibniz held that for any mental state, the real cause of that state is neither a state of a body nor the state of some other mind. And for any bodily state, the real cause of that state is neither a state of a mind nor the state of some other body. Further, every non-initial, non-miraculous, mental state of a substance has as a real cause some previous state of that very mind, and every non-initial, non-miraculous, bodily state has as a real cause some previous state of that very body. Finally, created minds and bodies are programmed at creation such that all their natural states and actions are carried out in mutual coordination, with no intersubstantial mind-body causation involved.

For example, suppose that Troy is hit in the head with a hammer (call this bodily state Sb) and pain ensues (call this mental state Sm), a case of apparent body-to-mind causation. Leibniz would say that in such a case some state of Troy's mind (soul) prior to Sm was the real cause of Sm, and that Sb was not a real causal factor in the obtaining of Sm. Suppose now that Troy has a desire to raise his arm (call this mental state Sm*), and the raising of his arm ensues (call this bodily state Sb*), a case of apparent mind-to-body causation. Leibniz would say that in such a case some state of Troy's body prior to Sb* was the real cause of Sb*, and that Sm* was not a causal factor in the obtaining of Sb*. So although substances do not causally interact, their states accommodate one another as if there were causal interaction among substances.

Mind-body causation was merely one case of causation, for Leibniz believed that a similar analysis is to be given in any case of natural causation. When one billiard ball in motion causes another one to move, there exists, metaphysically speaking, no real interaction between them. Rather, the struck billiard ball moves spontaneously upon contact with the billiard ball in motion. It does so in perfect harmony, that is, in such a way that it appears as though the first causes the second to move. All of this is summarized in Leibniz’s New System of Nature (1695), right after his rejection of occasionalism and physical influx:

Therefore, since I was forced to agree that it is not possible for the soul or any other true substance to receive something from without … I was led, little by little, to a view that surprised me, but which seems inevitable, and which, in fact, has very great advantages and rather considerable beauty. That is, we must say that God originally created the soul (and any other real unity) in such a way that everything must arise for it from its own depths, through a perfect spontaneity relative to itself, and yet with a perfect conformity relative to external things. … There will be a perfect agreement among all these substances, producing the same effect that would be noticed if they communicated through the transmission of species or qualities, as the common philosophers imagine they do. (GP IV, 484 [AG 143f.])

In the last sentence of the above passage, Leibniz refers to what the “common philosophers imagine.” As we have seen, Leibniz is here referring to those who endorse the influx theory, the view that postulates “the transmission of species or qualities” (see Against Physical Influx above). Although Leibniz clearly found this theory unacceptable at the end of the day, he did nonetheless indicate that it is an acceptable way of understanding phenomenal nature. It is worth underscoring this point, as it helps to highlight what exactly Leibniz has in mind. He writes in the New System:

Besides all the advantages that recommend this hypothesis [that is, preestablished harmony], we can say that it is something more than a hypothesis, since it hardly seems possible to explain things in any other intelligible way, … Our ordinary ways of speaking may also be easily preserved. For we may say that the substance whose state explains a change in an intelligible way (so that we may conclude that it is this substance to which the others have in this respect been adapted from the beginning, in accordance with the order of the decrees of God) is the one which, so far as this change goes, we should therefore think of as acting upon the others. Furthermore, the action of one substance on another is neither the emission nor the transplanting of an entity, as commonly conceived, and it can be reasonably understood only in the way I have just described. It is true that we can easily understand in connection with matter both the emission and receiving of parts, by means of which we quite properly explain all the phenomena of physics mechanically. But a material mass is not a substance, and so it is clear that action as regards an actual substance can only be as I have described. (GP IV, 487 [WF 20]; my emphasis)

There are at least two points worth emphasizing in this passage. First, Leibniz was clearly aware that his theory was at odds with common sense, that is, it is at odds with “our ordinary ways of speaking.” As the above passage indicates, he was concerned to preserve our usual ways of speaking about causal interactions. As a result, Leibniz held that there was a sense in which one could say, for example, that mental events influence bodily events, and vice versa. He wrote to Antoine Arnauld that although “one particular substance has no physical influence on another … nevertheless, one is quite right to say that my will is the cause of this movement of my arm …; for the one expresses distinctly what the other expresses more confusedly, and one must ascribe the action to the substance whose expression is more distinct” (GP II, 71 [LA 87]). In this passage, Leibniz sets forth what he believed the metaphysical reality of apparent intersubstantial causation amounts to. We begin with the thesis that every created substance perceives the entire universe, though only a portion of it is perceived distinctly, most of it being perceived unconsciously, and, hence, confusedly. Now consider two created substances, x and y (x not identical to y), where some state of x is said to be the cause of some state of y. Leibniz's analysis is this: when the causal state of affairs occurred, the relevant perceptions of substance x became more distinct, while the relevant perceptions of substance y became more confused. Insofar as the relevant perceptions of x become increasingly distinct, it is “causally” active; insofar as the relevant perceptions of substance y become increasingly confused, it is passive. In general, causation is to be understood as an increase in distinctness on the part of the causally active substance, and an increase in confusedness on the part of the passively affected substance.
Again, each substance is programmed at creation to be active/passive at the relevant moment, with no occurrence of real substantial interaction. Thus, ordinary ways of speaking are preserved on the grounds that it is true according to the “distinct/confused analysis” to say that one object is the cause of another.

Second, the above passage indicates that when it comes to a mechanical study of phenomenal nature––that is, when it comes to natural philosophy––the influx model may be used. In a way this is not surprising, for as Leibniz makes clear in this passage, the objects of mechanics are physical masses, and these objects have parts (they have “windows”) via which parts can enter and exit and cause change. They are not substances, which, again, have no such parts. So, it appears to be Leibniz’s view that at the level of the most real, the level of substances (monads), preestablished harmony is the correct account. However, the influx model is acceptable at the phenomenal level of mechanics, perhaps as an abstraction from, or idealization of, the underlying reality. But note that this level is indeed phenomenal, that is, only an appearance, and any analysis on this level is not the end of the story. Still, for Leibniz, the fact that it is acceptable when it comes to mechanics preserves our ordinary ways of speaking, since it is a model of genuine intersubstantial causation. But such a way of speaking, for Leibniz, is certainly not metaphysically rigorous.

3. Efficient and Final Causation

This last point about different Leibnizian metaphysical levels relates to another unique characteristic of Leibniz’s system. Although at the deepest level of analysis, preestablished harmony reigns supreme in Leibniz’s metaphysics, it is also true that Leibniz embraced a specific taxonomy of types of naturally operative causes, one that incorporated both ancient and modern conceptions of causation. Specifically, Leibniz maintained, in accordance with his belief that the phenomenal level can be treated as engaging in intersubstantial causation, that “laws of efficient causes” govern bodies. Consider the following from the Monadology:

The soul follows its own laws and the body likewise follows its own; and they agree by virtue of the preestablished harmony among all substances, because they are all representations of one self-same universe.

Souls act according to the laws of final causes through appetition, ends, and means. Bodies act according to the laws of efficient causes or of motions. And the two realms, that of efficient causes and that of final causes, are harmonious with one another. (GP VI, 620 [AG 223])

In accordance with the mechanical philosophy that prevailed during Leibniz’s lifetime, he held that the motions of bodies are to be understood as engaging in efficient causal relations, or behaving according to “laws of efficient causes.” But Leibniz also believed, as the above passage indicates, that final causation was prevalent in the world, and that it operated in harmony with the realm of efficient causation. Indeed, in the passage above, Leibniz presented his usual bifurcation of the world into two realms: the bodily realm is governed by efficient causation, and the realm of souls (individual substances) is governed by final causation.

A final cause of some activity is that for which that activity occurs; it is a goal, or end, or purpose of some activity. In claiming that souls act according to final causes, Leibniz seems to have in mind that they are essentially goal-driven entities. Any given substance (such as a soul), according to Leibniz, is endowed with two powers: perception and appetite. Leibniz characterizes appetition thus: “The action of the internal principle which brings about the change or the passage from one perception to another may be called appetition” (GP VI, 609 [AG 215]). Appetitions are the ultimate principles of change in the Leibnizian universe, as they are responsible for the activity of the ultimately real things, substances. In claiming, therefore, that substances are governed by laws of final causes, Leibniz has in mind that appetitions lead a substance to strive for certain future perceptual states:

[S]ince the nature of a simple substance consists of perception and appetite, it is clear that there is in each soul a series of appetites and perceptions, through which it is led from the end to the means, from the perception of one object to the perception of another. (C 14 [MP 175])

It is a matter of some controversy whether Leibniz held that appetitive states of a substance are intrasubstantial productive causes of change (that is, efficient causes of change), and there are texts that can be brought to bear on both sides of the issue. (See Carlin, 2004, 2006; Davidson, 1998; Murray, 1995, 1996; Paull, 1992.) In some passages, Leibniz separates the world into what appear to be functionally autonomous causal realms:

Souls act according to the laws of final causes through appetition, ends, and means. Bodies act according to the laws of efficient causes or of motions. And the two realms, that of efficient causes and that of final causes, are harmonious with one another. (GP VI, 620 [AG 223])

But in other texts, Leibniz seems clearly to suggest that final causes are a species of efficient cause, and hence are productive causes of change. Consider the following:

[T]he present state of body is born from the preceding state through the laws of efficient causes; the present state of the soul is born from its preceding state through the laws of final causes. The one is the place of the series of motion, the other of the series of appetites; the one is passed from cause to effect, the other from end to means. And in fact, it may be said that the representation of the end in the soul is the efficient cause of the representation in the same soul of the means. (Dut, II, 2, 134; my emphasis)

Thus, in this text, Leibniz suggests that final causes themselves produce future perceptions by way of efficient causation.

In this connection, it is worth noting that there is a sense in which final causation is operative at the level of phenomenal bodies as well. “There is,” Leibniz writes in the New Essays, “a moral and voluntary element in what is physical, through its relation to God. . . . [B]odies do not choose for themselves, God having chosen for them” (A VI, 6, 179 [NE 179]). Mechanical bodies, understood as phenomenal hunks of matter, do not exhibit intentionality. Thus, they do not frame their own ends in the way that immaterial substances do. Still, there is a sense in which they are subject to final causes, for they act for the ends that God has set for them, and they do so by way of mechanical efficient causation. Thus, there is some suggestion that Leibniz held that both efficient and final causation permeated the universe at multiple ontological levels.

But whether or not Leibniz believed that both types of causes operated at multiple ontological levels, he did nonetheless believe that the harmony of efficient and final causes explained the ordinary conscious activity of substances, including that sort of activity often cited as involving free will:

[T]he laws that connect the thoughts of the soul in the order of final causes and in accordance with the evolution of perceptions must produce pictures that meet and harmonize with the impressions of bodies on our organs; and likewise the laws of movements in the body, which follow one another in the order of efficient causes, meet and so harmonize with the thoughts of the soul that the body is induced to act at the time when the soul wills it. (GP VI, 137 [T 62])

Although it might appear to some that such a view is inconsistent with freedom of the will, Leibniz did not think so, for he repeatedly maintained that human souls, though governed by preestablished laws of final causes, act with freedom of the will (e.g. GP VII, 419 [L 716f.]). (Whether he was entitled to such a view is another matter.) It is also worth noting that in a number of passages, Leibniz argues that this harmony between types of causation accounts for the very union of the human body and soul (cf. GP VI, 599 [AG 208]).

Finally, Leibniz does not restrict his doctrine of final causation to the conscious activity of rational agents, for he seems to recognize final causal activity everywhere in his system. Consider the following from his Notes on Stahl:

[T]hat motion is not improperly called voluntary, which is connected with a known distinct appetite, where we notice the means at the hands of our soul, being adapted to the end itself; although in other [non-voluntary] movement also, appetites proceed to their own ends through means, albeit they are not noticed by us. (Dut II, 2, 136; my emphasis)

Here Leibniz claimed that final causes operate at the level of the unconscious: a mental state can function as a final cause without our being aware of it. In a letter of 8 May 1704 to Sophie Charlotte, Leibniz made essentially the same point: “So that even in our instinctive or involuntary actions, where it seems only the body plays a part, there is in the soul an appetite for good or an aversion to evil which directs it, even though our reflection is not able to pick it out in the confusion” (GP III, 347 [WF 224f.]). It seems to follow that the preestablished harmony between efficient and final causes has wider application than one might suppose at first glance.

4. Divine Conservation and Concurrence

Although Leibniz maintained against the occasionalists and Spinoza that created substances were genuine sources of their own activity, and that it is not true that God alone is the source of all natural activity, he did nonetheless believe in a doctrine of divine conservation and concurrence. Briefly, according to this doctrine, God is not an absentee creator, but is involved in every aspect of the natural world, including the causal activity of created substances. Since Leibniz held that creatures are real causes of their own actions, this means that both God and creatures concur in bringing about the effects of the actions of created substances.

Although the texts on this aspect of Leibniz’s theory of natural causation are notoriously thorny, the following passage seems to represent what is his considered view:

The concurrence of God consists in giving us continually whatever there is of reality in us and our actions, insofar as it contains some perfection; but what there is therein of limitation or imperfection is a consequence of preceding limitations, which are originally in the creature. (GP VI, 340 [T 377])

In general, the idea seems to be this: creatures are real causes of the imperfections in actions, while God is responsible for the perfection contained in the action. But this general idea seems clearly inconsistent with a number of other doctrines put forth by Leibniz. For example, there is reason to believe that he holds that a substance can be said to act only insofar as it tends towards perfection (cf. GP VI, 615 [AG 219]). If this is the case, then in conjunction with the passage above, it appears that God is the only active agent. Moreover, Leibniz, along with many other seventeenth-century thinkers, held that divine conservation of the world amounts to a continual recreation of every substance and all their states. If this is the case, one is left wondering how not to slip into the occasionalism of Malebranche, for it would seem once again that creatures are not producing anything. This notoriously difficult topic has recently spawned a body of secondary literature, as commentators have struggled with the apparent inconsistencies. (See Adams, 1994; Lee, 2004; Sleigh, 1990.)

5. References and Further Reading

a. Primary Sources

References to works of Leibniz are cited by abbreviation according to the key below. Each one is cited by page number unless otherwise noted.

A: Sämtliche Schriften und Briefe. Multiple volumes in seven series. Edited by the German Academy of Sciences. Darmstadt and Berlin: Berlin Academy, 1923–. Cited by series, volume, and page.
AG: Philosophical Essays. Edited and translated by Roger Ariew and Daniel Garber. Indianapolis: Hackett, 1989.
C: Opuscules et fragments inédits de Leibniz. Edited by Louis Couturat. Paris: Alcan, 1903.
Dut: Opera Omnia. Edited by L. Dutens. Geneva: Fratres De Tournes, 1768. Cited by volume and page.
GP: Die Philosophischen Schriften von Gottfried Wilhelm Leibniz. 7 vols. Edited by C.I. Gerhardt. Berlin: Weidmann, 1875–1890. Cited by volume and page.
L: Philosophical Papers and Letters. Edited by Leroy Loemker. 2nd ed. Dordrecht: Reidel, 1969.
LA: The Leibniz-Arnauld Correspondence. Translated and edited by H.T. Mason. Manchester: Manchester UP, 1967.
MP: Philosophical Writings. Translated and edited by Mary Morris and G.H.R. Parkinson. London: Dent, 1973.
NE: New Essays on Human Understanding. Translated and edited by Peter Remnant and Jonathan Bennett. Cambridge: Cambridge UP, 1982. The original French text is in A VI, 6.
T: Theodicy. Edited by Austin Farrer and translated by E.M. Huggard. New Haven: Yale UP, 1952. Cited by section number as in GP VI.
WF: Leibniz’s ‘New System’ and Associated Contemporary Texts. Translated and edited by R.S. Woolhouse and Richard Francks. Oxford: Oxford UP, 1997.

b. Secondary Sources

  • Adams, Robert. 1994. Leibniz: Determinist, Theist, Idealist. Oxford: Oxford UP.
    • A classic and thorough discussion of Leibniz’s views on a number of topics, including human and divine causation. The book consults a wealth of primary sources.
  • Brown, Gregory. 1992. “Is There a Pre-Established Harmony of Aggregates in the Leibnizian Dynamics, or Do Non-Substantial Bodies Interact?,” Journal of the History of Philosophy 30, pp. 53-75.
    • This article argues that Leibnizian aggregates do not interact in Leibniz’s physics, and also discusses the importance of distinguishing ontological levels in Leibniz’s philosophy.
  • Carlin, Laurence. 2006. “Leibniz on Final Causes,” Journal of the History of Philosophy 44 (2), pp. 217-233.
    • This paper argues that for Leibniz, final causes are species of efficient cause, and are therefore just as productive as efficient causes.
  • Carlin, Laurence. 2004. “Leibniz on Conatus, Causation, and Freedom,” Pacific Philosophical Quarterly 85 (4), pp. 365–379.
    • This paper argues that Leibniz was a causal determinist by focusing on his treatment of causation in relation to his concept of conatus, or his concept of force in his physics.
  • Cover, Jan and Mark Kulstad, eds. 1990. Central Themes in Early Modern Philosophy. Indianapolis: Hackett.
    • This is an anthology that contains a number of articles on causation in early modern philosophy, including an article on the relationship between Leibniz and occasionalism.
  • Davidson, Jack. 1998. “Imitators of God: Leibniz on Human Freedom,” Journal of the History of Philosophy 36, pp. 387–421.
    • This paper argues that Leibniz was a causal determinist on the grounds that his model of human volition imitates the model of divine agency.
  • Garber, Daniel. 1994. “Leibniz: Physics and Philosophy” in Jolley, ed., The Cambridge Companion to Leibniz, pp. 270-352.
    • This article is a sustained treatment of Leibniz’s views on the interaction between dynamical bodies, the laws of nature, and efficient and final causation.
  • Jolley, Nicholas. 1994. The Cambridge Companion to Leibniz. Cambridge: Cambridge UP.
    • This anthology contains articles on many aspects of Leibniz’s philosophy. It is written by leading scholars, and could very well be the first place to look for someone new to Leibniz.
  • Kulstad, Mark. 1990. “Appetition in the Philosophy of Leibniz.” In A. Heinkemp, W. Lenzen, and M. Schneider, eds., Mathesis Rationis, pp. 133-151.
    • This is a thorough examination of Leibniz’s concept of appetition, and is particularly helpful in relating appetition to his physics and to human volition.
  • Lee, Sukjae. 2004. “Leibniz on Divine Concurrence.” Philosophical Review 113 (2), pp. 203-248.
    • This paper is a close and controversial examination of Leibniz’s doctrines of divine conservation and concurrence.
  • Murray, Michael. 1995. “Leibniz on Divine Foreknowledge of Future Contingents and Human Freedom.” Philosophy and Phenomenological Research 55: 75-108.
    • This article argues that Leibniz was not a causal determinist, contrary to what others have argued.
  • Murray, Michael. 1996. “Intellect, Will, and Freedom: Leibniz and His Precursors.” The Leibniz Society Review 6: 25-60.
    • This paper develops the interpretation in Murray (1995) by drawing on a wealth of historical sources, including medieval philosophers’ treatment of the concept of moral necessity.
  • Nadler, Steven, ed. 1993. Causation in Early Modern Philosophy. University Park: Penn State UP.
    • This collection of papers is the classic source for papers on causation in early modern philosophy.
  • O’Neill, Eileen. 1993. “Influxus Physicus.” In Nadler, Steven, ed. Causation in Early Modern Philosophy, pp. 27-57.
    • This paper traces the history of the physical influx theory, and analyses its main tenets. It has become the classic treatment of the issue.
  • Paull, R. Cranston. 1992. “Leibniz and the Miracle of Freedom,” Nous 26: 218-235.
    • This paper contains an argument for the conclusion that Leibniz was not a causal determinist. It draws attention to certain passages that appear troubling for the causal determinist reading.
  • Rutherford, Donald. 1995. Leibniz and the Rational Order of Nature. Cambridge: Cambridge UP.
    • This book contains excellent discussions of Leibniz’s views on the properties of the best possible world, and is particularly helpful on the topic of how the level of efficient causes relates to the level of final causes.
  • Rutherford, Donald. 1993. “Natures, Laws, and Miracles: The Roots of Leibniz’s Critique of Occasionalism” in Nadler, Steven, ed. Causation in Early Modern Philosophy, pp. 135-158.
    • A clear discussion of exactly why Leibniz disagrees with Malebranche’s occasionalism. The article challenges some scholars’ interpretations.
  • Sleigh, Robert C. 1990. Leibniz and Arnauld: A Commentary on Their Correspondence. New Haven: Yale UP.
    • This book examines a number of Leibniz’s views on contingency, substance, and causation in the context of Leibniz’s classic exchange with Antoine Arnauld. It also contains helpful discussions of Leibniz’s treatment of occasionalism.
  • Sleigh, Robert C. 1990. “Leibniz on Malebranche on Causality” in Cover and Kulstad, eds. Central Themes in Early Modern Philosophy, pp. 161-194.
    • This is a helpful discussion of Leibniz’s reaction to Malebranche’s occasionalism.
  • Wilson, Margaret. 1976. “Leibniz’s Dynamics and Contingency in Nature” in Machamer and Turnbull, eds., Motion and Time, Space and Matter, pp. 264-289.
    • This is a discussion of Leibniz’s belief that the causal laws of nature must be grounded in considerations about final causes.

Author Information

Laurence Carlin
University of Wisconsin, Oshkosh
U. S. A.

Doxastic Voluntarism

Doxastic voluntarism is the philosophical doctrine according to which people have voluntary control over their beliefs. Philosophers in the debate about doxastic voluntarism distinguish between two kinds of voluntary control. The first is known as direct voluntary control and refers to acts which are such that if a person chooses to perform them, they happen immediately. For instance, a person has direct voluntary control over whether he or she is thinking about his or her favorite song at a given moment. The second is known as indirect voluntary control and refers to acts which are such that although a person lacks direct voluntary control over them, he or she can cause them to happen if he or she chooses to perform some number of other, intermediate actions. For instance, a person untrained in music has indirect voluntary control over whether he or she will play a melody on a violin. Corresponding to this distinction between two kinds of voluntary control, philosophers distinguish between two kinds of doxastic voluntarism. Direct doxastic voluntarism claims that people have direct voluntary control over at least some of their beliefs. Indirect doxastic voluntarism, by contrast, claims that people have indirect voluntary control over at least some of their beliefs, for example, by doing research and evaluating evidence.

This article offers an introductory explanation of the nature of belief, the nature of voluntary control, the reasons for the consensus regarding indirect doxastic voluntarism, the reasons for the disagreements regarding direct doxastic voluntarism, and the practical implications for the debate about doxastic voluntarism in ethics, epistemology, political theory, and the philosophy of religion.

Table of Contents

  1. Introduction
  2. Indirect Doxastic Voluntarism
  3. Direct Doxastic Voluntarism
    1. Arguments against Direct Doxastic Voluntarism
      1. The Classic Argument
      2. The Empirical Belief Argument
      3. The Intentional Acts Argument
      4. The Contingent Inability Argument
    2. Arguments for Direct Doxastic Voluntarism
      1. The Observed Ability Argument
      2. The Action Analogy Argument
  4. Significance: Ethical, Epistemological, Political, and Religious
  5. Conclusion
  6. References and Further Reading

1. Introduction

The central issue in the debate about doxastic voluntarism is the relationship between willing and acquiring beliefs. Necessarily related to this central issue are two other important issues: the nature of belief and the nature of the will, or more specifically, the nature of voluntary control. In order to provide a basic foundation for understanding the central issue, let us begin by clarifying each of these related issues.

First, let us make a preliminary and necessarily cursory clarification about the nature of belief. Consider your own case. Assuming that you are like most people, you believe a wide variety of things. Among the various things you believe, is one that the sum of thirty-seven and three is forty? If all went well, as you read and replied to that question, two things happened: (i) you comprehended the proposition the sum of thirty-seven and three is forty—that is, it was immediately present to your mind, you understood it, and you actively considered it, etc.—and (ii) you answered affirmatively. In light of such examples, philosophers have traditionally characterized the nature of belief as follows. To say that a person believes a proposition is to say that, at a given moment, the person both comprehends and affirms the proposition. It is in this sense that Augustine claims, “To believe is nothing but to think with assent” (Augustine, De Praedestinatione Sanctorum, v; cf. Aquinas, Summa Theologiae II-II, Q. 2, a. 1; Descartes, Meditations IV, Principles of Philosophy I.34; Russell 1921. For a detailed discussion of the nature of assent, see, for example, Newman 1985).

This traditional characterization is a reasonable starting point for understanding the nature of belief, but it is at the very least incomplete. To see why, reflect on your own experience of considering the above-raised question. Both prior to and subsequent to considering the question, the proposition the sum of thirty-seven and three is forty was neither immediately present to your mind nor something you were actively considering. Nonetheless, you still believed it, and you still believe it. In this respect, you are like most other people. There are, as a matter of fact, some propositions that people believe about which they are currently thinking and others that they believe about which they are not currently thinking. To account for this fact, let us amend the traditional characterization of belief. To say that a person believes some proposition is to say that, at a given moment, the person either

i) comprehends and affirms the proposition, or

ii) is disposed to comprehend and to affirm the proposition (cf. Audi 1994, Price 1954, Ryle 2000, Scott-Kakures 1994, Schwitzgebel 2002).

There are, as one might expect, a number of subtle and controversial issues regarding the nature of belief that one could raise at this point, and addressing such issues would certainly be important in developing a complete theory about doxastic voluntarism. This amended description of belief should be sufficient, however, for our introductory discussion.

Second, let us make a preliminary and, again, necessarily cursory clarification about the nature of voluntary control. Take a moment to visualize the White House or to imagine the melody of your favorite song. Such mental activities are not difficult. Assuming your mental faculties are functioning properly, if you choose to perform these actions, they will happen immediately. They are things over which you have what we will call direct voluntary control. Suppose, however, that you want to learn either to play a particular song on a musical instrument on which you are currently untrained or to say a particular phrase in a foreign language that you do not currently speak. You will not acquire these abilities immediately after choosing to do so. Rather, you will have to choose to engage in a series of acts (for example, attending lessons, practicing, etc.) that will eventually result in your acquiring these abilities. So, you do not have direct voluntary control over whether you can play a musical instrument or learn a foreign language. Nonetheless, acquiring abilities such as these is something that you choose to do. Thus, it is something over which you have a form of voluntary control—namely, what we will call indirect voluntary control.

As with the nature of belief, at this point one could raise a number of subtle and controversial issues regarding the nature of voluntary control, and addressing such issues would surely be important in developing a complete theory about doxastic voluntarism. (For related discussions of these issues, see, for example, Alston 1989, Steup 2000, Nottelmann 2006.) Nonetheless, this distinction between direct and indirect voluntary control should be sufficient for our introductory discussion.

Corresponding to this distinction between direct and indirect voluntary control, philosophers distinguish between direct doxastic voluntarism and indirect doxastic voluntarism. The former is concerned with answering the question: to what extent, if any, do people have direct voluntary control over their beliefs? The latter is concerned with answering the question: to what extent, if any, do people have indirect voluntary control over their beliefs? Since the debate about indirect doxastic voluntarism is less contentious, let us examine it first.

2. Indirect Doxastic Voluntarism

Is indirect doxastic voluntarism true? Consider the following cases. First, suppose you walk into a room that is dark but has a working light that you can turn on by flipping the switch on the wall. When you walk into the room, you believe the proposition the light in the room is off. You realize, though, that you could change your belief by flipping the switch, so you flip the switch. The light comes on, and subsequently, you believe the proposition the light in the room is on. Second, suppose a usually trustworthy friend tells you that Paul David Hewson is one of the most popular singers of all time. You have no idea who this Hewson fellow is, but you would like to know whether you should trust your friend and, hence, believe the proposition Paul David Hewson is one of the most popular singers of all time. So, you do some research and discover that Paul David Hewson is the legal name of the incredibly popular lead singer for the Irish rock band U2. Consequently, you come to believe that Paul David Hewson is one of the most popular singers of all time. Thus, there are at least two cases in which someone has indirect voluntary control over his or her beliefs.

These cases, however, are not unique. The first illustrates that people have indirect voluntary control over whether they will believe any proposition, if they have voluntary control over the evidence confirming or disconfirming the proposition. The second illustrates that people have indirect voluntary control over whether they will believe many propositions, provided that they can discover evidence confirming or disconfirming these propositions, that they choose to seek out this evidence, and that they form their beliefs according to the evidence.

The significance of cases such as these is widely recognized among participants in the debate about doxastic voluntarism. (For summaries of such cases, see, for example, Alston 1989, Feldman 2001.) In fact, they are so widely accepted that philosophers seem to have reached a consensus on one aspect of the debate, recognizing that indirect doxastic voluntarism is true. In light of this consensus, they focus the majority of their attention on the more contentious question of direct doxastic voluntarism, to which we will now turn.

3. Direct Doxastic Voluntarism

Is direct doxastic voluntarism true? On this issue, philosophers are divided. Many argue that it is not, but some argue that it is. To each position, however, there are important challenges. Let us consider the most influential arguments and counterarguments in some detail, beginning with arguments against direct doxastic voluntarism.

a. Arguments against Direct Doxastic Voluntarism

i. The Classic Argument

Bernard Williams (1970) offers two arguments against direct doxastic voluntarism. Call the first “The Classic Argument,” since it is, perhaps, the locus classicus of the debate. Call the second “The Empirical Belief Argument,” since the notion of empirical belief is its essential feature.

The Classic Argument runs as follows: If people could believe propositions at will, then they could judge propositions to be true regardless of whether they thought the propositions were, in fact, true. Moreover, they would know that they had this power—that is, the power to form a judgment regarding a proposition regardless of whether they thought it was true. For instance, direct doxastic voluntarism seems to imply that, at this very moment, Patti could form the belief that Oswald killed Kennedy regardless of whether, at this very moment, she regards the proposition Oswald killed Kennedy as true or as false. Moreover, if direct doxastic voluntarism is correct, then it seems that Patti would know that she has the power to form a judgment regarding the proposition Oswald killed Kennedy regardless of whether she considers the proposition to be true. This phenomenon, however, is at odds with the nature of belief for the following reason. If a person believes that a proposition is true, then he or she would be surprised (or experience some related form of cognitive dissonance) to discover that the proposition is false. Similarly, if a person believes that a proposition is false, then he or she would be surprised (or experience some related form of cognitive dissonance) to discover that the proposition is true. For instance, if Patti believes that Oswald killed Kennedy, then she would experience some form of cognitive dissonance upon discovering that C.I.A. operatives killed Kennedy. Similarly, if Patti believes that Oswald did not kill Kennedy, then she would experience some form of cognitive dissonance upon discovering that he did. Thus, people could not seriously regard the beliefs they set out to acquire at will as beliefs at all—that is, as things that “purport to represent reality.” Williams continues,

With regard to no belief could I know—or, if all this is to be done in full consciousness, even suspect—that I had acquired it at will. But if I can acquire beliefs at will, I must know that I am able to do this; and could I know that I was capable of this feat, if with regard to every feat of this kind which I had performed I necessarily had to believe that it had not taken place? (1970, 108)

Williams suggests that the answer to his rhetorical question is clear: ‘no’. It follows that such a person would not know that he or she is capable of acquiring beliefs at will and, hence, that such a person could not acquire beliefs at will. Therefore, Williams suggests, direct doxastic voluntarism is not merely false; rather, it is conceptually impossible (1970, 108).

Critics, however, argue that The Classic Argument has at least three major flaws. First, they suggest that there is a difference between belief acquisition and belief fixation. It is at least possible that at one moment a person could will, in full consciousness, to acquire a belief concerning a proposition merely for practical reasons, regardless of the truth of the proposition. Once the person does this, however, he or she might perceive the evidence for the proposition differently than before—such that he or she comes to perceive some fact, which previously seemed like terrible evidence for the proposition, as conclusive evidence for the proposition. In that case, the person’s belief would be fixed for theoretical reasons that are concerned with the truth of the proposition. Thus, the person might perceive his or her previous position as a kind of doxastic blindness, in which he or she failed to recognize the evidence for what it really is—namely, conclusive evidence. Hence, it is possible that at one moment a person could will, in full consciousness, to acquire a belief regardless of the truth of the proposition, and in the next moment regard his or her belief as a belief and believe that his or her belief was acquired at will just a moment ago. Therefore, critics conclude, The Classic Argument fails (cf. Johnston 1995, 438; Winters 1979, 253; see also Scott-Kakures 1994).

Second, they contend that a person could know, in general, that he or she had the ability to acquire beliefs at will without knowing that any particular belief was acquired at will. Jonathan Bennett illustrates the objection nicely with a thought experiment involving a group of fictional characters called ‘Credamites’. According to Bennett’s tale,

Credam is a community each of whose members can be immediately induced to acquire beliefs. It doesn’t happen often, because they don’t often think: ‘I don’t believe that p, but it would be good if I did.’ Still, such thoughts come to them occasionally, and on some of those occasions the person succumbs to temptation and will himself to have the desired belief. […] When a Credamite gets a belief in this way, he forgets that this is how he came by it. The belief is always one that he has entertained and has thought to have some evidence in its favour; though in the past he has rated the counter-evidence more highly, he could sanely have inclined the other way. When he wills himself to believe, that is what happens: he wills himself to find the other side more probable. After succeeding, he forgets that he willed himself to do it. (1990, 93)

To understand, more clearly, how Bennett’s Credamites can exercise direct voluntary control over their beliefs, consider a particular (hypothetical) case. Suppose there is a Credamite who is very ill and who finds it possible, but less than likely, that she will recover from her illness. Nonetheless, her chances of recovery will increase if she believes that she will recover from her illness, and she is aware of this connection between her beliefs and her illness. So, as any rational Credamite might, she simply chooses to believe that she will recover and, consequently, forgets that she willed herself to form the belief. Thus, Bennett’s thought experiment suggests that, contrary to what Williams claims, there could be beings who have the ability to form beliefs at will, choose to exercise that ability on a specific occasion, and immediately forget that they exercised their ability on that occasion (see also Scott-Kakures 1994, 83; Winters 1979, 255). Therefore, he and sympathetic critics conclude, The Classic Argument fails.

Third, they contend that a person could possess an ability without knowing that he or she possesses the ability (see, for example, Winters 1979, 255). Thus, a person could have the ability to acquire beliefs at will even if it were impossible for him or her to know that he or she had this kind of ability. Therefore, the critics conclude, The Classic Argument fails.

ii. The Empirical Belief Argument

The Empirical Belief Argument against direct doxastic voluntarism runs as follows. A person can have an empirical belief concerning a proposition only if the proposition is true and the person’s perceptual organs are working correctly to cause the belief. For example, a woman can have an empirical belief, say, that the walls in her office are white only if the walls in her office are, in fact, white and her eyes are working correctly to cause the belief. In cases of believing empirical matters at will, “there would be no regular connection between the environment, the perceptions,” and the belief. Thus, believing at will would fail to satisfy the necessary conditions of ‘empirical belief’. Therefore, believing empirical matters at will is conceptually impossible (Williams 1970, 108).

Critics suggest that there are at least two problems with The Empirical Belief Argument. First, people believe all sorts of things about empirical matters that are not caused by the state of affairs obtaining and their perceptual organs functioning properly (cf. Bennett 1990, 94-6). For instance, one might believe that a tower in the distance is round because it seems round to one whose perceptual organs are functioning properly—even though at this distance square towers appear round. Hence, the argument seems to rely on a false premise. Second, even if the argument were sound, it would show only that it is impossible for people to will to believe some propositions. Therefore, the critics contend, even if The Empirical Belief Argument were sound, it would show only that certain beliefs are not within one’s voluntary control, not that direct doxastic voluntarism is false, let alone conceptually impossible.

The problem, however, might seem merely to be Williams’ suggestion that a person can have an empirical belief concerning a proposition only if the proposition is true. Supporters of The Empirical Belief Argument, however, could reject that claim and offer a revised version of the argument. In fact, Louis Pojman has offered such an argument, which runs as follows (Pojman 1999, 576-9). Acquiring a belief is typically a happening in which the world forces itself on a subject. A happening in which the world forces itself on a subject is not a thing the subject does or chooses. Therefore, acquiring a belief is not typically something a subject does or chooses.

Critics contend, however, that there are at least two problems with Pojman’s version of the argument. First, they contend that people do have some direct form of voluntary control over the beliefs they form in light of sensory experiences. For instance, someone might have a very strong sensory experience suggesting that there is an external world and, nonetheless, not judge that there is an external world. Rather, one might suspend judgment about the matter (see, for example, Descartes’s First Meditation). Similarly, someone like John Nash, the M.I.T. and Princeton professor portrayed in “A Beautiful Mind,” might have a very strong sensory experience as if he or she is in the presence of another person and, nevertheless, not judge that he or she is in the presence of another person. Rather, such a person might judge that he or she is alone and that the sensory experience is a hallucination. Thus, critics conclude, even if people cannot control the information provided to them by their senses, they can control whether they believe (so to speak) “what their senses tell them.” Second, they contend that, like Williams’ original version of the argument, Pojman’s revised version would demonstrate, at best, that it is impossible for people to will to believe some propositions. Thus, they conclude that it does not demonstrate that direct doxastic voluntarism is false, let alone conceptually impossible.

iii. The Intentional Acts Argument

Dion Scott-Kakures (1994) offers another kind of argument that attempts to show that direct doxastic voluntarism is conceptually impossible. The argument uses an analysis of the nature of intentional acts to suggest that direct doxastic voluntarism is impossible. It goes as follows. If direct doxastic voluntarism is true, then believing is an act that is under people’s direct voluntary control. Moreover, any act that is under a person’s direct voluntary control is guided and monitored by an intention. For instance, steering one’s car through a left turn is an act that is under one’s direct voluntary control, and it is an act that is guided and monitored by one’s intention to turn left. Acquiring a belief, however, is different. It is, by its very nature, not the kind of act that can be guided and monitored by an intention. Thus, acquiring a belief is not under a person’s direct voluntary control. Therefore, direct doxastic voluntarism is conceptually impossible.

The critical premise in the argument is the claim that acquiring a belief is, by its very nature, not the kind of act that can be guided and monitored by an intention. Why, though, should we think that that claim is true? Suppose someone wants to form a belief at will. Let’s take a particular case. Suppose Dave wants to will himself to believe that God exists. The problem, according to Scott-Kakures, is that Dave has a certain perspective on the world, which includes his other beliefs, his desires, etc., and that perspective is incompatible with Dave believing that God exists. Thus, so long as Dave maintains that perspective, he cannot form an intention that could succeed in guiding and monitoring an act of believing that God exists. This problem, however, is not unique to Dave. Any person who wants to will himself or herself to believe a proposition faces the same obstacle. The perspective the person has of the world will not allow him or her to form an intention that is compatible with the belief he or she wants to form. Therefore, as long as the person maintains that perspective, it is simply not possible for him or her to form an intention that could guide and monitor the act of willing himself or herself to believe. Hence, acquiring a belief is, by its very nature, not the kind of act that can be guided and monitored by an intention.

Critics, however, suggest that the perspective of a person who attempts to believe at will might be compatible with the proposition he or she attempts to believe (Radcliffe 1997). They argue as follows. Consider Dave’s case. Because of his isolated background, he may be ignorant both of the standard arguments for and of the standard arguments against the existence of God. Nonetheless, he might understand the proposition God exists and desire to believe it for pragmatic purposes. For instance, reading Pascal’s Pensées may have persuaded him that the potential benefits of believing that God exists outweigh the potential detriments of not believing that God exists. From this perspective, he might form the intention to acquire at will the belief that God exists; however, nothing in the perspective that generates his intention is incompatible with believing that God exists. Hence, the perspective from which Dave generates his intention to believe that God exists is not necessarily incompatible with believing that God exists. Moreover, Dave’s case is not unique. Other people can find themselves in similar circumstances. Thus, at the moment a person attempts to acquire a belief at will, his or her perspective might be compatible with the proposition he or she wants to believe. Hence, the critics conclude, Scott-Kakures’s argument fails to show that direct doxastic voluntarism is conceptually impossible.

iv. The Contingent Inability Argument

Some philosophers, such as Edwin Curley, contend that regardless of whether direct doxastic voluntarism is conceptually impossible, it is false. Curley, specifically, argues as follows (1975, 178). If direct doxastic voluntarism is true, then people should be able to believe at will at least those propositions for which the evidence is not compelling. Let us test the doctrine empirically. Consider the recent meteorological conditions on Jupiter. We do not have compelling evidence either confirming or disconfirming the proposition it rained three hours ago on Jupiter, so it is a proposition about which we ought to be able to form a belief at will. Curley, however, suggests that he cannot form a belief about the proposition and suggests that his readers cannot either, unless they have strikingly different minds than his. Thus, he suggests, there is at least one (and probably many other) clear counterexamples to the claim that people have direct voluntary control over their beliefs. Therefore, he suggests, regardless of whether direct doxastic voluntarism is conceptually impossible, it is false.

Critics could grant that the argument seems to succeed in showing that there are propositions with respect to which we stand, like Buridan’s Ass, unable to decide between our options—in this case, affirming or denying a proposition. They would contend, however, that the argument’s success is limited and that it shows, at most, that there are some propositions with respect to which people do not have direct voluntary control (cf. Ryan 2003, 62-7). Therefore, they would conclude, the argument does not show that direct doxastic voluntarism is false.

b. Arguments for Direct Doxastic Voluntarism

i. The Observed Ability Argument

According to Carl Ginet, there are a number of cases in which people can will to believe certain propositions, provided that their evidence regarding the propositions is inconclusive (2001, 64-5; cf. Ryan 2003, 62-7). He offers a number of examples. Let us consider two. The first involves a person deciding to believe a proposition so that she can stop worrying. The scenario is as follows:

Before Sam left for his office this morning, Sue asked him to bring home from his office, on his way back, a particular book that she needs to use for preparing her lecture the next day. Later Sue wonders whether Sam will remember to bring the book. She recalls that he has sometimes, though not often, forgotten such things. But, given the thought that her continuing to wonder whether he’ll remember to bring the book will make her anxious all day, she decides to stop fretting and decides to believe that he will remember to bring the book she wanted.

The second involves a road trip taken by Ginet and his wife. He says,

We have started on a trip by car, and 50 miles from home my wife asks me if I locked the front door. I seem to remember that I did, but I don’t have a clear, detailed, confident memory impression of locking that door (and I am aware that my unclear, unconfident memory impressions have sometimes been mistaken). But, given the great inconvenience of turning back to make sure and the undesirability of worrying about it while continuing on, I decide to continue on and believe that I did lock it.

According to Ginet, a person decides to believe a proposition when he or she stakes something on the truth of the proposition, where to “stake something” on the truth of a proposition is understood as follows:

In deciding to perform an action, a person staked something on its being that case that a certain proposition, p, was true if and only if when deciding to perform the action, the person believed that performing the action was (all things considered) at least as good as other options open to him or her if and only if the proposition, p, was true.

Thus, on Ginet’s account, in deciding not to remind Sam to bring the book she needed, Sue staked something on the truth of the proposition Sam will bring the book and, hence, decided to believe that Sam would bring it. If Sue had decided to remind Sam to bring the book she needed, Sue would have staked something on the truth of the proposition Sam will not bring the book and, hence, decided to believe that Sam would not bring it. Thus, on Ginet’s account, Sue could have decided to believe that Sam will bring the book or that Sam will not bring the book. Similarly, in deciding to continue on his road trip without worrying, Ginet staked something on the truth of the proposition I locked the door and, hence, decided to believe that he locked the door. If Ginet had decided to pull off the road to call and ask his neighbor to check Ginet’s front door, then Ginet would have staked something on the truth of the proposition I did not lock the door and, hence, decided to believe that he did not lock the door. Thus, on Ginet’s account, he could have decided to believe that he did lock the door or that he did not lock the door. Therefore, direct doxastic voluntarism is a thesis that describes an observed ability that people have.

Ginet surely seems correct in noting that people have experiences in which they are (at least moderately) anxious about the truth of some proposition, when the evidence they have for the proposition is ambiguous, and they alleviate their anxiety by electing to act as if the proposition is true (or false). Thus, to rebut Ginet’s argument, critics would have to show that what people do in such cases is not decide to believe. But how else can such cases be described? If such people are not deciding to believe, then what are they deciding to do? A quick survey of the philosophical literature on the nature of belief suggests two possible lines of reply. First, someone might be able to rebut Ginet’s argument by showing that the kind of cases to which Ginet refers are cases not of believing a proposition, but of accepting a proposition. According to this line of rebuttal, the person understands the proposition and decides to act as if the proposition is true for some practical purpose, but (unlike in cases of believing) the person neither affirms nor denies the proposition (see, for example, Buckareff 2004; cf. Bratman 1999; Cohen 1989, 1992). Second, someone might be able to rebut Ginet’s argument by showing that the kind of cases to which he refers are cases not of believing a proposition, but of acting as if a proposition is true (see, for example, Alston 1989, 122-7; cf. Steup 2000). According to this second line of rebuttal, the person decides to act as if the proposition is true for some practical purpose(s), regardless of whether the person understands the proposition, and of whether he or she affirms, denies, or suspends judgment about the proposition. (For a related discussion of another of Ginet’s cases, see Nottelmann 2006.)

ii. The Action Analogy Argument

James Montmarquet offers the following, analogical argument for direct doxastic voluntarism (1986, 49). “[R]easons for action play a role in the determination of action which is analogous to the role played by reasons for thinki