
Theological Determinism

Theological determinism is the view that God determines every event that occurs in the history of the world. While there is much debate about which prominent historical figures were theological determinists, St. Augustine, Thomas Aquinas, John Calvin, and Gottfried Leibniz all seemed to espouse the view at least at certain points in their illustrious careers. Contemporary theological determinists also appeal to various biblical texts (for example Ephesians 1:11) and confessional creeds (for example the Westminster Confession of Faith) to support their view. While such arguments from authority carry significant weight within the traditions in which they are offered, another form of argument for theological determinism which has broader appeal draws on perfect being theology, or a kind of systematic thinking through the implications of the claim that God is—in the words of St. Anselm—quo maius cogitari non potest: that than which none greater can be conceived. The article below considers three such perfect being arguments for theological determinism, having to do with God’s knowledge of the future, providential governance of creation, and absolute independence. Implications of theological determinism for human freedom and divine responsibility are then discussed.

Reflection on theological determinism is important for academics and religious believers alike. Thinking through its implications offers the opportunity to consider the consistency of various sets of propositions: for example, that God has exhaustive foreknowledge but that some events are not determined, or that God determines all events but that humans are culpable for their own sin. Whether all events in the world—such as the birth or death of a child—are understood to be determined by God makes a significant difference to the attitudes and decisions religious believers adopt.

Table of Contents

  1. Defining Theological Determinism
  2. Arguments for Theological Determinism
    1. Divine Foreknowledge
    2. Divine Providence
    3. Divine Aseity
  3. Theological Determinism and Human Freedom
    1. Standard Compatibilism
    2. Theological-but-not-Natural Compatibilism
    3. Libertarianism
    4. Hard Determinism
  4. Theological Determinism and Divine Responsibility for Evil
    1. Theodicies and Defenses
    2. Causing vs. Permitting Evil
    3. God not a Moral Agent
    4. Sin not Blameworthy
    5. Skeptical Theism
  5. References and Further Reading

1. Defining Theological Determinism

As stated above, theological determinism is the view that God determines every event that occurs in the history of the world. What it means for God to determine an event may need some spelling out. Theological determinism is often associated with Calvinist or Reformed theology, and many proponents of Calvinism put their view in terms of the specificity of God’s decree, the efficaciousness of God’s will, or the extent of God’s providential control. John Feinberg, for example, describes his theological determinist position as the view that “God’s decree covers and controls all things” (2001, p. 504), while Paul Helm, another staunch theological determinist of the Calvinist variety, simply says that God’s providence is “extended to all that He has created” (1993, p. 39). The problem with such characterizations is that they are subject to multiple interpretations, some of which would be affirmed by theological indeterminists. For instance, a theological indeterminist might say that God’s providence extends to all events, or that even undetermined events are controlled or decreed by God in the sense that God foresees them and allows them to occur and realizes His purposes through them.

Thus one might think it better to define theological determinism in terms of divine causation, as Derk Pereboom does when he characterizes his view as “the position that God is the sufficient active cause of everything in creation, whether directly or by way of secondary causes” (2011, p. 262). The problem here is that some thinkers who seem committed to theological determinism deny that God should be considered a cause at all, at least not in the univocal sense in which creatures are causes. Herbert McCabe, for instance, maintains that when we act freely, we are not caused to act by anyone or anything other than ourselves (1987, p. 12). This is not because McCabe thinks that our free actions are undetermined by God, but because he thinks that God is not an “existent among others,” as created causes are (1987, p. 14). Thinkers like McCabe sometimes appeal to Thomas Aquinas’ doctrine of analogy in explaining their view. According to this doctrine, as Austin Farrer explains it, God’s providential activity cannot be conceived in causal terms without “degrad[ing] it to the creaturely level and plac[ing] it in the field of interacting causalities”—the results of which can only be “monstrosity and confusion” (1967, p. 62). If the views of such Thomists are to count as versions of theological determinism, then we need a way of spelling out the view in non-causal terms.

Perhaps, then, theological determinism will have to be defined in terms of God’s decree or will or control after all; but if so, these concepts will have to be defined so as to rule out indeterministic interpretations. We might, for instance, take Feinberg’s definition of an “unconditional” decree as one “based on nothing outside of God that move[s] him to choose one thing or another” (2001, p. 527) and then characterize theological determinism as the view that God unconditionally decrees every event that occurs in the history of the world. Such a view would exclude the possibility that God merely permits some events which He foresees will happen in some circumstances but which He does not Himself determine.

2. Arguments for Theological Determinism

a. Divine Foreknowledge

One of the divine attributes that have been appealed to in arguments for theological determinism is God’s knowledge of future events, or (simple) foreknowledge. Numerous biblical passages support the idea that God knows all that the future holds, including the free choices of human beings. For instance, the New Testament records Jesus’ prophecies that Judas will betray him and that Peter will deny him three times; and in the Hebrew Bible, the psalmist declares to God, “In your book were written all the days that were formed for me, when none of them as yet existed” (Psalm 139). Furthermore, if we assume that there are truths about the future to be known (a question discussed below), then exhaustive divine foreknowledge—that is, God’s foreknowledge of every future event—may be thought to follow from considerations of perfect being theology, since to not know some truth would seem to be an imperfection.

But if God knows the future exhaustively, theological determinists argue, then all future events must be determined, directly or indirectly, by God. The reasoning they offer in support of this argument can be considered in two steps. First is the claim that for a future event e to be known at some time t (say, “in the beginning”), e must be determined at or prior to t. Otherwise, there would be no truth about e to be known at t. The second claim is that if all future events are determined from the beginning of time, they must ultimately be so by God, since nothing else existed in the beginning to determine them. This is not to say that God’s knowledge is causal, in the sense that simply by knowing something, God is the cause of that thing. Rather, proponents of this line of reasoning contend that God cannot know a proposition unless it is true; and the proposition that some event will occur cannot be true at some time, unless that event is determined by that time; but then if God knows that some event will occur when nothing but God exists, it must be God Himself who ultimately determines the event’s occurrence.
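The two steps of this reasoning can be set out schematically. The following regimentation is a minimal sketch offered for illustration only; the operators ($K_t$ for God’s knowledge at a time, $T_t$ for truth at a time, $D_t$ for determination by a time, and $t_0$ for the beginning of creation) are introduced here and are not drawn from any particular author discussed in this article:

\[
\begin{aligned}
&(1)\quad K_{t}(e \text{ will occur}) \rightarrow T_{t}(e \text{ will occur}) && \text{(knowledge is factive)}\\
&(2)\quad T_{t}(e \text{ will occur}) \rightarrow D_{t}(e) && \text{(truth about the future requires determination by } t\text{)}\\
&(3)\quad D_{t_{0}}(e) \rightarrow \text{God determines } e && \text{(at } t_{0} \text{ nothing besides God exists)}\\
&\therefore\quad K_{t_{0}}(e \text{ will occur}) \rightarrow \text{God determines } e
\end{aligned}
\]

Roughly, the responses surveyed below can be located on this map: the Boethian reply denies that God’s knowledge carries a temporal index at all, the Ockhamist reply resists the inference from future truth to prior determination in (2), and open theism denies that the antecedent of (1) holds for every future event.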

Various responses to this sort of argument, for the incompatibility of divine foreknowledge and undetermined events, have been offered in the history of theology. One popular reply, first made by Boethius, is to deny that God knows anything at a time, since God exists outside of time altogether and knows all things from an eternal perspective. Another response, inspired by William of Ockham, is to grant the possibility of temporal divine knowledge but deny that what God foreknows must be determined by God. Alvin Plantinga (1986), for instance, has argued that creatures can have a sort of counterfactual power over God’s past knowledge, such that they make it the case that God knows what they themselves determine.

One final, more radical response to this argument is to deny that God has exhaustive foreknowledge. Defenders of open theism, who take this route, maintain that God leaves some future events undetermined, and so does not know exactly what the future holds. This is not to say that God is not omniscient. Rather, according to some open theists, propositions about undetermined events are simply not true (or false) before those events occur; or, according to others, there are true propositions about undetermined events, but they are in principle unknowable. Either way, open theists maintain that it is not a real limitation on God not to know what it is impossible to know, and so the denial of exhaustive foreknowledge is compatible with the affirmation that God is a supremely perfect being.

None of these responses to the argument for theological determinism just described are without their critics, however. In reply to the Boethian proposal, questions have been raised about the coherence of the claim that God—a personal being who acts—exists altogether outside of time. Furthermore, the appeal to divine eternality may not even solve the problem, since a parallel argument for theological determinism can be constructed on the assumption that God knows timelessly all that the future—considered from our perspective—holds. Likewise, in reply to the Ockhamist solution, some have questioned whether there is any real distinction between counterfactual power over God’s knowledge of the past and the power to bring about the past, the latter of which seems problematic if not impossible. Finally, many philosophers reject the open theist claim that there are propositions about the future that are neither true nor false, since such a claim requires the denial of the widely accepted principle of bivalence. And the alternative open theist view, that there are true propositions about the future that are unknowable by God, seems to call into question divine omniscience. Furthermore, many theists reject open theism as unorthodox and incompatible with divine sovereignty and providential care of creation—an issue to be discussed below.

b. Divine Providence

In addition to attributing to God exhaustive foreknowledge—or knowledge of all that will happen in the future—many theists are also committed to the claim (explicitly or implicitly, in virtue of other things they believe) that God has exhaustive knowledge of counterfactual conditionals, or facts about what would happen if circumstances were different than they in fact are. One famous biblical example of such knowledge is found in the Hebrew Bible, when David consults God about a rumor he has heard:

David said, “O Lord, the God of Israel, your servant has heard that Saul seeks to come to Keilah, to destroy the city on my account. And now, will Saul come down as your servant has heard?…” The Lord said, “He will come down.” Then David said, “Will the men of Keilah surrender me and my men into the hand of Saul?” The Lord said, “They will surrender you.” (1 Samuel 23:10-12, NRSV)

Upon hearing this news, David and his men decide to leave Keilah, and thus Saul, learning that David has left, never ends up going there himself, and the men of Keilah never have the chance to surrender David to him. Thus the truths that the Lord revealed to David are of the counterfactual sort: if David had remained in Keilah, Saul would have sought him there; and if Saul had sought him there, the men of Keilah would have surrendered David to Saul.

Some philosophers have argued that exhaustive divine knowledge of such counterfactual conditionals is essential to God’s perfection—in particular, to God’s sovereignty and providential care for creation—and that such knowledge entails theological determinism. The argument has centered on what are called “counterfactuals of freedom,” or those counterfactual conditionals about what a possible created person (who may or may not ever exist) would freely do in a possible circumstance (which may or may not ever occur). The free actions in question are supposed to be libertarian, or those that are not determined, either by a prior state of the world or by God. Luis de Molina considered knowledge of such counterfactuals to be part of God’s scientia media, or middle knowledge, standing in between God’s “natural knowledge,” or knowledge of God’s own nature and the necessary truths that follow from it, and “free knowledge,” or knowledge of God’s will and the contingent truths that follow from it. Molina claimed that, like the propositions included in God’s natural knowledge, counterfactuals of freedom are pre-volitional, or (logically) prior to, and thus independent of, God’s will; though like the propositions included in God’s free knowledge, they are contingent truths.
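Molina’s trichotomy can be summarized schematically. In the now-standard notation of conditional logic (a modern convenience rather than Molina’s own vocabulary), a counterfactual of freedom has the form

\[
C \mathrel{\Box\!\!\rightarrow} (s \text{ freely does } A),
\]

read: if circumstance $C$ were to obtain, created person $s$ would freely do $A$. The three kinds of knowledge just described can then be displayed as follows:

\[
\begin{array}{lll}
\textbf{knowledge} & \textbf{its truths are} & \textbf{relative to God's will}\\
\text{natural} & \text{necessary} & \text{pre-volitional}\\
\text{middle} & \text{contingent} & \text{pre-volitional}\\
\text{free} & \text{contingent} & \text{post-volitional}
\end{array}
\]

The middle row combines a feature of each of the other two, and it is this combination, contingent truths settled prior to and independently of God’s will, that the argument and objections below target.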

One way to reconstruct the line of reasoning from divine knowledge of counterfactual conditionals to theological determinism is thus as follows:

  1. If there are any events in the history of the world that are not determined by God, then—contra Molina—God cannot have exhaustive knowledge of counterfactual conditionals.
  2. If God lacks exhaustive knowledge of counterfactual conditionals, then God takes risks with creation.
  3. A God who takes risks with creation is not perfect.
  4. Therefore, since God is perfect, God must determine every event in the history of the world.

Robert Adams has argued in favor of the first premise, focusing in particular on the possibility of God’s knowledge of counterfactuals of freedom. Adams contends that for God to know a proposition, it must have a truth-value; but counterfactuals of freedom lack truth-values, since there is nothing that could ground their truth. While the consequent of a conditional may follow from the antecedent by logical or causal necessity, neither sort of necessity can ground the truth of a conditional about how a person would act if placed in a particular circumstance, if that action is undetermined. And features of a person that do not necessitate her action—such as her particular beliefs and desires—cannot ground the truth of counterfactual conditionals about her action, precisely because such features are non-necessitating. Adams suggests that divine foreknowledge may not face the same grounding problem as middle knowledge, since categorical predictions about undetermined events “can be true by corresponding to the actual occurrence of the event that they predict” (1987, p. 80). But in the case of counterfactual conditionals, there may never be actual events to which the propositions correspond.

Supposing Adams is right that middle knowledge is impossible, what would divine providence look like without it, on the assumption that God does not determine some events in the world? One might think that all God really needs to providentially govern the world is foreknowledge. Yet William Hasker has argued that “foreknowledge without middle knowledge—simple foreknowledge—does not offer the benefits for the doctrine of providence that its adherents have sought to derive from it” (1989, p. 19). His reasoning, in brief, is that foreknowledge is about what will actually happen in the world God has created, and so will be useless to God in deciding what to create to begin with or how to arrange events throughout history for the benefit of creatures. Consider, for example, the biblical case discussed above, in which David consults God to determine the best strategy for avoiding capture by Saul. If God had only simple foreknowledge and not middle knowledge, then God could only tell David what he would in fact do, and what Saul’s response would in fact be, and not what better or worse outcomes might result from alternative courses of action. Likewise—and perhaps more worrisome—before creating the world, God could not know without middle knowledge whether, if He gave creatures the libertarian freedom to decide whether to enter a loving relationship with Him and their fellow creatures, any of them would indeed choose to do so. Thus, creating a world with such indeterministic events is risky business for God. In contrast, the view on which God determines all events of the world can be considered a risk-free view of providence.

While Hasker goes on to defend the risky view of providence, others have criticized it as inconsistent with divine perfection. Edwin Curley (2003) has argued that it involves a kind of recklessness inconsistent with the providential wisdom and concern for creatures that is supposed to be characteristic of a perfect Creator. Focusing in particular on indeterminism at the level of human action, Curley points out that a God who gave creatures libertarian freedom without knowing how they would use it would run the risk of their destroying themselves and thwarting God’s purposes for creation. Thomas Flint similarly argues for the superiority of the risk-free view of providence by means of a parental analogy. Imagine, he says, that a parent has two options for her child: under Option One, the child may struggle and seem to be in danger, but the parent will “know with certainty that she will freely develop into a good and happy human being who leads a full and satisfying life”; under Option Two, in contrast, the parent will have no idea how things will turn out for the child, and can only hope for the best. Flint says he would, without hesitation, choose Option One, and that the claim that Option Two is preferable is “just short of absurd” (1998, p. 106). Likewise, he suggests, the claim that a risk-taking God is superior to, or even on par with, a risk-avoiding one is incredible.

If the above line of reasoning is correct, then it follows that a supremely perfect God would not create a world in which events were left undetermined. However, the argument has been questioned on a number of points. With respect to Adams’ argument against the possibility of middle knowledge, at least two assumptions are open to doubt. First, it is unclear whether, for a proposition to have truth-value, there must be something that grounds its truth. Francisco Suárez, an early follower of Molina, seemed to question this claim. Richard Gaskin has as well, maintaining that there is nothing that grounds the truth of any proposition, and that to suppose otherwise “is to slide into a substantial and implausible correspondence theory of truth” (1993, pp. 424-425).

Others, granting that true propositions may need grounding, have proposed possible grounds for counterfactuals of freedom. Alvin Plantinga, for instance, has suggested a parallel between counterfactuals of freedom and propositions about past events. He writes: “Suppose… that yesterday I freely performed some action A. What was or is it that grounded or founded my doing so?… Perhaps you will say that what grounds [the truth of the proposition that I did A] is just that in fact I did A” (1985, p. 378). Plantinga responds that the same kind of answer is available in the case of counterfactuals of freedom; for what grounds such truths is the fact that certain people (actual or possible) are such that if they were put in certain circumstances, they would do certain things.

Other theists who accept that God lacks exhaustive knowledge of counterfactual conditionals question whether this entails that God lacks the sort of providential control over creation essential to His perfection. David Hunt has argued, contra Hasker, that simple foreknowledge can in fact give God a “providential advantage,” allowing Him to “secure results” that He would not be able to secure without such knowledge (2009). If with simple foreknowledge God can thus ensure His central purposes for creation, perhaps the charge that theological indeterminism entails risk-taking with respect to less significant outcomes will not have so much sting.

Alternatively, one may argue with open theists that the risky view of providence involves divine virtues such as experimentation, collaboration, responsiveness, and vulnerability, and that it is the only way to secure the great metaphysical and moral value of creatures with libertarian freedom. One way to put this latter point is in terms of Flint’s parental analogy. After noting that he would of course choose (risk-free) Option One if he could, Flint says, “the fact that we don’t have a choice here, that we as parents are stuck with [risky] Option Two, is one of the things that is especially frustrating (and even terrifying) about being a parent” (1998, p. 106). An open theist convinced of the impossibility of middle knowledge might respond that this must similarly be what is especially frustrating (and even terrifying!) about being God—that Option One is not available, so that if God wants to create persons with libertarian freedom, He must opt for Option Two. But just as a parent still chooses to give birth to a child, so God still chooses to bring into being such creatures, because of their great value.

c. Divine Aseity

A third argument for theological determinism focuses on the divine attribute of aseity. The word aseity comes from the Latin phrase a se—“from itself”—and refers to God’s absolute independence from anything distinct from Himself. While some have taken divine aseity to be the most fundamental feature of our conception of God, others have suggested that it follows from God’s perfection, since to be dependent on another would seem to be an imperfection (Brower 2011). Closely related to the concept of divine aseity is the medieval conception of God as pure act (actus purus). What medieval thinkers meant by saying God is pure act is that He is always complete in Himself. In contrast, all created beings have potentiality and passivity, and can be changed or acted on by others.

On the basis of considerations of God’s aseity and pure actuality, Reginald Garrigou-Lagrange has offered an argument for theological determinism. For, he says, those who maintain that there are some events that God does not determine—for instance, human choices—must posit “a passivity in the pure Act. If the divine causality is not predetermining with regard to our choice... the divine knowledge is fatally determined by it. To wish to limit the universal causality and absolute independence of God, necessarily brings one to place a passivity in Him” (1936, p. 538). To illustrate his point, Garrigou-Lagrange asks us to imagine that when God gives two men grace to fight temptation, one cooperates with this grace while the other does not, but that the difference between their responses is not determined by God. Supposing that God can foreknow the two men’s responses to His grace, theological indeterminists must admit that “the foreknowledge is passive,” just as a person’s knowledge is passive when she is a mere spectator to some event (1936, pp. 538-539). What Garrigou-Lagrange seems to mean by this suggestive phrasing is that God’s intellect would be passive, in the sense that in coming to know what the two men will do, God’s intellect would be acted upon by something outside of it. Garrigou-Lagrange concludes:

God is either determining or determined, there is no other alternative. His knowledge of free conditional futures is measured by things, or else it measures them by reason of the accompanying decree of the divine will. Our salutary choices, as such, in the intimacy of their free determination, depend upon God, or it is He, the sovereignly independent pure Act, who depends upon us. (1936, p. 546)

In response to this argument for theological determinism, Eleonore Stump contends that the dilemma presented by Garrigou-Lagrange—that God either determines or is determined—is a false one, if determination is taken to be equivalent to causation. She offers examples of both divine and human knowledge in which the knower neither determines what she knows, nor is determined by it. On the human side, a person might know that an animal is a substance, but the human obviously does not determine this truth. And (on Thomas Aquinas’ view of human cognition—which Garrigou-Lagrange would presumably accept) neither is the human rendered passive, or determined in her knowledge of this truth, since the human intellect’s operations are active in the process of deriving it, and nothing acts on the intellect “with causal efficacy” in this process. Likewise, on the divine side, God presumably knows of His own existence without determining that He exists; but neither, presumably, is God determined in His knowledge of this truth (2003, pp. 120-121).

One thing to note about the examples offered by Stump—of a human knowing that an animal is a substance, or of God knowing that He exists—is that the truths known are in both cases necessary. One question that a theological determinist might raise is whether, when it comes to knowledge of contingent events, the indeterminist can likewise maintain that the knower neither determines nor is determined by what she knows. While our coming to know necessary truths on the basis of, say, complex mathematical reasoning would seem to be quite an active process, our coming to know contingent truths on the basis of some very clear and distinct perception—say, that we have hands—would seem to be more passive. If this is right, then the theological determinist might maintain that if God’s knowledge of undetermined future events is quasi-perceptual, then God might indeed be rendered passive by such knowledge. Furthermore, even if the theological indeterminist can defend a conception of divine foreknowledge on which God is not determined by some of what He knows, in the sense that He is not caused to know some truths, it is very hard to see how He would not in some sense be dependent on something outside of Himself for that knowledge. The question for theological indeterminists is whether this sense of dependency is compatible with a conception of God as supremely perfect.

3. Theological Determinism and Human Freedom

So far we have considered arguments that theological determinists have put forward in support of their view of divine providence, as well as some objections raised to these arguments. Critics of theological determinism not only object to the positive reasons offered in favor of the view, but also to certain negative implications. One major issue theological determinists must grapple with is how there can be any creaturely freedom in a world in which all events are determined by God. The claim that at least some creatures are both free and responsible for their actions is a central part of traditional Western theisms—Judaism, Christianity, and Islam—and most contemporary theological determinists affirm this claim, though as we will see, some within these traditions dissent from it. Below, several theological deterministic conceptions of human freedom are discussed.

a. Standard Compatibilism

Perhaps the most common conception of free will espoused by theological determinists is the standard compatibilist one: that determinism of any sort—whether theological (that is, determination by God) or natural (that is, determination by antecedent events in accordance with the laws of nature)—does not automatically rule out free will. Theological determinists espousing this view often appeal to secular theories of freedom and arguments for the compatibility of such freedom with natural determinism to support their claim that theological determinism is also compatible with free will. For instance, according to the classic compatibilist position defended by Thomas Hobbes, a person is free to the extent that she finds no impediment to doing what she wants or wills to do.

Contemporary compatibilists, recognizing the limitations of this position—for example that it allows for actions resulting from brainwashing to be free—have offered various refinements, such as that, in addition to being able to do what one wants or wills to do, one must act with sensitivity to certain rational considerations (the reasons-responsive view), or one must have the will one wants to have (the hierarchical model). One proponent of the latter view is Lynne Rudder Baker. According to Baker, “Person S has compatibilist free will for a choice or action if:

    1. S wills X,
    2. S wants to will X,
    3. S wills X because she wants to will X, and
    4. S would still have willed X even if she (herself) had known the provenance of her wanting to will X.” (2003, p. 467)

Baker notes that her account is compatibilist in the sense that “a person S’s having free will with respect to an action (or choice) A is compatible with A’s being caused ultimately by factors outside of S’s control.” She makes no distinction, with respect to the question of an agent’s freedom, whether the agent’s action is caused “by God or by natural events” (2003, pp. 460-461). More generally, theological determinists point out that on all such contemporary compatibilist accounts of free will, divine determination does not automatically rule out human freedom, since none of these accounts specifies what must be true of the first causes of human volition and action. This lack of specificity, however, is precisely the problem that incompatibilists—those who hold that determinism of any sort is incompatible with free will—find with the compatibilist position. They reason that if either God or events of the distant past are the ultimate causes of our actions, then our actions are not under our control. The debate between compatibilists and incompatibilists has a long history, and is ongoing. See “Free Will” for a more in-depth summary.

b. Theological-but-not-Natural Compatibilism

While many theological determinists take the standard compatibilist line, some differentiate between natural and theological determinism, and maintain that only the latter is compatible with free will. Defenders of this position, who might be called “theological-but-not-natural-compatibilists,” appeal to a number of differences between theological and natural determinism to support their view. Hugh McCann, for instance, argues that in contrast to the way in which events that we bring about come to pass, “the manner in which our actions come to pass is not one in which God acts upon us or does anything to us” (2005, p. 145). McCann maintains that God’s causing our actions is like an author’s creating the characters of a novel. He writes: “The author of a novel never makes her creatures do something; she only makes them doing it. It is the same between us and God” (2005, p. 146).

McCann should not be interpreted as denying theological determinism here, that is, as saying that God does not determine what creatures do, but only what they are. Rather, he means that, unlike creatures who can only make other creatures do things, God has the unique ability to make creatures themselves. Rather than first bringing creatures into being, and then making them do certain things, God by one and the same act makes creatures doing the things they do. McCann contends that because of such differences between divine and creaturely causation, theological determinism “does not endanger our freedom” as natural determinism does (2005, p. 146).

However, theological compatibilism, like its natural counterpart, has been criticized by standard incompatibilists. One of the most influential arguments for the incompatibility of causal determinism and human freedom—the Consequence argument—relies on the premise that, in a deterministic world, the ultimate causes of our actions are events of the distant past. The reason why this is considered a problem, though, is simply that such causes lie outside of our control. So if the Consequence argument establishes the incompatibility of free will and natural determinism, a parallel argument appealing to the fact that God’s will, taken as a determining cause, likewise lies outside of our control should establish the incompatibility of free will and theological determinism. To put the point differently, it seems that those who hold that God’s determination of our actions is both causal, and compatible with human freedom, ought to be standard compatibilists about determinism and free will, rather than theological-but-not-natural compatibilists, since the differentiating features of natural determining causes pose no additional threat to free will, once one accepts that God’s determining causation is compatible with human freedom.
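The parallel just described can be made explicit. One standard regimentation of the Consequence argument, following Peter van Inwagen’s modal formulation (the symbolization below is a sketch, not a quotation), lets $P_{0}$ describe the total state of the world in the distant past, $L$ the conjunction of the laws of nature, and $A$ any truth about how an agent acts, and abbreviates “$p$, and no one has, or ever had, any choice about whether $p$” as $Np$. With rule $\alpha$ (from $\Box p$ infer $Np$) and rule $\beta$ (from $Np$ and $N(p \rightarrow q)$ infer $Nq$):

\[
\begin{aligned}
&(1)\quad \Box((P_{0} \wedge L) \rightarrow A) && \text{(natural determinism)}\\
&(2)\quad \Box(P_{0} \rightarrow (L \rightarrow A)) && \text{(from (1))}\\
&(3)\quad N(P_{0} \rightarrow (L \rightarrow A)) && \text{(from (2), by } \alpha\text{)}\\
&(4)\quad NP_{0} && \text{(no choice about the past)}\\
&(5)\quad N(L \rightarrow A) && \text{(from (3), (4), by } \beta\text{)}\\
&(6)\quad NL && \text{(no choice about the laws)}\\
&(7)\quad NA && \text{(from (5), (6), by } \beta\text{)}
\end{aligned}
\]

The theological parallel substitutes for $P_{0} \wedge L$ a proposition $W$ stating what God has unconditionally willed, and replaces (4) and (6) with the single premise $NW$: no creature has, or ever had, any choice about what God wills. The remaining steps go through unchanged, which is why accepting the argument against natural compatibilism while resisting its theological analogue appears unstable.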

c. Libertarianism

While the theological determinists described above, who maintain that theological determinism is compatible with human freedom while natural determinism is not, suggest various differences between divine and natural determination, they still recognize God’s determination as a species of causation. As mentioned already, however, some who seem to espouse theological determinism deny that God should be considered a cause at all, at least not in the univocal sense in which creatures are causes. Writing in this tradition, Michael Hoonhout applauds Aquinas for intentionally discussing the doctrine of divine providence twice in his Summa Theologiae—first in the context of “the essence of God” and then in the context of “the nature of creation”—in recognition of “two radically different orders of intelligibility.” He maintains that “double affirmations which seemingly contradict each other are to be expected” if we respect the integrity of each order (2002, pp. 4-6).

The seemingly contradictory “double affirmations” to which Hoonhout refers are that God determines everything that occurs in the world, and that humans have a non-deterministic form of freedom. Thus one finds some theologians who seem clearly committed to theological determinism when considering the order of the Creator, speaking of the possibility of libertarian human freedom in the context of the order of creation. Kathryn Tanner, for instance, maintains a view of divine causation as absolute in terms of both its range (“all inclusive or universally extensive”) and its efficacy (“cannot be hindered, diverted, or otherwise redirected by creatures”). Tanner claims that since “God does not bring about the human agent’s choice by intervening in the created order as some sort of supernatural cause,” one can “still affirm a very strong libertarian version of the human being’s freedom” (1994, pp. 113, 125, 126).

The trouble with such a view, however, is that it seems to face a dilemma. On the one hand, if the way in which God determines events in the world is really nothing like the way creaturely causes do, such that even fundamental concepts like conditional necessity do not apply to the relationship between God’s causal activity and its effects, then, as Thomas Tracy points out (1994), analogy collapses into equivocation, and we are left without any idea of what theological determinism is supposed to mean. On the other hand, if such fundamental concepts do apply to divine causation in something like the way they apply to creaturely causation, then arguments against the compatibility of theological determinism and human freedom must be considered and responded to, rather than simply dismissed as involving a confusion of categories.

d. Hard Determinism

One final position that theological determinists may adopt on the issue of human freedom is the standard incompatibilist one, admitting that determinism of any sort is incompatible with free will and thus that there can be no creaturely freedom. This view, called hard theological determinism, has historically won few adherents, in part because of the centrality of the belief in human freedom to so much civic and religious life. On the civic side, the assumption of free will has been thought to underwrite reactive attitudes such as resentment, indignation, gratitude, and love, and the moral and legal practices of praise and blame, reward and punishment. On the religious side, human freedom has seemed crucial to the logic of divine commandment and judgment, and to such reactive attitudes and practices as guilt, repentance, and forgiveness.

However, some hard theological determinists have challenged such assumptions about the centrality of free will. Derk Pereboom, for instance, has argued that, while theological determinism is not compatible with the basic sense of desert (that is, deserving praise or blame simply because of the moral status of what one has done), it is compatible with judgments of value (for example, that behavior is good or bad), as well as the reactive attitudes and practices which are most central to traditional theism, and which might seem to presuppose basic desert. For instance, a person without free will might still recognize that she has failed to act according to the principles she believes she should live by, and so experience guilt; or, she might resolve to no longer hold another’s past behavior as a reason to remain at odds with him, and so forgive. Pereboom suggests that God’s commanding and judging, rewarding and punishing may serve the moral formation of creatures even without free will, and so may be justified without it. However, some critics have questioned whether such religiously significant attitudes and practices as repentance and the resolution to amend one’s life can really be secured without a sense of either basic desert or the sort of agential control which hard theological determinists deny. Furthermore, even if hard theological determinism is compatible with such attitudes and practices central to theistic traditions, it is another question whether the denial of free will and moral responsibility in the basic-desert sense is itself compatible with the teachings of these religions. One question that remains for hard Christian determinists, for example, is how to make sense of the many New Testament passages that discuss the freedom found in Christ (cf. Galatians 5:1, 2 Corinthians 3:17).

4. Theological Determinism and Divine Responsibility for Evil

Besides explaining how, on their view, humans can be free and responsible for their own actions (or how the denial of human freedom is compatible with traditional theism), theological determinists must also face questions about God’s moral responsibility for the evil in the world that, on their view, He determines. As with the former issue, their responses to the latter are many and varied. Below a number of distinct responses are discussed.

a. Theodicies and Defenses

Some theists attempt to offer a theodicy, or plausible explanation of why God has created a world in which evil exists. Others, uncertain of what God’s actual reasons are, propose instead a defense, or possible explanation. One historic and popular explanation of why evil exists in a world created by God is the free will defense, first proposed by St. Augustine and developed by Alvin Plantinga (1974). According to this defense, the evil we witness in God’s creation is not in fact God’s doing at all, but the result of humans’ misuse of their own freedom: God created humans to live in harmony with Himself and each other, but they freely chose to rebel against God and to sin against one another. Some proponents of this defense extend it to explain natural as well as moral evil, suggesting that all suffering in the world is ultimately due to the sinful choices of fallen creatures, some of which lie behind the destructive natural forces of the world. However, the free will defense seems to assume that it was impossible for God both to create free persons and to determine all of their actions, such that they never do evil. In other words, it seems to assume an indeterministic conception of human freedom incompatible with theological determinism. Thus, the traditional free will defense would not seem to be an option for theological determinists.

Some compatibilists have argued, however, that the free will defense need not presuppose an indeterministic conception of human freedom. Jason Turner, for instance, suggests that if “free actions can be determined but must not be dependent on another’s will”—a view he calls “independent compatibilism”—then the free will defense may still be open to theological determinists (2013, p. 131). On independent compatibilism, whether God could create a world with free persons who were determined in their actions and never committed moral evil depends on whether God would create such a world because the persons never committed evil, or for some other reason. Supposing that the reason God would create a world in which persons who were determined in their actions never committed moral evil was indeed because they never committed evil, their actions would be dependent on God’s will, and so would not be free.

While there thus may be some versions of the free will defense open to the theological determinist, such versions require metaphysical assumptions that may seem implausible—for instance, that events in the causal history of an agent’s action occurring before she was even born may determine whether her (determined) actions are free or not, and that whether an event depends on God’s will in a freedom-undermining way depends on what God’s reasons were for causing it. Still, theological determinists may argue that even the traditional indeterministic version of the free will defense is implausible, and that more plausible explanations of evil are available. John Hick, for instance, contends that, given the modern understanding of evolutionary theory, the claim that humans were created perfect and fell from grace is an incredible one. Inspired by the writings of St. Irenaeus, Hick proposes instead the soul-making theodicy, according to which God created imperfect creatures in a world in which they are prone to suffering and sin. He argues that it is not the freedom of creatures, per se, which is so valuable as to outweigh these evils, but rather their development, morally and spiritually, through struggle, suffering, trial and temptation, and the virtuous characters which result from “the investment of costly personal effort” (2010, p. 256). While Hick is himself committed to theological indeterminism, his basic theodicy is compatible with theological determinism as well.

Two other theodicies that theological determinists have adopted likewise focus on the value of development or process. Eleonore Stump has suggested that a world of sin and suffering is “most conducive” to bringing about both humans’ willingness to receive the gift of salvation from God and also their subsequent sanctification (1985, p. 409). While Stump holds that human freedom is incompatible with theological (and natural) determinism, and that receiving the gift of salvation and undergoing the process of sanctification both require free will, Derk Pereboom contends that “no feature of [her] account demands libertarian freedom, nor even a notion of free will of the sort required for moral responsibility… It is sufficient that this change [the turning to God on the occasion of suffering] is seriously valuable, and that it results in more intimate relationship with God” (2015). Marilyn McCord Adams, likewise, has proposed that participating in evil might facilitate creatures’ identification with Christ and union with God (1999). Such work on theodicy has drawn on specifically Christian conceptions of God and the human good, and advanced them in innovative ways. Yet these proposals raise many questions about the value of such processes—developing moral character, becoming sanctified, or coming to identify with God—as well as how that value compares with the disvalue of the sin and suffering that make the processes possible.

b. Causing vs. Permitting Evil

Even supposing the disvalue of all sin and suffering in the world is outweighed by the value of the moral development of creatures, another concern critics have raised is whether it is morally permissible for God to cause humans to sin in order to realize some good. Peter Byrne, in response to Paul Helm’s deterministic theodicy, asks:

How does it square with the Pauline injunction that one should not do evil that good may come of it? The place of that injunction in traditional moral theology is to set limits to how far we can pursue good by way of doing evil as its precondition. There are some acts that are so heinous that one may not do them for the sake of the bringing about a greater good…. One may not murder that good may come of it. But Helm’s God has precisely planned, purposed, and necessitated acts of murder and instances of other kinds of horrendous wickedness so that good may come of them. (2008, p. 200)

In response, some theological determinists have argued that the difference between God’s causing humans to commit sin for the purpose of realizing some good (the theological determinist’s view), and knowing that humans would sin if they were created in particular circumstances and choosing to create them in those circumstances anyway, for the purpose of realizing some good (the Molinist view), is morally insignificant. Indeed, theological determinists contend, even the open theist’s view, according to which God allows horrendous evil that He could prevent—presumably for the purpose of realizing some good—raises similar questions about God’s moral responsibility for evil. So, they maintain, this concern about divine responsibility should not be a reason to reject theological determinism in favor of such competing views of divine providence.

c. God not a Moral Agent

While some theological determinists offer theodicies or defenses in an attempt to demonstrate that there is some actual or possible reason for evil which morally justifies God in creating it, others eschew such explanations altogether. Some argue that they are unnecessary, on the grounds that God cannot, in principle, be morally responsible for anything, since He is above or beyond morality altogether. One line of argument for this conclusion is based on the idea that morality depends on God’s will and command, and that God is not Himself subject to the commandments that He establishes. Morality, on this view, only applies to creatures, over which God has ultimate moral authority. One problem facing such a divine command theory of morality is the familiar Euthyphro problem—that if God’s commandments determine the content of morality, then morality is arbitrary, such that what is right might have been wrong and vice versa if God had willed that it be so. Another implication of this argument that many theists find difficult to accept is that, if God cannot in principle be morally blameworthy since He is above morality, then He cannot be morally praiseworthy either.

d. Sin not Blameworthy

An alternative response to the question of how God could not be blameworthy for causing humans to sin is the hard theological determinist one. As discussed above, hard theological determinists maintain that, since God causes all events in creation, humans are not free or morally responsible in the basic desert sense. As Derk Pereboom notes, it follows on this view that since humans are not blameworthy for their actions, God is not the cause of blameworthy actions. Thus, God’s causing human sin is more similar to His causing natural evils, such as animal predation and its associated sufferings, than it is to His causing moral evils, traditionally understood. Since most theists agree that God has control over all such natural forces, the problem of natural evil poses no more difficulty for the theological determinist than for the theological indeterminist. However, this hard deterministic response to the problem of moral evil is compatible with the offering of a theodicy or defense particular to human sin, as well as with the appeal to skeptical theism discussed below.

e. Skeptical Theism

One final response to the problem of evil that theological determinists make is to admit that they are unable to think of reasons that would justify God in creating a world with the sort and extent of evil that we see, but nevertheless to maintain that such an inability should not be taken as good evidence that there is no divine justification for evil. This is the response offered by skeptical theists, so named because of their skepticism about their own ability to discern God’s reasons for creating and governing the world as He does. Several lines of reasoning have been offered for this position, ranging from arguments from analogy, likening the cognitive distance between us and God to that between a very young child and her parents, to arguments focusing on the massive complexity of the causal networks in the world, and our inability to comprehend how actual and possible goods and evils are connected. The view has also been subject to various objections, regarding purported negative implications of the view for theological knowledge and trust in God, and moral deliberation and action. The debate regarding these issues is ongoing, and the interested reader should see “Skeptical Theism” for more information.

While skeptical theism is a response to the problem of evil available to theological determinists and indeterminists alike, theological determinists who embrace the view must grapple with further issues. Like those offering a theodicy or defense, theological determinists who maintain their justified ignorance of God’s reasons must still come to terms with the fact that, on their view, evil is not merely permitted but determined by God. This would seem to lead to a sort of double-mindedness specifically about the value of moral evil in the world. It is, after all, central to religious practice to strive to see the events in one’s life from God’s perspective, and to value them as God would, in His wisdom and benevolence. Thus, if some horrendous evil—say, severe child abuse—is divinely determined, then one ought to strive to accept, and even embrace, it as instrumental to God’s purposes and so for the greater good. Such an attempt, however, would seem to be in serious tension with a teaching central to traditional theism, that moral evil is opposed by God, and should be opposed by humans as well.

5. References and Further Reading

  • Adams, Marilyn McCord (1999). Horrendous Evils and the Goodness of God. Ithaca, NY: Cornell University Press.
    • Contains proposal that experience of evil might facilitate humans’ identification with Christ and union with God.
  • Adams, Robert (1987). “Middle Knowledge and the Problem of Evil.” The Virtue of Faith and Other Essays in Philosophical Theology. New York: Oxford University Press.
    • Raises grounding objection against the possibility of middle knowledge.
  • Baker, Lynne Rudder (2003). “Why Christians Should Not Be Libertarians: An Augustinian Challenge.” Faith and Philosophy, Vol. 20 No. 4, pp. 460-478.
    • Argues for compatibilism on the basis of tradition, and offers standard compatibilist account of free will.
  • Basinger, David and Randall Basinger (1986). Predestination and Free Will: Four Views of Divine Sovereignty and Human Freedom. Downers Grove, IL: InterVarsity Press.
    • Contains discussion of how embracing theological determinism might shape one’s personal deliberations and decision-making.
  • Boethius (1969). The Consolation of Philosophy. Trans. V. E. Watts. New York: Penguin Books.
    • Contains proposal of divine timelessness as resolution to the problem of divine foreknowledge and human freedom.
  • Brower, Jeffrey (2011). “Simplicity and Aseity.” The Oxford Handbook of Philosophical Theology. Ed. Flint, Thomas and Michael Rea. Oxford: Oxford University Press.
    • Defines aseity and summarizes argument for theological determinism on the basis of aseity.
  • Byrne, Peter (2008). “Helm’s God and the Authorship of Sin.” Reason, Faith and History: Philosophical Essays for Paul Helm. Ed. M. W. F. Stone. Burlington, VT: Ashgate.
    • Raises concern that Helm’s theological determinism commits him to the claim that God “plans, purposes, and values moral evil.”
  • Curley, Edwin (2003). “The Incoherence of Christian Theism.” The Harvard Review of Philosophy, Vol. 11, pp. 74-100.
    • Contains argument that the risky view of providence is incompatible with divine wisdom and care for creation.
  • Farrer, Austin (1967). Faith and Speculation. London: A. and C. Black.
    • Explicates the doctrine of analogy and its implications for the “paradox” of divine agency and human freedom.
  • Feinberg, John S. (2001). No One Like Him. Wheaton, IL: Crossway Books.
    • Defends theological determinism on biblical, theological, and philosophical grounds, and responds to a number of objections to the view.
  • Flint, Thomas (1998). Divine Providence: The Molinist Account. Ithaca, NY: Cornell University Press.
    • Contains argument for superiority of the risk-free over the risky view of providence.
  • Gaskin, Richard (1993). “Conditionals of Freedom and Middle Knowledge.” The Philosophical Quarterly, Vol. 43, No. 173, pp. 412-430.
    • Argues against claim that counterfactuals of freedom need grounds.
  • Garrigou-Lagrange, R. (1936). God, His Existence and His Nature: A Thomistic Solution of Certain Agnostic Antinomies, Vol. 2. Trans. Dom Bede Rose. London: B. Herder Book Co.
    • Contains argument for theological determinism on the basis of God’s aseity.
  • Hasker, William (1985). “Foreknowledge and Necessity,” Faith and Philosophy, Vol. 2 No. 2, pp. 121-156.
    • Criticizes Plantinga’s distinction between counterfactual power over the past and the power to bring about the past.
  • Hasker, William (1989). God, Time and Knowledge. Ithaca, NY: Cornell University Press.
    • Contains argument that simple foreknowledge is providentially useless to God.
  • Helm, Paul (1993). The Providence of God. Downers Grove, IL: InterVarsity Press.
    • Contains arguments for the “risk-free” view of providence on the basis of divine knowledge and goodness.
  • Hick, John (2010). Evil and the God of Love. New York: Harper and Row.
    • Contains explication and defense of the soul-making theodicy.
  • Hoonhout, Michael (2002). “Grounding Providence in the Theology of the Creator: The Exemplarity of Thomas Aquinas.” The Heythrop Journal, Vol. 43, No. 1, pp. 1-19.
    • Defends Aquinas’ seemingly contradictory “double affirmations” of divine causation and human freedom.
  • Hunt, David (2009). “The Providential Advantage of Divine Foreknowledge.” Arguing about Religion. Ed. Timpe, Kevin. New York: Routledge, pp. 374-385.
    • Argues that simple foreknowledge enables God to secure results that He would not be able to secure without it.
  • McCann, Hugh (2005). “The Author of Sin?” Faith and Philosophy Vol. 22. No. 2, pp. 144-159.
    • Argues that theological determinism does not endanger human freedom, as natural determinism does, and that God cannot do moral wrong, since morality is grounded in divine commands.
  • Pereboom, Derk (2011). “Theological Determinism and Divine Providence.” Molinism: The Contemporary Debate. Ed. Ken Perszyk. Oxford: Oxford University Press, pp. 262-280.
    • Defends compatibility of hard theological determinism and traditional theism.
  • Pereboom, Derk (2015). “Libertarianism and Theological Determinism.” Free Will and Theism: Connections, Contingencies, and Concerns. Ed. Timpe, Kevin and Dan Speak. Under contract with Oxford University Press.
    • Offers response to the problem of evil compatible with hard theological determinism.
  • Plantinga, Alvin (1974). God, Freedom, and Evil. Grand Rapids, MI: Eerdmans.
    • Develops a free will defense.
  • Plantinga, Alvin (1985). “Reply to Robert M. Adams.” Alvin Plantinga (Profiles. Vol. 5). Ed. Tomberlin, James and Peter van Inwagen. Dordrecht: D. Reidel, pp. 371-382.
    • Contains proposal of possible grounds for counterfactuals of freedom.
  • Plantinga, Alvin (1986). “On Ockham’s Way Out.” Faith and Philosophy, Vol. 3 No. 3, pp. 235–269.
    • Defends claim that humans have counterfactual power over God’s past knowledge.
  • Rogers, Katherin (2000). Perfect Being Theology. Edinburgh: Edinburgh University Press.
    • Considers implications of the description of God as “that than which none greater can be conceived.”
  • Stump, Eleonore (1985). “The Problem of Evil.” Faith and Philosophy Vol. 2 No. 4, pp. 392-423.
    • Contains proposal that sin and suffering facilitate human acceptance of saving grace and process of sanctification.
  • Stump, Eleonore (2003). Aquinas. New York: Routledge.
    • Contains response to argument for theological determinism on the basis of divine aseity.
  • Tanner, Kathryn (1994). “Human Freedom, Human Sin, and God the Creator.” The God Who Acts: Philosophical and Theological Explorations. Ed. Thomas Tracy. University Park: Pennsylvania State University Press, pp. 111-135.
    • Argues for the compatibility of universal divine causation and libertarian human freedom.
  • Tracy, Thomas (1994). “Divine Action, Created Causes, and Human Freedom.” The God Who Acts: Philosophical and Theological Explorations. Ed. Thomas Tracy. University Park: Pennsylvania State University Press, pp. 77-102.
    • Contains critique of attempt to hold together theological determinism and libertarian human freedom.
  • Turner, Jason (2013). “Compatibilism and the Free Will Defense.” Faith and Philosophy, Vol. 30, No. 2, pp. 125-137.
    • Offers version of free will defense compatible with theological determinism.
  • Vicens, Leigh (2012). “Divine Determinism, Human Freedom, and the Consequence Argument.” International Journal for Philosophy of Religion, Vol. 71, No. 2, pp. 145-155.
    • Argues that if natural determinism is incompatible with human freedom, so is theological determinism.
  • Zagzebski, Linda (2011). “Eternity and Fatalism.” God, Eternity, and Time. Ed. Christian Tapp. Aldershot: Ashgate Press.
    • Argues that appeals to divine timelessness do not solve the problem of how divine foreknowledge is compatible with our ability to do otherwise. A parallel point can be made about the problem of how divine foreknowledge is compatible with indeterminism.


Author Information

Leigh Vicens
Email: lvicens@augie.edu
Augustana College
U. S. A.

Truthmaker Theory

Truthmaker theory is the branch of metaphysics that explores the relationships between what is true and what exists. Discussions of truthmakers and truthmaking typically start with the idea that truth depends on being, and not vice versa. For example, if the sentence ‘Kangaroos live in Australia’ is true, then there are kangaroos living in Australia. And if there are kangaroos living in Australia, then the sentence ‘Kangaroos live in Australia’ is true. But we can ask whether the sentence is true because of the way the world is, or whether the world is the way it is because the sentence is true. Truthmaker theorists make the former claim that the sentence is true because of what exists in the world; it is not the case that the world is the way it is because of which sentences are true. Truthmaker theorists use this fundamental idea as a starting point for clarifying the nature of truth and its relationship to ontology, and to advance various views in metaphysics concerning the nature of the past and future, counterfactual conditionals, modality, and many others. Because truthmaker theorists end up with differing views concerning all these matters, what ultimately unites them is not any single thesis but rather a commitment to thinking that the idea of truthmaking is a useful one for pursuing metaphysical inquiry. Others might conceive of ‘truthmaker theory’ more strictly (such as by requiring a commitment to all truths having truthmakers, or all truthmakers being of a particular ontological variety), though defining the enterprise in this way will inevitably fail to capture all those earnestly pursuing investigation into truthmaking.

Philosophical discussion of truthmakers falls into two broad categories. First, there are ‘internal’ debates about the nature of truthmaker theory itself. For instance, there are open questions as to which truths have truthmakers: do all truths have truthmakers, or just some proper subset of truths (such as the positive truths or synthetic truths)? There are questions as to the nature of the truthmaking relation: is it a necessary relation or a contingent one? Is it a kind of supervenience, dependence, or something else? And it is an open question as to what sorts of objects serve as truthmakers: perhaps there are states of affairs, tropes, or counterparts that serve as truthmakers, or perhaps none of these. There is also frequent debate as to whether truthmaker theory constitutes a theory of truth (similar to, in particular, the correspondence theory of truth), or whether it is an entirely separate philosophical enterprise, one concerned more with metaphysics than with semantics.

There are also ‘external’ truthmaking discussions that apply basic ideas about truthmaking to longstanding metaphysical topics. The hope is that truthmaker theory can bring new insights and argumentative resources to bear on traditional metaphysical inquiries. For example, truthmaker theorists investigate whether presentism—the view that only the present exists—can satisfy the obligations of truthmaker theory. Truthmaker theory has also been wielded against metaphysical views such as behaviorism and phenomenalism, and it has made contributions to the metaphysics of modality.

Table of Contents

  1. History of Truthmaker Theory
  2. The Truthmaking Relation
  3. Maximalism and Non-Maximalism
  4. Kinds of Truthmakers
  5. Truthmaking Principles
  6. Truthmaking and Truth
  7. Truthmaking and the Past
  8. Truthmaking and Modality
  9. Objections to Truthmaker Theory
  10. References and Further Reading

1. History of Truthmaker Theory

Perhaps the first occurrence of a basic truthmaking idea is found in Aristotle’s Categories. There Aristotle points out that if a certain man exists, then a statement that that man exists is true, and vice versa. But it seems that there is a difference in priority between these two states of affairs. The statement is true because the man exists; it is not the case that the man exists because the statement is true. Aristotle is, in effect, raising a ‘Euthyphro’ question, drawing on Plato’s famous dialogue. Is the statement true because of the way the world is, or is the world the way it is because of which statements are true? Aristotle chose the former answer, and set the stage for discussions of truthmakers far down the road.

The idea of a truthmaker did not play a significant role in philosophy until the rise of logical atomism in the work of Bertrand Russell and Ludwig Wittgenstein. In The Philosophy of Logical Atomism, Russell takes it to be a truism that there are facts, and says that facts are the sort of thing that make propositions true or false. The project of logical atomism is then to determine what sorts of facts are ontologically required in order to make true all the different kinds of propositions. The most basic kind of fact for Russell is the atomic fact, which consists of no more than the possession of a quality by a particular object (or of the holding of a relation between multiple objects). Sentences like ‘X is green’ and ‘X is heavier than Y’, if true, are made true by atomic facts. More complex sentences like ‘X is green and is heavier than Y’ do not call for more complex, ‘molecular’ facts. Instead, the same atomic facts from before can explain the truth of conjunctive sentences. Particularly worrisome are negative truths, such as ‘X is not red’. Russell believed that there need to be negative facts to account for negative truths. In advocating for the existence of negative facts, Russell claims to have ‘nearly produced a riot’ when he suggested the idea at a seminar at Harvard (1985: 74). The idea that reality contains entities that are fundamentally negative in nature has long struck many philosophers as puzzling and metaphysically unacceptable, and there has been continuing controversy over what, if anything, makes negative truths true.

The next major advance in truthmaker theory came from the work of the Australian philosopher David Armstrong. Armstrong—who credits fellow philosopher Charlie Martin with inspiring him on the topic—has long advocated the use of truthmakers in metaphysics. Armstrong cites two paradigm examples of how truthmakers can be put to work in philosophy. First, there is the case of behaviorism, as defended by Gilbert Ryle (1949). Ryle’s philosophy of mind relies heavily on dispositions: he thought that claims involving mental terms could be analyzed into subjunctive conditionals describing how a person is disposed to behave. What it is for Ryle to believe that he is a philosopher is that if he were to be asked what his profession was, he would reply ‘philosopher’. While this counterfactual may be true, the truthmaker theorist asks: but what is it that makes it true? The behaviorist faces the challenge of either accepting this counterfactual as a brute truth, a truth with no further explanation, or admitting that it is made true by some sort of mental state, thus abandoning the supposed ontological economy of behaviorism.

Similarly, Armstrong argues that the phenomenalism of philosophers such as Berkeley and Mill faces a parallel difficulty. According to phenomenalism, all that exists are sensory impressions. But might it not be true that there is a rock on the dark side of the moon that no one has ever observed? The phenomenalist accounts for this idea by claiming that if you were to go to that part of the moon, you would have a rock-like sensory impression. But again: what makes that counterfactual true? The anti-phenomenalist will say that the counterfactual is true because it is made true (at least in part) by the rock itself. The phenomenalist, limited by an ontology of actual sense impressions, is hard-pressed to find a plausible answer to the truthmaker theorist’s question.

In the wake of Armstrong’s (and others’) writings, truthmaker theory became a lively corner of contemporary metaphysics.

2. The Truthmaking Relation

A key concern of truthmaker theory is giving an account of the truthmaking relation. When some object X is a truthmaker for some truth Y, what is the nature of the relationship that X and Y stand in?

One universally agreed upon fact about the truthmaking relation is that it is not a one-one relation. That is, in principle an object can be a truthmaker for multiple truths, and any given truth can have multiple truthmakers. For example, Socrates is frequently thought to be a truthmaker not only for ‘Socrates exists’, but also for ‘Socrates is human’ and ‘There are humans’. For it is impossible that Socrates—who is essentially human—could exist and yet any of those sentences be false (at least given some familiar assumptions about essences). Similarly, ‘There are humans’ is made true by many things—anything that is essentially human, in fact. Hence, it can be misleading to ask what the truthmaker for some truth is, since it is not necessary that truths have only one, unique truthmaker.

So what exactly is the nature of the relation? To ask this question is to probe what sort of analysis, if any, can be given of the truthmaking relation. Many truthmaker theorists have argued that the truthmaking relation, at the least, requires metaphysical necessitation. Some object X necessitates the truth of Y if and only if it is metaphysically impossible for X to exist, and yet Y not be true. In the language of possible worlds, X necessitates Y if and only if every possible world in which X exists is a world in which Y is true. Necessitation is thought to be a necessary component of the truthmaking relation because it shows that the truthmaker’s existence is a sufficient condition for the truth in question. If X’s existence were not enough to guarantee Y’s truth, then X would not yet adequately explain or account for the truth of Y. Something else, in addition to X, would be needed to properly account for Y’s truth.
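The necessitation condition admits of a compact statement. Writing TM(x, ⟨p⟩) for ‘x is a truthmaker for the truthbearer ⟨p⟩’, E!x for ‘x exists’, and the box for metaphysical necessity (a symbolization introduced here for convenience, not one drawn from any particular author), the claim that truthmaking requires necessitation is:

\[
\mathrm{TM}(x, \langle p \rangle) \rightarrow \Box\,(E!x \rightarrow \mathrm{True}\langle p \rangle).
\]

Equivalently, in possible-worlds terms: if TM(x, ⟨p⟩), then every world at which x exists is a world at which ⟨p⟩ is true. The conditional runs only one way; necessitation is here proposed as a necessary condition on truthmaking, not a sufficient one.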

Not all theorists have agreed that necessitation is necessary for truthmaking. Hugh Mellor (2003), for instance, at one point argued that truthmakers need not necessitate the truths that they make true. Mellor relied on the controversial case of general truths, such as ‘All gold spheres are less than a mile in diameter’. Suppose there are three such spheres, A, B, and C. Then there are three states of affairs (Mellor calls them ‘facta’): A’s being less than a mile in diameter, B’s being less than a mile in diameter, and C’s being less than a mile in diameter. For Mellor, the truthmaker for the general truth is no more than the sum of the three states of affairs. But these three states of affairs do not necessitate the truth of ‘All gold spheres are less than a mile in diameter’, since it is possible that that very sum could exist, and yet the sentence be false. That is a case where, for example, A, B, and C all exist with diameters less than a mile, but a fourth gold sphere D exists whose diameter is greater than a mile. Mellor reasons that the sum of the three states of affairs is the truthmaker for ‘All gold spheres are less than a mile in diameter’, and thus concludes that truthmaking does not require necessitation. (Furthermore, on his view, the truthmaking relation is contingent in the sense that whether X is a truthmaker for Y can vary from world to world. Those who accept necessitation would reject this consequence.) Other theorists argue that truthmaking does require necessitation, and so the sum is not a truthmaker for the sentence; something else (such as one of the totality states of affairs discussed below) is needed to provide a truthmaker, or perhaps it has no truthmaker at all (according to advocates of the supervenience accounts discussed below).

It is more common for philosophers to challenge the sufficiency of the necessitation condition, rather than its necessity. The concern that necessitation is not enough derives in large part from the fact that all objects necessitate the truth of all necessary truths. This is the problem of trivial truthmakers for necessary truths. For example, Socrates necessitates ‘2 + 2 = 4’, for it is metaphysically impossible for Socrates to exist and yet ‘2 + 2 = 4’ be false. Similarly, if God exists, and exists necessarily, then a torn, dog-eared copy of Lolita rotting away in some landfill necessitates the truth of ‘God exists’. If it is impossible for that sentence to be false, then it is impossible for that sentence to be false should that rotting copy of Lolita exist. But—according to this line of thought—Socrates is not a truthmaker for ‘2 + 2 = 4’, and the copy of Lolita is not a truthmaker for ‘God exists’. Truthmaking requires more than just necessitation.
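Put schematically in the notation above, the problem arises because anything whatsoever necessitates a necessary truth:

\[
\Box\,\mathrm{True}\langle p \rangle \rightarrow \Box\,(E!x \rightarrow \mathrm{True}\langle p \rangle) \quad \text{for any object } x,
\]

since a conditional with a necessarily true consequent is itself necessarily true. If necessitation sufficed for truthmaking, every object would thereby count as a truthmaker for every necessary truth.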

Theories divide as to what exactly else is required of the truthmaking relation. Trenton Merricks (2007) has argued that truthmaking requires ‘aboutness’, in that X is a truthmaker for Y only if Y is about X. Mathematical claims are not about Socrates, and so Socrates cannot make them true. ‘God exists’ is about God, so only God is a candidate truthmaker for it. Those who accept Merricks’s proposal thereby avoid the problem of trivial truthmakers for necessary truths.

E. J. Lowe (2007) conceives of truthmaking as depending upon the essences of propositions. X is a truthmaker for Y only if it is part of the essence of Y that it be true should an object like X exist. This amendment solves the problem of trivial truthmakers because it is no part of the essence of the proposition expressed by ‘God exists’ that it be true should the copy of Lolita exist. The essence of the proposition that God exists has nothing to do with the rotting copy of Lolita, just as the proposition that two and two are four has nothing to do with Socrates. Lowe criticizes his own view on the grounds that it implies that propositions can be related to things that do not exist. For example, Batman could have been a truthmaker for ‘There are humans’, since the nature of the proposition that there are humans is such that it would be true should things like Batman exist. So according to Lowe’s account, the proposition’s essence appears to stand in a relation to a non-existent entity, which is concerning for anyone who takes relations to entail the existence of their relata.

Regardless of how the problem of trivial truthmakers is solved, theorists seem to be agreed that the truthmaking relation, however ultimately analyzed, needs to be treated as a hyperintensional relation. That is, an object might exist in exactly the same possible worlds in which a given claim is true, and yet fail to be a truthmaker for that claim. Hence, truthmaking is a relation that is more discriminating than modal relations such as necessitation. Truthmaking is thus more like a dependence relation, or a grounding relation, than relations like necessitation or supervenience. Sometimes it is said that truthmaking is an ‘in virtue of’ relation: X is a truthmaker for Y because Y is true ‘in virtue’ of the existence of X (for example, Rodriguez-Pereyra 2006c). X is somehow ontologically responsible for the truth of Y, and no merely intensional relation is thought to capture this deeper connection between a truth and its truthmaker.

Some theorists accept that truthmaking is a kind of ‘in virtue of’ relation, but deny that it can be further analyzed. This is the view of, for example, Gonzalo Rodriguez-Pereyra (2006c), who holds that the truthmaking relation is a primitive notion that resists analysis.

In addition to the project of analyzing the components of the truthmaking relation (or admitting that such an analysis cannot be offered), there is also a question of what the structural and logical features of the relation are. One issue concerns the nature of the kinds of relata that the relation takes. The relation is typically understood to hold between a truth and a truthmaker. In this sense it is usually ‘cross-categorial’ in that it obtains between very different kinds of things, items from different categories. The truth that Socrates exists is made true by Socrates: here we have a case where the truthmaking relation obtains between a person and a truthbearer.

For many truthmaker theorists, there is no restriction on the kind of object that can be a truthmaker. To be a truthmaker, something just needs to appropriately account for the truth of some truthbearer. On this view, truthmakers are just whatever sorts of things are ontologically available. Other views impose restrictions. For example, one might argue that only facts or states of affairs are properly thought of as truthmakers. On this view, Socrates could not be a truthmaker for ‘Socrates exists’ because Socrates himself is not a fact or state of affairs. (At best he is a sort of abstraction from various states of affairs or facts.) There must be some other entity, such as the fact that Socrates exists, or a state of affairs composed by Socrates and an existence property, that makes the sentence true. Other views would find this perspective ontologically inflationary: we do not need, in addition to Socrates, some further state of affairs that requires a property of existence in order to give an ontological account of the truth of ‘Socrates exists’. Finally, some have thought that only certain entities deserve to be thought of as truthmakers, such as fundamental entities (for example, Cameron 2008). On this view, X is a truthmaker for Y only if X is a fundamental entity.

As for the other side of the truthmaking relation, theorists disagree as to what sorts of objects are the bearers of truth. More restrictive views maintain that there is only one sort of truthbearer, or that there is only one primary kind of truthbearer, compared to which all other truthbearers are derivative. For example, a common view is that some sentence or belief bears truth only in virtue of expressing a true proposition, where propositions are the primary bearers of truth and falsity. More liberal views are happy to concede that there are a variety of truthbearers, and that they can all stand in the truthmaking relation. It is not clear that substantive questions about truthmaker theory turn on one’s background views about truthbearers, but it is wise to be sensitive to the ways in which truthmaking considerations might be affected by issues concerning truthbearers. For example, one could argue that while Socrates is a sufficient truthmaker for the proposition that Socrates exists (for it is impossible for Socrates to exist and yet that proposition be false), he is not a sufficient truthmaker for the sentence ‘Socrates exists’ because it is possible for Socrates to exist and yet the sentence be false, should the sentence have turned out to have a different meaning. For example, it is possible that ‘Socrates exists’ could have meant something else—such as that Socrates is Persian—and so it is possible that Socrates could have existed and that sentence be false. On this reading, then, one might take the truthmaker for the (uninterpreted) sentence to be more involved than the truthmaker for the proposition that sentence contingently expresses. What makes ‘Socrates exists’ true is Socrates plus whatever it is that makes it true that ‘Socrates exists’ means that Socrates exists.

Finally, consider some of the logical features of the truthmaking relation. In particular, there is the issue of how truthmaking stands with respect to reflexivity, symmetry, and transitivity. A relation is reflexive when every object that stands in the relation stands in the relation to itself. This would mean that every truth is its own truthmaker. The cross-categorial nature of truthmaking prohibits this possibility. Because not all truthmakers are truthbearers, the truthmaking relation is not reflexive.

Many theorists argue that truthmaking is irreflexive, in that there is no instance of something standing in the truthmaking relation to itself. (Hence, irreflexivity is stronger than the view that truthmaking is non-reflexive, which means that not every truth is its own truthmaker.) The general thought here is that truthmaking is a kind of dependence relation, and nothing can depend upon itself. But there are plausible counterexamples to irreflexivity. For example, the proposition that there are propositions appears to be a case of self-truthmaking. Because that proposition exists, it is true. One might respond by saying that the relation in this case actually holds between the existence of the proposition and the truth of the proposition, and so not between one and the same thing. This response, however, requires a substantial rethinking of the nature of the truthmaking relation (such that it no longer holds between truthmakers and truthbearers), and the apparent reification of properties like truth and existence.

Similar remarks apply to symmetry. A symmetric relation is one where if X bears it to Y, Y bears it to X. The cross-categorial nature of truthmaking again shows that the truthmaking relation is not in general symmetric. Not all truthmakers are truthbearers. But because some truthbearers can be truthmakers, the possibility for symmetry arises, in which case the relation is just non-symmetric. (Again, some will resist by suggesting that truthmaking, as a kind of dependence, must be asymmetric: if X depends on Y, Y does not depend on X.) In fact, any case of reflexive truthmaking will provide a case of symmetric truthmaking.

Finally consider transitivity: if X stands in R to Y, and Y stands in R to Z, then X stands in R to Z. Transitivity fails for obvious reasons. Socrates is a truthmaker for the proposition that Socrates exists, and the proposition that Socrates exists is a truthmaker for the proposition that there are propositions. But Socrates is no truthmaker for the proposition that there are propositions. Truthmaking is not transitive in general, but there could be individual instances of it (drawing on the same cases of reflexivity and symmetry).
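The counterexample to transitivity can be displayed in a single line, again using the schematic TM notation (which is illustrative, not standard in the literature):

\[
\mathrm{TM}(\text{Socrates}, \langle \text{Socrates exists} \rangle), \quad \mathrm{TM}(\langle \text{Socrates exists} \rangle, \langle \text{there are propositions} \rangle), \quad \text{yet} \quad \neg\mathrm{TM}(\text{Socrates}, \langle \text{there are propositions} \rangle).
\]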

3. Maximalism and Non-Maximalism

Another central question any truthmaker theorist must address concerns which truths have truthmakers. Perhaps all truths have truthmakers, or perhaps just some proper subset of the truths have truthmakers. Truthmaker maximalism is the thesis that all truths have truthmakers. Truthmaker non-maximalism maintains that there are truthmaker gaps: truths that have no truthmaker.

There have not been many arguments for maximalism. Its defenders frequently claim that the view is on its own quite intuitive and plausible. Resisting maximalism, according to such advocates, threatens to court the view that truths can ‘float free’ of reality. A truth without a truthmaker, on this view, is a brute truth, a truth for which there can be no explanation. Such truths, if they exist, are thought by maximalists to be metaphysically mysterious. Others have argued for maximalism by conceiving of having a truthmaker as being somehow essential to being true. If what it is to be true is to have a truthmaker, then something cannot be true without having a truthmaker. (The relationship between truth and truthmaking is further discussed in section 6.)

One motivation for non-maximalism is the existence of plausible counterexamples to the thesis that all truths have truthmakers. Consider negative existential truths, such as ‘There are no merlions’. On the face of it, the sentence is true not because some kind of thing exists; it is true because things of a certain kind (namely, merlions) do not exist. A truthmaker for the negative existential would have to be some sort of entity whose existence excluded the existence of merlions, and explained their non-existence. But there is nothing in the world among the ‘positive’ entities that can guarantee that there are no merlions. Take, for example, the set of all the actually existing animals. Taken together, their existence does not guarantee the absence of merlions. For that set of animals could exist and yet it still be true that there are, in addition, merlions. It is only if we somehow combine the existence of those animals together with the fact that those animals are all the animals that we can find a suitable truthmaker for the negative existential.

Armstrong introduced a ‘totaling’ relation in response to these difficulties. For example, there is a state of affairs composed of the sum of all the animals standing in the totaling relation to the property of being an animal. This state of affairs fixes which animals exist, and so excludes the existence of any merlions. Armstrong generalizes this approach when he argues for the existence of what he calls the ‘totality state of affairs’. This is a second-order state of affairs that is composed of the sum of all the first-order states of affairs standing in the totaling relation to the property of being a first-order state of affairs. The existence of this second-order state of affairs thereby guarantees that the first-order states of affairs that partially compose it are all the first-order states of affairs there are. This single totality state of affairs can be a truthmaker for all negative existentials (and every other truth besides).

Like Russell’s negative facts, totality states of affairs are thought by many to be entities that are not fully ‘positive’. Their existence seems to concern what is not, in addition to what is, and this is thought to be metaphysically suspicious. One way of putting the worry is that they are entities whose existence bears on the existence of things that are fully distinct from them. Ordinarily, one object’s existence does not bear on the existence of other objects that are separate from it. The existence of the Statue of Liberty neither entails nor excludes the existence of the Eiffel Tower. Neither does their existence exclude the existence of other potential landmarks that happen not to exist (such as a replica of the Statue of Liberty in Victoria Harbour). Totality states of affairs are different. The totality of animals excludes the existence of merlions, though merlions are entirely distinct from totalities of animals. For this reason, some philosophers have sought to develop non-maximalist approaches to truthmaker theory.

One prominent way of defending non-maximalism is to defend alternate principles that attempt to capture the dependence of truth upon being, but without admitting that all truths have truthmakers. One such principle is the thesis that truth supervenes on being, and it has been defended in both strong and weak versions. The strong version, defended by John Bigelow (1988), is the principle that if some proposition P is true at some world W1 but not world W2, then there must exist some entity at W1 that does not exist at W2, or some entity that exists at W2 but not W1. This principle captures the idea that what is true cannot vary from possible world to possible world unless there is some corresponding difference in the ontology of those worlds. Truth thus depends on being, although some truths escape having truthmakers. To see why, suppose that ‘There are merlions’ is false at W1 but true at W2. The principle implies that something must exist in one of these worlds but not the other. In this case, there is a merlion that exists at W2 but does not exist at W1. Although the negative existential ‘There are no merlions’ is true at W1, it has no truthmaker in that world. Nevertheless, its truth depends on the ontology of the world in the sense that, had it been false, there would have been something in the world’s ontology (namely, a merlion) that it does not currently have.

David Lewis (2001) has defended a weaker supervenience principle. For Lewis, if some proposition P is true at some world W1 but not world W2, then either there must exist some entity at only one of the worlds, or some group of things must stand in some fundamental relation at one of the worlds but not the other. Like the strong supervenience principle, this weaker principle allows one to accept negative existentials as truthmaker gaps, but also allows one to treat contingent predications as truthmaker gaps. For example, suppose that while W1 and W2 contain all the same objects, they differ with respect to the properties those objects have. Suppose, for instance, that some object O is blue in W1, but red in W2. Because ‘O is blue’ is true in W1 but false in W2, the strong supervenience principle requires that there be some entity that exists in one of the worlds but not the other. But ex hypothesi the two worlds have the same ontology. The advocate of strong supervenience (alongside the maximalist) requires something like a blueness trope or state of affairs (that is, O’s being blue) to exist in W1 but not W2. The contingent predication still needs a truthmaker. The advocate of weak supervenience, by contrast, does not require the contingent predication to have a truthmaker. While there is no entity that guarantees the truth of ‘O is blue’ in W1, its truth nevertheless depends on being in the sense that had it been false, there would have to be some difference in what exists, or in what properties those things have and what relations they stand in. The worlds where ‘O is blue’ is false are worlds where either O does not exist, or has different properties, such as being red.
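The two supervenience principles can be set out schematically side by side, with W1 and W2 ranging over possible worlds (the symbolization is again ours). The strong principle, after Bigelow (1988), is:

\[
(\langle p \rangle \text{ is true at } W_1 \text{ but not at } W_2) \rightarrow \exists x\,(x \text{ exists at exactly one of } W_1, W_2).
\]

The weak principle, after Lewis (2001), adds a disjunct:

\[
(\langle p \rangle \text{ is true at } W_1 \text{ but not at } W_2) \rightarrow \exists x\,(x \text{ exists at exactly one of } W_1, W_2) \vee \exists x_1 \ldots x_n\,(x_1 \ldots x_n \text{ stand in some fundamental relation at exactly one of } W_1, W_2).
\]

In the blue/red example just given, only the weak principle is satisfied without further ontology: the two worlds share their objects but differ over which fundamental properties (the one-place case of relations) those objects instantiate.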

Maximalism, strong supervenience, and weak supervenience are all attempts to capture the basic intuition behind truthmaker theory, and avoid the commitment to there being truths that ‘float free’ of reality. Some philosophers, however, have admitted that there are truths that do not depend on being at all, in any sense. Roy Sorensen (2001), for example, has argued that the puzzling truthteller sentence ‘This very sentence is true’ has a determinate truth value, but that it can never be known. Unlike the paradoxical liar sentence (‘This very sentence is false’), the truthteller is consistent: it can be true or false without contradiction. Sorensen argues that the truthteller is what we might call a deep truthmaker gap. Its truth does not depend on being in any sense, whereas shallow truthmaker gaps like contingent predications and negative existentials (if indeed they are truthmaker gaps) still in some sense depend on being. Sorensen argues that the truthteller’s status as a deep truthmaker gap explains why its truth value is unknowable: because we usually come to know truths by way of some kind of connection to their truthmakers, the fact that the truthteller (or its negation) lacks a truthmaker explains why we do not know its truth value.

Other forms of non-maximalism include the thesis that only ‘positive’ truths have truthmakers (however the positive/negative distinction may be articulated), that only synthetic truths have truthmakers, and that only contingent truths have truthmakers. It is incumbent upon theorists adopting such views that they explain why negative, analytic, or necessary truths are best thought of as not requiring truthmakers when accounting for their truth.

Finally, consider the following argument against maximalism, which does not turn at all on the plausibility of the various sorts of ontological truthmaking posits. Consider the sentence ‘This very sentence has no truthmaker’. This sentence is provably true (see Milne 2005). To see why, first suppose it is false. In that case, it has a truthmaker, in which case it is true: contradiction. So it must be true after all. Therefore, it has no truthmaker, since that is what it says about itself. It is a truthmaker gap. Here, simple reasoning leads to the view that there is at least one truth without a truthmaker. Many maximalists reject this argument (sometimes by assimilating it to the liar paradox), but nevertheless it remains to be seen where the reasoning goes wrong (see, for example, Rodriguez-Pereyra 2006a).
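Milne’s reasoning can be laid out as a short reductio. Let M abbreviate the sentence ‘M has no truthmaker’, and assume only that a truthmaker makes its truth true (this regimentation is a sketch of the argument, not a quotation of Milne 2005):

\[
\begin{aligned}
&1.\ M \text{ is false.} && \text{(assumption for reductio)}\\
&2.\ \exists x\, \mathrm{TM}(x, M). && \text{(1, by what } M \text{ says)}\\
&3.\ \exists x\, \mathrm{TM}(x, M) \rightarrow M \text{ is true.} && \text{(truthmakers make truths true)}\\
&4.\ M \text{ is true, contradicting 1.} && \text{(2, 3)}\\
&5.\ \text{So } M \text{ is true, and hence } \neg\exists x\, \mathrm{TM}(x, M). && \text{(1-4, by what } M \text{ says)}
\end{aligned}
\]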

4. Kinds of Truthmakers

Truthmaker theorists are motivated by ontological questions: we can make progress on figuring out what exists by pursuing questions about what truthmakers there are. Considerations about truthmaking have thus led to different views about what exactly is included in the world’s ontology. These considerations often go hand in hand with the ancient metaphysical debate between realists and nominalists in discussions over the nature and existence of universals.

In his logical atomism, Russell just accepted as a truism the existence of facts, which are the sorts of things that make propositions true. Armstrong accepts the existence of similar objects, but he calls them ‘states of affairs’. A state of affairs is a complex object composed (in a non-mereological way) by a particular together with a universal. To offer a simplified example, suppose there is a universal of being a philosopher. Socrates instantiates this universal, and so in addition to the existence of Socrates and the universal, there is a third thing—we might call it ‘Socrates’s being a philosopher’—that is a kind of fusion of the other two.

Armstrong offers a truthmaking argument for the existence of states of affairs. It is true that Socrates is a philosopher. But Socrates does not make this claim true. Because the claim is a contingent predication, it is possible that Socrates could have existed and yet not been a philosopher. So Socrates does not necessitate the truth of ‘Socrates is a philosopher’, and so is not a truthmaker for the sentence. Nor does the universal being a philosopher necessitate ‘Socrates is a philosopher’, for it might have existed without Socrates being a philosopher. (Something else could have instantiated the universal.) Furthermore, not even the mereological sum of Socrates together with being a philosopher necessitates ‘Socrates is a philosopher’. For a world in which Socrates exists but is not a philosopher, though someone else is, is a world where the mereological sum exists but the sentence is false. On this basis, Armstrong argues that there must be something else, a state of affairs, that is a fusion of the particular and the property. Every world where the state of affairs composed by Socrates and being a philosopher exists is a world where ‘Socrates is a philosopher’ is true. In this way, Armstrong defends the existence of states of affairs in the name of offering a satisfying truthmaker theory for contingent predications.
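With s for Socrates, F for the universal being a philosopher, s + F for their mereological sum, and [s’s being F] for the state of affairs, the shape of the argument can be put in the earlier notation (our regimentation, not Armstrong’s own symbolism):

\[
\neg\Box\,(E!s \rightarrow \mathrm{True}\langle Fs \rangle), \qquad \neg\Box\,(E!F \rightarrow \mathrm{True}\langle Fs \rangle), \qquad \neg\Box\,(E!(s + F) \rightarrow \mathrm{True}\langle Fs \rangle),
\]

\[
\text{but} \quad \Box\,(E![s\text{'s being } F] \rightarrow \mathrm{True}\langle Fs \rangle).
\]

Only the state of affairs meets the necessitation condition, and that is why Armstrong, requiring truthmakers for contingent predications, posits it.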

Similarly, Armstrong argues that we also need totality states of affairs in order to find truthmakers for negative and general truths. All the first-order states of affairs that exist are not enough to guarantee that there are no unicorns, or that all spheres of gold are less than a mile in diameter. So Armstrong posits the existence of a totaling relation, and second-order states of affairs partially composed by it. Again we see truthmaking considerations driving an ontological argument for the existence of entities that we might not ordinarily posit.

Not all truthmaker theorists accept Armstrong’s pro-universals and pro-states of affairs approach to truthmaker theory. Others have defended nominalist positions that reject the existence of universals, and so maintain the thesis that reality is exhausted by the particular. One popular ‘moderate’ form of nominalism is the view that there are tropes, which are individual, particularized property instances. Whereas the realist maintains that there is one unified thing, the universal of being a philosopher that is commonly instantiated by both Plato and Aristotle, the trope nominalist argues that there are two different ‘being a philosopher’ tropes: the trope associated with Plato is a distinct existence from the trope associated with Aristotle. Tropes, at least if thought of as essentially tied to their bearers, can serve as truthmakers for contingent predications. If Socrates’s being a philosopher trope exists, it must be true that Socrates is a philosopher. That trope, whose identity is bound up with Socrates, cannot in any sense be ‘transferred’ to Aristotle or anyone else. So tropes are sufficient necessitators for contingent predications. For those who find tropes ontologically advantageous over universals and states of affairs, this is a compelling argument. (It remains to be seen, however, whether trope theorists can provide truthmakers for negative and general truths, and so whether they must also, in the end, posit the existence of states of affairs.)

Another nominalist-friendly approach to truthmakers comes from David Lewis (2003), who uses counterpart theory to resist the above arguments for states of affairs and tropes. On Lewis’s view, an object exists in only one possible world, but has counterparts in different possible worlds. But there are multiple ways of thinking about objects, and so multiple ways of identifying an object’s counterparts. For example, we can use the name ‘Socrates qua philosopher’ to identify a series of counterparts to Socrates, all of whom are philosophers. Similarly, ‘Socrates qua Greek’ identifies Socrates in a way such that all his counterparts are Greek. Lewis next maintains that objects under counterpart relations can be truthmakers for contingent predications: every possible world in which Socrates qua philosopher exists is a world in which Socrates (or his counterpart) is a philosopher. So Lewis provides necessitating truthmakers for contingent predications without admitting the existence of tropes or states of affairs.

The previous arguments presuppose that contingent predications and/or negative and general truths require truthmakers. If they do, then truthmaker theorists are led to positing the existence of objects such as universals, tropes, states of affairs, and counterparts. A competing perspective, however, derives from a refusal to assume maximalist truthmaking principles, and so avoids such arguments. This alternative approach does not assume from the beginning that contingent predications and/or negative and general truths require truthmakers, and so is not ready to concede that we need an ontology of counterparts, tropes, or states of affairs. Instead of defending the existence of such entities, these truthmaker theorists defend the truth of non-maximalist truthmaker principles (as discussed in section 3). For example, advocates of the strong supervenience principle—that any difference in truth between two possible worlds requires a difference in ontology between the two worlds—believe that negative and general truths do not require truthmakers, and so, for example, Armstrong’s argument for totality states of affairs is unsuccessful. Similarly, advocates of the weak supervenience principle—that any difference in truths between two possible worlds requires either a difference in ontology or a difference in what fundamental relations objects stand in—argue that contingent predications do not require truthmakers, and so the arguments above do not succeed in showing that such posits exist.

5. Truthmaking Principles

Some very general and controversial principles concerning truthmaker theory have been canvassed above, such as maximalism, strong and weak supervenience, and principles concerning whether the truthmaking relation is irreflexive (or merely non-reflexive), asymmetric (or merely non-symmetric), or anti-transitive (or merely non-transitive). Other disputed truthmaking principles concern how truthmakers relate to one another, and what other logical principles apply in the theory of truthmaking.

One such principle in truthmaker theory is the entailment principle: if X is a truthmaker for Y, then X is a truthmaker for anything entailed by Y. For example, suppose that the state of affairs of Socrates’s being a philosopher exists, and is a truthmaker for ‘Socrates is a philosopher’. Because ‘Socrates is a philosopher’ entails ‘Something is a philosopher’, the entailment principle holds that the state of affairs of Socrates’s being a philosopher is also a truthmaker for ‘Something is a philosopher’. Furthermore, any other state of affairs involving the universal being a philosopher will also be a truthmaker for ‘Something is a philosopher’, since the truthmaking relation is not one-one.
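Reading ‘⊨’ as classical entailment (necessary truth preservation), the entailment principle can be stated in the schematic notation used in the earlier formal asides:

\[
(\mathrm{TM}(x, \langle p \rangle) \wedge \langle p \rangle \models \langle q \rangle) \rightarrow \mathrm{TM}(x, \langle q \rangle).
\]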

While seemingly quite plausible, the entailment principle runs into an immediate difficulty: the problem of trivial truthmakers for necessary truths. ‘Socrates is a philosopher’ also entails ‘2 + 2 = 4’, at least when entailment is thought of on the model of necessary truth preservation. Every world where ‘Socrates is a philosopher’ is true is a world where ‘2 + 2 = 4’ is true. But, presumably, the state of affairs of Socrates’s being a philosopher is not a truthmaker for ‘2 + 2 = 4’, though the entailment principle suggests otherwise. In response, truthmaker theorists find ways to restrict the entailment principle, or offer alternate understandings of the kind of entailment in question. Generally speaking, truthmaker theorists attempt to articulate a hyperintensional account of entailment that is more modally discriminating than standard entailment. For example, one might think that some sort of relevance notion of entailment is at stake (for example, Restall 1996); the hope is to develop a conception of entailment that maintains that while ‘Socrates is a philosopher’ entails ‘Something is a philosopher’, it does not entail ‘2 + 2 = 4’.

Another plausible truthmaking principle—and one entailed by the entailment principle—is the conjunction principle. According to this principle, any truthmaker for a conjunction is also a truthmaker for the individual conjuncts. The conjunction principle follows from the entailment principle simply because conjuncts are entailed by the conjunctions they compose. While plausible, the principle has been doubted (for example, Rodriguez-Pereyra 2006c). The principle might seem appealing so long as we think of the truthmaking relation as tracking entailment relations. But recall that the truthmaking relation is not just a necessitation or entailment relation. As an ‘in virtue of’ relation, there is more to being a truthmaker than just being a necessitator. Take, for example, the conjunctive truth ‘Socrates exists and Aristotle exists’. A plausible truthmaker for this conjunction is the mereological sum composed by Socrates and Aristotle. If that sum exists, the conjunction has to be true. But is that mereological sum a truthmaker for the individual conjuncts? Put another way: is ‘Socrates exists’ true in virtue of the existence of the mereological sum Socrates + Aristotle? One might say: no, ‘Socrates exists’ is true in virtue of the existence of Socrates, period. The mereological sum, while a genuine necessitator of the truth of ‘Socrates exists’, is not the entity responsible for the sentence’s truth. The truthmaker for the conjunction, in effect, has ‘extraneous’ parts that are irrelevant to the truth of some of its conjuncts. Since truthmaking is thought of as a hyperintensional relation such that mere necessitation is not sufficient for truthmaking, there is room to doubt that Socrates + Aristotle is a genuine truthmaker for ‘Socrates exists’. Other philosophers who defend the conjunction principle may simply accept the sum as an adequate, albeit non-‘minimal’ truthmaker for the conjunct. (That is, the truthmaker has a proper part that is also a truthmaker.) After all, a truth may have multiple truthmakers on the standard view.

A similar candidate truthmaking principle is the disjunction principle: any truthmaker for a disjunction is a truthmaker for at least one of the disjuncts. For example, if Socrates is a truthmaker for ‘Socrates exists or Cthulhu exists’, then he is a truthmaker either for ‘Socrates exists’ or ‘Cthulhu exists’. The principle seems innocuous enough, until one considers necessary disjunctions of the form ‘p or it is not the case that p’. If one accepts the basic entailment principle, then any object whatsoever is a truthmaker for every claim of the form ‘p or it is not the case that p’. By the disjunction principle, any object whatsoever is therefore a truthmaker of either ‘p’ or ‘it is not the case that p’, depending upon which one is the true disjunct. As a result, every object is a truthmaker for every truth. This unfortunate result has led many to rethink the plausibility of the entailment and disjunction principles. (This problem may well be circumvented if a ‘relevance’ style amendment to the entailment principle is offered.)
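The trivializing argument can be made explicit. Take any object a and any truth ⟨p⟩, and grant the entailment and disjunction principles together with the modest assumption that each object is a truthmaker for the claim that it exists (the following is a sketch in the notation above):

\[
\begin{aligned}
&1.\ \mathrm{TM}(a, \langle a \text{ exists} \rangle). && \text{(assumption)}\\
&2.\ \langle a \text{ exists} \rangle \models \langle p \vee \neg p \rangle. && \text{(classical entailment)}\\
&3.\ \mathrm{TM}(a, \langle p \vee \neg p \rangle). && \text{(1, 2, entailment principle)}\\
&4.\ \mathrm{TM}(a, \langle p \rangle) \vee \mathrm{TM}(a, \langle \neg p \rangle). && \text{(3, disjunction principle)}\\
&5.\ \mathrm{TM}(a, \langle p \rangle). && \text{(4, since } \langle p \rangle \text{ is the true disjunct)}
\end{aligned}
\]

Since a and ⟨p⟩ were arbitrary, every object turns out to be a truthmaker for every truth.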

A similar, but less controversial truthmaking principle about disjunction would be that any object that is a truthmaker for some truth is also a truthmaker for any disjunction that includes that truth as a disjunct. So since Socrates is a truthmaker for ‘Socrates exists’, he is also a truthmaker for ‘Socrates exists or Caesar sank in the Rubicon’. This sort of principle has been at work since the beginning of truthmaker theory; Russell (1985) relied on it when arguing that we need not posit a realm of disjunctive facts to make disjunctive propositions true. Atomic facts on their own suffice to serve as truthmakers for disjunctions.

6. Truthmaking and Truth

This section is a halfway house in the transition away from the internal concerns of truthmaker theory, and toward its external connections with other domains of philosophy, for it is controversial whether or not the theory of truth is a distinct domain from the theory of truthmakers. This section explores the relationship between the theory of truth and the theory of truthmakers, and surveys the possible attitudes one might take about their relationship to one another.

The history of truthmaker theory is inextricably linked with the correspondence theory. The metaphysical ambitions of Russell’s logical atomism are a natural extension of the correspondence theory of truth that he was beginning to accept around the same time period. Nowadays truthmaker theory is sometimes thought of as a modified, contemporary update of correspondence theory. It is no great mystery why. According to correspondence theories of truth, a proposition is true if and only if it stands in the correspondence relation to some worldly entity. (Oftentimes these entities are thought to be facts.) According to truthmaker theory, it seems that propositions are true if and only if they have a truthmaker; that is, a proposition is true just in case it stands in the truthmaking relation to some worldly entity, its truthmaker. If one identifies the truthmaking relation with the correspondence relation, and the set of truthmakers (facts or not) with the set of corresponding objects, then it certainly appears that truthmaker theory provides a correspondence-style theory of truth.

Notice that the above perspective presupposes maximalism. The only possible way of finding a theory of truth (let alone a correspondence theory of truth) inside truthmaker theory is to first commit to the thesis that every truth has a truthmaker. Any truthmaker gap would be a counterexample for anyone trying to explain the nature of truth by way of truthmakers. So the fact that maximalism is an optional commitment of truthmaker theory shows that taking truthmaker theory to be a theory of truth is optional at best.

Even granting maximalism, anyone who seeks to define truth in terms of truthmakers still faces a crucial challenge. The truthmaking relation is itself typically understood in terms of truth. Truthmakers are objects that necessitate the truth of certain propositions; it is the truth of those propositions, not some other feature of them, that truthmakers are said to account for. The accounts of the truthmaking relation canvassed in section 2 all presuppose the notion of truth. The essential dependence account, for example, holds that X is a truthmaker for Y only if Y is essentially such that it is true if X exists. Unless truthmaking can somehow be analyzed without further resort to truth, it cannot, on pain of circularity, be put to work in defining truth. Truth, it seems, is prior to truthmaking. Truthmaker theory presupposes the notion of truth, and so is not fit to serve as a theory of truth itself.

If truthmaker theory presupposes the notion of truth, does it presuppose any particular conception of truth? Again, many might think that truthmaker theory presupposes a correspondence theory of truth, or some similar substantive theory of truth. Several philosophers have also argued that truthmaker theory is incompatible with deflationary theories of truth (for example, Vision 2005). According to deflationary theories, truth is not a substantive property of propositions, in virtue of which they are true. The proposition that snow is white is not true in virtue of its having some property, or standing in a particular relation (for example, correspondence) to some object (or fact). Rather, the deflationist maintains, there is nothing more to the truth of the proposition that snow is white other than snow being white.

Accordingly, some might see deflationary theories of truth as containing an implicit rejection of truthmaker theory. On this line of thought, truthmaker theory is incompatible with deflationary theories, and must presuppose some substantive theory of truth. (If not correspondence, there are coherence theories, pragmatic theories, epistemic theories, and others.) But it is not at all clear that anything in truthmaker theory conflicts with deflationary theories of truth. The latter tend to consist of axioms such as ‘The proposition that snow is white is true if and only if snow is white’ and ‘The proposition that Socrates is a philosopher is true if and only if Socrates is a philosopher’. These biconditionals themselves do not conflict with anything in truthmaker theory (or, typically, with any other theory of truth, either). Deflationists also maintain, in addition, that these axioms exhaust all there is to be said about the nature of truth. (It is this negative claim that substantive theories of truth must reject.) But truthmaker theorists need not be offering the claims of their theories as in any way revealing the nature of truth itself. To say that the truthmaker for the proposition that Socrates is a philosopher is a particular trope, state of affairs, or Socrates under a counterpart relation is not to say anything about the nature of truth itself. Rather, it is a claim about the particular ontological grounds needed for a particular claim about Socrates. In principle, truthmaker theorists and deflationists have nothing that they must disagree about.

7. Truthmaking and the Past

A longstanding metaphysical question concerns the reality of the past. Everyone can agree that entities in the present exist. But what about the objects that do not currently exist but someday will? And what about objects that used to exist but exist no longer? Presentism is the view that reality is exhausted by the present; the only things that exist are entities in the present. Eternalism, by contrast, is the view that there is no time limit on what exists: entities from the past are just as real as presently existing entities, which are just as real as future entities.

The existence of non-present entities is a highly contentious issue in philosophy. What is less controversial is the fact that there are, presently, truths about entities from the past. Presentists and eternalists disagree as to whether Socrates, a past entity, exists. But they agree that ‘Socrates existed’ is true. (What is more contentious is whether or not there are, right now, truths about the future. Parallel problems arise for those who think that there are truths about the future, but do not believe in the existence of purely future entities.) Eternalists face no difficulty in accounting for how such claims can be true. Socrates is the truthmaker for ‘Socrates existed’ in just the way that the Eiffel Tower is the truthmaker for ‘The Eiffel Tower exists’. Socrates and the Eiffel Tower are equally real, from the eternalist’s metaphysical point of view. One is located entirely in the past, and the other is located (but not entirely) in the present. But the present is not metaphysically privileged, so entities from the past and future are freely available to eternalists to serve as truthmakers.

Presentism, by contrast, faces a challenge from truthmaker theory. Given that there are truths about the past, but nothing (fully) from the past that exists, presentists are hard-pressed to account for what, if anything, there is that can make those truths about the past true. Presentists have two available options: first, they can deny that truths about the past have truthmakers; second, they can attempt to show that there are sufficient ontological resources in the present to ground the truths about the past.

Consider first the strategy of denying that truths about the past have truthmakers. This is a form of non-maximalism that limits truthmakers to truths about the present. Recall from section 3 that there are two distinct ways of conceiving of truthmaker gaps, that is, truths without truthmakers. There are deep truthmaker gaps, which are truths that do not depend in any way whatsoever upon what exists. Deep truthmaker gaps violate the principle that truth supervenes upon being: a deep truthmaker gap could be true in one world, but false in another, without there being any other difference between the two worlds. Shallow truthmaker gaps, by contrast, do not have truthmakers, but their truth is nonetheless ontologically accountable (by way, perhaps, of their adherence to one of the supervenience principles).

It appears that presentists cannot take advantage of the supervenience principles that have been defended by truthmaker theorists, and so appear to be forced into the view that if truths about the past are truthmaker gaps, they are deep truthmaker gaps. To see why, consider two presentist universes. These worlds are metaphysically indiscernible at the present moment: all the same things exist, and stand in the same fundamental relations. But they have different histories. In one of the universes, at some point some radioactive atom A decayed within its half-life, while a neighboring atom B did not. In the other universe, B decayed within its half-life, that is, within the predicted time it would take for half of a group of B-like atoms to radioactively decay, while A did not. So in the first universe, ‘A decayed within its half-life’ is true, while it is false in the second universe. But this difference has made no later difference in the histories of these universes, and so now, at present, the two universes are indiscernible. Yet something is true in one of them but not the other. So supervenience has been violated: they are discernible with respect to truth, but indiscernible with respect to being. Hence, presentists cannot defend a non-maximalist perspective on truths about the past without conceding that those truths are deep truthmaker gaps. But deep truthmaker gaps are highly unattractive—they make the truths in question brute, inexplicable truths. Given that eternalists have an easy, straightforward account of truthmakers for truths about the past, presentists face a serious objection. Presentists might respond by claiming that the supervenience principles need to be appropriately modified, such that truth supervenes on not just present being, but past being as well. But this response requires that present truths stand in relations to past entities, which is impossible for presentists who do not believe in past entities. If there are no past entities, there are no past entities for present truths to supervene upon.

The second strategy for presentists is to maintain that there are presently existing truthmakers for truths about the past. On this kind of account, the burden is on the presentist to offer an ontological account of which present entities are available to ground truths about the past. An eclectic menagerie of entities has been posited by presentists over the years to serve as truthmakers. Some have suggested that the world (the present world) has a variety of ‘tensed properties’ (for example, Bigelow 1996). For example, while echidnas make true ‘There are echidnas’, the world’s having the property of there having been dinosaurs makes true ‘There were dinosaurs’. Others have posited a realm of ‘tensed facts’ (for example, Tallant 2009). A tensed fact is a sui generis entity posited solely to provide truthmakers for truths about the past. On this view, the truthmaker for ‘There were dinosaurs’ is just an entity of some sort that we call ‘the fact that there were dinosaurs’. Still others have suggested that, for example, God’s memory of there being dinosaurs is a truthmaker for ‘There were dinosaurs’ (for example, Rhoda 2009).

Anyone can posit an entity to be a truthmaker. Such posits constitute a genuine solution to the truthmaking challenge to presentism only if those entities are the right sorts of entities to be truthmakers, and only if their existence is plausible and can be independently motivated (lest they remain ad hoc posits). After all, the eternalist stands ready with plausible, independently motivated truthmakers. Hence, presentists need not merely offer some account of truthmakers for past truths; they must provide an equally good account.

Tensed facts fail on both counts. Consider Socrates’s last moments, as the hemlock spread through his blood. During those moments, ‘Socrates exists’ was true, and made true by Socrates. A few moments later, ‘Socrates existed’ is true, and made true by a tensed fact that has just sprung into existence. That two such similar truths should be made true by such drastically different entities is fairly disquieting. Socrates seems to be the perfect sort of thing to explain why ‘Socrates exists’ is true. After all, the sentence is about Socrates, a human being, and so a human being seems fit to provide the grounds for its truth. ‘Socrates existed’ is also about a human being, but now the supposed truthmaker is some sort of sui generis entity, certainly not composed, even in part, of a human being. There is also no independent reason to believe in tensed facts: they are put forward as truthmakers for truths about the past by brute force, and it is unclear what they are apart from their stipulated role of being truthmakers for truths about the past.

Tensed property views face a similar objection. ‘Socrates exists’ is true at some moment in virtue of Socrates. ‘Socrates existed’ is true the next moment, but in virtue of the world’s having some tensed property. Why, one might wonder, is ‘Socrates exists’ not true, when it is true, in virtue of the world’s having the tensed property of presently containing Socrates? If such properties are not needed to account for truths about the present, it is unclear why we should posit them to account for truths about the past.

In general, any strategy that uses presently existing entities to make true truths about the past will face a common explanatory problem (Sanson and Caplan 2010). Why are truths about the past true in virtue of things in the present? After all, truths about the past seem to be about the past, and so it is unclear how anything not from the past could adequately explain why they are true. Truthmakers are not mere necessitators; they have to provide the right sort of grounds for their truths. God’s memory of there being Socrates certainly necessitates the truth of ‘Socrates existed’. But it is fair to claim that ‘Socrates existed’ is not true in virtue of God’s having a particular memory. (To deny this is seemingly to accept some form of divine idealism.) So God’s memories are not the right kind of thing to make true ‘Socrates existed’. (To the view’s credit, the existence of God’s memories can at least be motivated independently, for anyone motivated to believe in God. The view is obviously a non-starter for naturalistic metaphysics.)

8. Truthmaking and Modality

Another traditionally problematic domain of truths is that of modal claims: claims involving possibility and necessity, as well as related kinds of claims such as counterfactuals. For example, there are claims about mere possibilities, that is, possibilities that do not obtain, but could have. There are also necessary truths and impossible claims, and truths to the effect that such claims are necessary or impossible. Since such claims appear to concern a realm beyond the actual world, the grounds for their truth have long intrigued metaphysicians.

Though defended independently of his views about truthmaking, David Lewis’s modal realism can be put to work as a theory of truthmakers for some modal truths. According to Lewis, there exists, in addition to the actual world, infinitely many other concrete worlds. These other possible worlds are just as real as the actual world; the actual world bears no special metaphysical significance. While objects exist only in one possible world, they have counterparts in other worlds. An object’s counterparts are the entities in other possible worlds that are highly similar to the object (where similarity is explicated contextually). These counterparts can serve as truthmakers for modal truths concerning the actual world. For example, Socrates could have been a sophist. What makes that true, Lewis could maintain, is one of Socrates’s sophistic counterparts. Because there exists a counterpart of Socrates that is a sophist, ‘Socrates could have been a sophist’ is true in the actual world. At the same time, this view might face a relevance objection: the truth in question is a claim about Socrates, so how could it be made true by some individual existing in a separate, causally isolated possible world?
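Put slightly more formally (the regimentation is ours, not Lewis’s own formulation), the counterpart-theoretic truth condition runs:

‘Possibly, a is F’ is true if and only if ∃w ∃x (x is in w ∧ C(x, a) ∧ x is F)

where ‘C(x, a)’ says that x is a counterpart of a. Any witness x, such as a sophistic counterpart of Socrates, is then available to serve as a truthmaker for the modal claim.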

Armstrong hopes for a more austere account of the truthmakers for truths of mere possibility. To do this, he defends the principle that any truthmaker for a contingent truth is also a truthmaker for the truth that that truth is contingent. So, if some object X is a truthmaker for some contingent proposition that p, then X is a truthmaker for the truth that it is contingent that p. And if it is contingent that p, it follows that it is possible that it is not the case that p. X will therefore provide a truthmaker for the truth of mere possibility (assuming the truth of the right sort of entailment principle). For example, Socrates might not have been a philosopher, even though he was. Suppose the truthmaker for ‘Socrates is a philosopher’ is the state of affairs of Socrates’s being a philosopher. In that case, Socrates’s being a philosopher also makes it true that it is contingent that Socrates is a philosopher. By the entailment principle, Socrates’s being a philosopher is also a truthmaker for the claim that it is possible that Socrates is not a philosopher. In this way, Armstrong defends an account of truthmakers for truths of mere possibility that does not employ resources above and beyond the ordinary truthmakers needed to ground truths solely about the actual world.
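Armstrong’s reasoning can be laid out as a short derivation. The regimentation is ours, with ‘X ⊨ p’ read as ‘X is a truthmaker for p’ and ‘◇’ as ‘possibly’:

1. X ⊨ p        (assumption: X makes the contingent truth p true)
2. X ⊨ it is contingent that p        (Armstrong’s principle)
3. ‘It is contingent that p’ entails ◇not-p        (definition of contingency)
4. X ⊨ ◇not-p        (from 2 and 3, by the entailment principle)

Instantiating p as ‘Socrates is a philosopher’ and X as Socrates’s being a philosopher yields the example just given.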

As for necessary truths (and claims that such truths are necessary), most truthmaker theorists agree that not just any old entity will do, since mere necessitation is not sufficient for truthmaking. If it is true that God exists, and necessarily so, then presumably God is the truthmaker for such claims, not every object whatsoever. What it is that makes mathematical statements true is more contentious. Platonists might defend their view on the basis that numbers, understood Platonically, are necessary for giving an account of truthmakers for mathematical truths (for example, Baron 2013). Others might hope for a non-Platonic basis for mathematical truthmakers. Since it is agreed that truthmakers need to be ‘about’ or relevant to their corresponding truths, non-Platonists face the challenge of explaining how their purported truthmakers ground the truth of claims that at least appear to concern Platonic entities.

There are many more modal cases to keep truthmaker theorists busy. There are truths of natural necessity (for example, that all copper conducts electricity), conceptual truths (for example, that all bachelors are male), and logical truths (for example, that someone is human only if someone is human). All pose unique challenges for truthmaker theory.

9. Objections to Truthmaker Theory

Many philosophers are unmoved by truthmaker theory. A common thread running through the various objections that have been raised is that truthmaker theory lacks the motivation needed to justify its ontological posits. Truthmaker theory traditionally defends the existence of ontologically controversial entities (such as states of affairs or tropes), and such posits, the objectors maintain, should figure in our theories only when they have some indispensable theoretical role to play. And many are convinced that no such role exists.

One line of objection maintains that truthmaker principles that are weaker than maximalism are not worthy of the name, and that the ontological posits required for maximalism are unacceptable. So no form of truthmaker theory is tenable. (See, for example, Dodd 2002 and Merricks 2007.) Such objections rely on conceptions of truthmaker theory that are substantially narrower than what is actually found in the literature; non-maximalists will be unmoved by such supposed refutations. It is up to truthmaker theorists, not their opponents, to decide who counts as a truthmaker theorist.

Another common style of objection is to claim that the intuitions behind truthmaker theory can be saved far more economically by ontologically innocuous principles (for example, Hornsby 2005). As a result, the key but controversial principles supporting truthmaker theory (and the ontological results they produce) are unmotivated, and so should be rejected. The objection runs as follows. As above, a central motivating thought behind truthmaker theory is that truth depends on reality. Maximalists account for this intuition by way of requiring that every truth be made true by some entity, in virtue of which that truth is true. Non-maximalists might look to the strong or weak supervenience principles to explain how what is true is not independent from what exists and how those things are arranged. But other philosophers find these principles to be overreactions to the idea that truth depends on being. For these philosophers, that idea is best cashed out by pointing to the instances of the following schema:

The proposition that p is true because p.

For instance, the proposition that Socrates is a philosopher is true because Socrates is a philosopher. According to the objection, this ‘because principle’ suffices to explain how the truth of the proposition that Socrates is a philosopher depends upon reality. After all, this maneuver seems to capture the asymmetry between truth and reality. For instances of the reverse schema are false:

p because the proposition that p is true.

It is not the case that Socrates is a philosopher because the proposition that Socrates is a philosopher is true. Hence, there is no need to entertain the existence of a state of affairs or trope, and no need to posit general claims about the supervenience of truth on being.

The most natural response for truthmaker theorists to make is that the above ‘because principles’ remain silent on the questions of interest to truthmaker theorists. Advocates of the objection claim that such principles express the appropriate dependency between truth and reality. But there is no mention of reality anywhere in the principles. Consider what is being expressed by the ‘because principles’. They appear to posit a relation (the ‘because’ relation) between two sentences, or perhaps two propositions. The first sentence applies truth to a proposition; the second is just the use of a sentence that expresses that proposition. The ‘because principle’ cannot be expressing a relation involving entities such as facts or states of affairs, since the objector does not believe in the need for an ontology of those kinds of things. In fact, one can endorse a ‘because principle’ without taking any metaphysical or ontological stand whatsoever. The sentence ‘Socrates is a philosopher’ is completely silent on what exists. The sentence itself does not tell you what its ontological commitments are; one must bring to the sentence a theory of ontological commitment or truthmaking in order to determine what its metaphysical implications are. Presumably, advocates of the ‘because principles’ think that the used sentence following ‘because’ somehow involves reality. In so doing, they betray the fact that they are already reading ontological implications into the sentence. They are bringing, in other words, an implicit theory of truthmaking to the table.

Consider again the sorts of suspicious counterfactual conditionals that motivated truthmaker theory in the first place. The counterfactual ‘If I were to go to the quad I would have a tree-like sensory impression’ appears to be true, and true in virtue of the existence of a real, live tree in the quadrangle courtyard. This thought puts pressure on ontologies limited to actual sensory impressions: they have no available truthmakers for such counterfactuals, and so must take such claims to be primitive, brute truths. The objector to truthmaker theory points out that the proposition that if I were to go to the quad I would have a tree-like sensory impression is true because if I were to go to the quad I would have a tree-like sensory impression. That is true, but beside the point. It does not explain the need for something to exist in order for something to be true. We are left wondering why I would have a tree-like sensory impression if I were to go to the quad. All the ‘because principle’ does (at least on the readings available to the objector) is cite a relation that obtains between two sentences or propositions; but truthmaker theorists are after a relation between truth and reality.

10. References and Further Reading

  • Armstrong, D. M. 2004. Truth and Truthmakers. Cambridge: Cambridge University Press.
    • A systematic account of truthmaker theory by one of its most established proponents.
  • Baron, Sam. 2013. A truthmaker indispensability argument. Synthese 190: 2413-2427.
    • Argues for mathematical Platonism on the basis of certain truthmaking considerations.
  • Beebee, Helen and Julian Dodd. 2005. Truthmakers: The Contemporary Debate. Oxford: Clarendon Press.
    • An anthology of various essays both critical and supportive of truthmaker theory.
  • Bigelow, John. 1988. The Reality of Numbers: A Physicalist’s Philosophy of Mathematics. Oxford: Clarendon Press.
    • Defends the strong supervenience principle, offering a non-maximalist approach to truthmaker theory.
  • Bigelow, John. 1996. Presentism and properties. Philosophical Perspectives 10: 35-52.
    • Discusses the relationship between truthmaker theory and presentism; defends the view that truths about the past have truthmakers in the present.
  • Cameron, Ross P. 2008. Truthmakers and ontological commitment: or how to deal with complex objects and mathematical ontology without getting into trouble. Philosophical Studies 140: 1-18.
    • Defends a view that requires truthmakers to be fundamental entities.
  • Caplan, Ben and David Sanson. 2011. Presentism and truthmaking. Philosophy Compass 6: 196-208.
    • Provides an accessible introduction to presentism and truthmaker theory.
  • Dodd, Julian. 2002. Is truth supervenient on being? Proceedings of the Aristotelian Society (New Series) 102: 69-85.
    • Argues that truthmaker theory is unmotivated.
  • Hornsby, Jennifer. 2005. Truth without truthmaking entities. In Truthmakers: The Contemporary Debate, eds. Helen Beebee and Julian Dodd, 33-47. Oxford: Clarendon Press.
    • Argues that the intuitions behind truthmaking can be captured without resort to contentious ontological posits.
  • Lewis, David. 2001. Truthmaking and difference-making. Noûs 35: 602-615.
    • Provides an important critical perspective on maximalist truthmaker theory that relies on defending the weak supervenience principle.
  • Lewis, David. 2003. Things qua truthmakers. In Real Metaphysics: Essays in Honour of D. H. Mellor, eds. Hallvard Lillehammer and Gonzalo Rodriguez-Pereyra, 25-42. London: Routledge.
    • Provides a nominalist-friendly account of truthmaker theory that employs counterpart theory.
  • Lowe, E. J. 2009. An essentialist approach to truth-making. In Truth and Truth-Making, eds. E. J. Lowe and A. Rami, 201-216. Stocksfield: Acumen.
    • Defends the view that the truthmaking relation is a kind of essential dependence.
  • Lowe, E. J. and A. Rami, eds. 2009. Truth and Truth-Making. Stocksfield: Acumen.
    • An anthology of papers on truthmaker theory, including several on this list, that provides an introduction to core issues in truthmaker theory.
  • MacBride, Fraser. 2014. Truthmakers. In The Stanford Encyclopedia of Philosophy (Spring 2014 Edition), ed. Edward N. Zalta.
    • Provides a detailed overview of several main theoretical concerns within truthmaker theory.
  • Mellor, D. H. 2003. Real metaphysics: replies. In Real Metaphysics: Essays in Honour of D. H. Mellor, eds. Hallvard Lillehammer and Gonzalo Rodriguez-Pereyra, 212-238. London: Routledge.
    • Offers an argument that the truthmaking relation does not require necessitation.
  • Merricks, Trenton. 2007. Truth and Ontology. Oxford: Clarendon Press.
    • Offers a sustained and ultimately negative critical evaluation of truthmaker theory.
  • Milne, Peter. 2005. Not every truth has a truthmaker. Analysis 65: 221-224.
    • Raises a potential paradox for maximalism.
  • Molnar, George. 2000. Truthmakers for negative truths. Australasian Journal of Philosophy 78: 72-86.
    • Introduces and discusses the problem of negative truths for truthmaker theory.
  • Mulligan, Kevin, Peter Simons and Barry Smith. 1984. Truth-makers. Philosophy and Phenomenological Research 44: 287-321.
    • Offers a non-maximalist approach to truthmaker theory without resorting to states of affairs that begins by finding truthmakers for atomic facts.
  • Restall, Greg. 1996. Truthmakers, entailment and necessity. Australasian Journal of Philosophy 74: 331-340.
    • Discusses problems (such as that related to the disjunction principle) with treating the truthmaking relation merely as a relation of necessitation.
  • Rhoda, Alan R. 2009. Presentism, truthmakers, and God. Pacific Philosophical Quarterly 90: 41-62.
    • Posits the existence of God’s memories as providing presentist-friendly truthmakers for truths about the past.
  • Rodriguez-Pereyra, Gonzalo. 2006a. Truthmaker Maximalism defended. Analysis 66: 260-264.
    • Defends truthmaker maximalism against Milne’s argument on the grounds that it begs the question.
  • Rodriguez-Pereyra, Gonzalo. 2006b. Truthmakers. Philosophy Compass 1: 186-200.
    • Provides a highly accessible introduction to central issues in truthmaker theory.
  • Rodriguez-Pereyra, Gonzalo. 2006c. Truthmaking, entailment, and the conjunction thesis. Mind (New Series) 115: 957-982.
    • Argues against certain core principles discussed in the truthmaking literature.
  • Russell, Bertrand. 1985. The Philosophy of Logical Atomism. ed. David Pears. La Salle, IL: Open Court.
    • An early work that makes use of truthmaking ideas that gave rise to and inspired future contemporary work on truthmakers.
  • Ryle, Gilbert. 1949. The Concept of Mind. Chicago: University of Chicago Press.
    • Presents Ryle’s behaviorism that becomes a later target of truthmaker theory.
  • Sanson, David and Ben Caplan. 2010. The way things were. Philosophy and Phenomenological Research 81: 24-39.
    • Argues against various defenses of truthmakers for presentism on the ground that such posits are insufficiently explanatory.
  • Sorensen, Roy. 2001. Vagueness and Contradiction. Oxford: Clarendon Press.
    • In the last chapter of this book, Sorensen argues that the truth-teller sentence ‘This very sentence is true’ is a deep truthmaker gap: a truth without a truthmaker that depends in no way upon reality.
  • Tallant, Jonathan. 2009. Presentism and truth-making. Erkenntnis 71: 407-416.
    • Discusses various strategies for presentist truthmaking.
  • Vision, Gerald. 2005. Deflationary truthmaking. European Journal of Philosophy 13: 364-380.
    • Discusses the relationship between truthmaker theory and the deflationary theory of truth, and finds the two projects difficult to combine.

 

Author Information

Jamin Asay
Email: asay@hku.hk
University of Hong Kong
Hong Kong

Pejorative Language

Some words can hurt. Slurs, insults, and swears can be highly offensive and derogatory. Some theorists hold that the derogatory capacity of a pejorative word or phrase is best explained by the content it expresses. In opposition to content theories, deflationism denies that there is any specifically derogatory content expressed by pejoratives.

As noun phrases, ‘insult’ and ‘slur’ refer to symbolic vehicles designed by convention to derogate targeted individuals or groups. When used as verb phrases, ‘insult’ and ‘slur’ refer to actions performed by agents (Anderson and Lepore 2013b). Insulting or slurring someone does not require the use of language. Many different kinds of paralinguistic behavior could be used to insult (verb) or slur (verb) a targeted individual. Slamming a door in an interlocutor’s face is one way to insult them. Another way would be to sneer at them. Arguably, one could slur a Jewish person by performing a “Nazi salute” gesture in their presence. This article focuses on the linguistic meaning(s) that pejorative words encode as symbolic vehicles designed by convention to derogate (or harm) their targets.

Furthermore, it is important to delineate the differences between slurring and insulting. Insulting is a matter of causing someone to be offended, where offense is a subjective psychological state (Hom 2012, p. 397). Slurring, by contrast, does not require offending a target or eliciting any reaction whatsoever. For instance, the word ‘nigger’, used pejoratively at a Ku Klux Klan rally, derogates African Americans even if none are around to be offended by its use.

Table of Contents

  1. Desiderata
    1. Practical Features
    2. Descriptive Features
    3. Embedded Uses
    4. Expressive Autonomy
    5. Appropriation
  2. Content Theories
    1. Pejorative Content as Fregean Coloring
    2. Expressivism
    3. Slurs and Truth-Value Gaps
    4. A Gestural Theory
    5. A Perspectival Theory
    6. Implicature Theories
    7. A Presupposition Theory
    8. Inferentialism
    9. Combinatorial Externalism
  3. A Deflationary Theory
  4. Broader Applications
  5. References and Further Reading

1. Desiderata

This section focuses on five central features of pejoratives: practical features, descriptive features, embedded uses, expressive autonomy, and appropriation. An explanation of these features is among the desiderata for an adequate theory of pejoratives.

a. Practical Features

There is a family of related practical features exhibited by pejoratives. First, pejoratives have the striking power to influence and motivate listeners. Insults and slurs can be used as tools for promoting destructive ways of thinking about their targets. Calling someone a ‘loser’, for example, is a way of soliciting listeners to view them as undesirable, damaged, inferior, and so forth. Racial slurs have the function of propagating racism in a speech community. ‘Nigger’, for example, has the function of normalizing hateful attitudes and harmful discriminatory practices toward various “non-white” groups. Speakers have used the term to derogate African Americans, Black Africans, East Indians, Arabs, and Polynesians (among others). This is not to suggest that the derogation accomplished by means of pejoratives is always highly destructive. In some circumstances, insults like ‘asshole’ can be used to facilitate mild teasing between friends.

Second, some pejoratives tend to make listeners feel sullied. In some cases, merely overhearing a slur is sufficient for making a non-prejudiced listener feel complicit in a speaker’s slurring performance (Camp 2013; Croom 2011). Third, different pejoratives vary in their levels of intensity (Saka 2007, p. 148). For instance, ‘nigger’ is much more derogatory toward Blacks than ‘honky’ is toward Whites. Even different slurs for a particular group can vary in their derogatory intensity (for example, ‘nigger’ is more derogatory than ‘negro’). Further, pejoratives exhibit derogatory variation across time. While ‘negro’ was once used as a neutral classifying term, it is now highly offensive (Hom 2010, p. 166). A successful theory of pejoratives will need to account for their various practical features.

b. Descriptive Features

Gibbard (2003) suggests that the notion of a thick ethical concept, due to Williams (1985), can shed light on the meaning of slurs. In comparison with thin ethical concepts (such as right and wrong, just and unjust), thick ethical concepts contain both evaluative and descriptive content. Paradigm examples include cruel, cowardly, and unchaste. For Williams, terms that express thick ethical concepts not only play a role in prescribing and motivating action; they also purport to describe how things are. To say that a person is cruel, for example, is to say that they bring about suffering, and that they are morally wrong for doing so. (For more on the distinction between thick and thin moral terms, see Metaethics.) According to Gibbard,

[r]acial epithets may sometimes work this way: where the local population stems from different far parts of the world, classification by ancestry can be factual and descriptive, but, alas, the terms people use for this are often denigrating. Nonracists can recognize things people say as truths objectionably couched. (2003, p. 300)

Although Gibbard’s claim that slurring statements express truths that are “objectionably couched” is controversial, it does seem that slurs classify their respective targets. A speaker who calls an Italian person ‘spic’ does not merely say something offensive and derogatory; the speaker simultaneously makes a factual error in classifying his target incorrectly. Similarly, the insult ‘moron’ appears both to ascribe a low level of intelligence to its targets and to evaluate them negatively for it. Additionally, as the following example illustrates, some swear words seem to contain descriptive content:

(1) A:        Tom fucked Jerry for the first time last week.

B:         No, they fucked for the second time last week; the first was two months ago.

Also, consider the following example (Hom 2010, p 170):

(2)        Random fucking is risky behavior.

There appears to be genuine disagreement between A and B in (1), and someone who asserts (2) has surely made a claim capable of being true or false. A successful theory of pejoratives should explain, or explain away, these apparent descriptive, truth-conditional features.

c. Embedded Uses

Potts (2007) observes that most pejoratives appear to exhibit nondisplaceability: the use of a pejorative is derogatory even as an embedded term in a larger construction. Indirect reports and conditional sentences are often vehicles of nondisplaceability; direct quotations, however, are excluded. Consider, for example, that Sue has uttered (3) and another speaker attempts to report on her utterance with (4):

(3)        That asshole Steve is on time today.

(4)        Sue said that that asshole Steve is on time today.

As long as the occurrence of ‘asshole’ is not read as implicitly metalinguistic—with a change in intonation or an accompanying gesture indicating that the speaker wishes to distance herself from any negative feelings toward Steve—listeners will interpret the speaker of (4) as making a disparaging remark about Steve, even if the speaker is merely attempting to report on Sue’s utterance.

Like the insult ‘asshole’, the gendered slur ‘bitch’ appears to scope out of indirect reports. Suppose Eric utters (5) and someone tries to report on his utterance with (6):

(5)        A bitch ran for President of the United States in 2008.

(6)        Eric said that a bitch ran for President of the United States in 2008.

It would be difficult to use (6) to give a neutral (non-sexist) report of Eric’s offensive claim. Unless a metalinguistic reading is available for the occurrence of ‘bitch’, anyone who utters (6) in an attempt to report on Eric’s utterance of (5) risks making an offensive claim about women (Anderson and Lepore 2013a, p 29).

Potts claims that one way in which pejoratives are nondisplaceable is that they always tell us about the current utterance situation (2007, pp. 169-71). Consider

(7)        That bastard Kresge was late for work yesterday (#But he’s no bastard today, because he was on time)

Despite the fact that ‘bastard’ is within the scope of a tense operator in (7), it would be implausible to read the speaker as claiming that she disliked Kresge only in the past, as the defective parenthetical (indicated by the hash sign) illustrates.

However, not all pejoratives behave the same way when embedded. Consider (8)-(11):

(8)        If Steve doesn’t finish his report by the end of the week, he’s fucked (but I suspect he’ll finish on time.)

(9)        Suppose our new employee, Steve, is a bastard (On the other hand, maybe he’ll be nice)

(10)      Steve is not a bastard (I think he’s a good guy).

(11)      Steve used to be a real fucker in law school (but I like him much better now).

A speaker who utters (8)-(11) need not be said to have made a disparaging claim about Steve. This is because the occurrences of ‘fucked’, ‘bastard’, and ‘fucker’ in (8)-(11) appear to be “narrow-scoping” (Hom 2012, p. 387). Thus, at least some embedded uses of pejoratives seem not to commit the speaker to an offensive claim (compare the non-defective parentheticals in these cases with the defective one in (7)).

Slurring words, however, appear to behave differently. As (12) and (13) illustrate, slurs are just as offensive and derogatory when uttered as part of a supposition or embedded in a conditional sentence as when they are used in predicative assertions:

(12)      If the guys standing at the end of my driveway are spics, I’ll tell them to leave (#Fortunately, there is no such thing as a spic, since no one is inferior for being Hispanic)

(13)      Suppose the next job applicant is a nigger. (#Of course that won’t happen, since no one is inferior for being Black.)

Notice the defectiveness of the parentheticals as attempts to cancel the derogatoriness of the preceding sentences.  In general, slurs appear to take wide scope relative to all truth-conditional operators, including negation. Consider the following explicit attempt to reject a racist claim:

(14)      It is not the case that Ben is a kike; he is not Jewish!

(14) fails to undermine the derogatoriness of the slur ‘kike’. Seemingly, the trouble is that it only disavows a derogatory way of thinking about Ben, and so it cannot be used to reject a racist attitude toward Jews in general (Camp 2013). Further, as Saka (2007, p. 122) observes, even Tarskian disquotational sentences containing slurs appear to express hostility:

(15)      “Nietzsche was a kraut” is true iff Nietzsche was a kraut.

A successful theory of pejoratives must explain the behavior of embedded pejorative words and phrases, and more specifically, must account for the fact that slurring words and insulting words appear to behave differently within the scope of truth-conditional and intensional operators. A successful theory must also resolve the apparent tension between the putative descriptive features of slurs, and their behavior under embedding.

d. Expressive Autonomy

The expressive power of a pejorative term is autonomous to the extent that it is independent of the attitudes of particular speakers who use the term. Slurring words appear to exhibit derogatory autonomy: their derogatory capacity is independent of the attitudes of speakers who use them (Hom 2008, p. 426). For instance, a racist who intends to express affection toward Italians by asserting, ‘I love wops; they are my favorite people on Earth’, has still used the slur in a patently offensive manner (Anderson and Lepore 2013a, p. 33). Likewise, a competent speaker who knows that ‘kike’ is a term of abuse for Jews could not stipulate a non-derogatory meaning by uttering, “What’s wrong with saying that kikes are smart? By ‘kike’, I just mean Jews, and Jews are smart, aren’t they?” (Saka 2007, p. 148).

e. Appropriation

Some pejoratives are used systematically to accomplish aims other than those for which they were designed. Appropriation refers to the various systematic ways in which agents repurpose pejorative language. For certain slurs, the target group takes over the term and transforms its meaning so as to lessen or eliminate its derogatory force. This is one variety of appropriation known as linguistic reclamation (Brontsema 2004). The term ‘queer’ is a paradigm case. Although ‘queer’ once derogated those who engaged in sexually abnormal behavior, the term now carries little to no derogatory force as a result of homosexual women and men appropriating it. Non-prejudiced speakers can now use ‘queer’ in various contexts. For instance, phrases such as ‘queer studies program’ and ‘queer theory’ do not derogate homosexuals. In contrast, the slur ‘nigger’ (often marked by the alternative spelling ‘nigga’) has been appropriated more exclusively by the target group, and is often used as a means of expressing camaraderie between group members (Saka 2007, p. 145). Barring a few rare exceptions, targeted speakers can use the term to refer to one another in a non-denigrating way. Appropriated uses of ‘nigger’ are common in comedic performances and satire. The use of ‘nigger’ in a comedy bit designed to mock and criticize racism need not commit the speaker to racist attitudes (Richard 2008, p. 12).

Insults are also subject to appropriation. In some contexts, an insult can be used to express something more jocular or affectionate than hateful, as in the phrase ‘George is the most lovable bastard I know’. A successful theory of these phenomena needs to account for the various appropriated uses of pejoratives.

2. Content Theories

According to content theories, pejorative words are derogatory in virtue of the content they express. This section contains an overview and discussion of several content theories and their merits, followed by standard criticisms.

a. Pejorative Content as Fregean Coloring

For Gottlob Frege, the two aspects of the meaning of a term are its reference and its sense. The reference is what the term denotes, while the sense provides instructions for picking out the reference. (For more on this distinction see Gottlob Frege: Language.) Additionally, Frege posited an expressive realm of meaning separate from sense and reference. For Frege, a word’s Färbung (often translated as ‘coloring’ or ‘shading’) is constituted by the negative or positive psychological states associated with the word, which play no role in determining the truth-value of utterances that include it. The terms ‘dog’ and ‘cur’, for example, share the same sense and reference, but the latter has a negative coloring: something like disgust or contempt for the targeted canine (Frege 1892, p. 240). Likewise for the neutral term ‘English’ and the slur ‘Limey’, which was once applied exclusively to English sailors, but now targets English people generally:

(16)      Mary is English.

(17)      Mary is a Limey.

For Frege, both (16) and (17) are true just in case Mary is English. However, for most speakers, ‘English’ is neutral in coloring, while ‘Limey’ is associated with negative feelings for English people.

Although the Fregean approach accounts for the descriptive features of pejoratives as well as the behavior of slurs when embedded, most contemporary theorists reject it (see, for example, Hom (2008)). For Frege, “coloring and shading are not objective, and must be evoked by each hearer or reader” (1897, p. 155). On his view, a pejorative term’s coloring is not conventional (in any sense of the term); rather, coloring consists only in the subjective (non-conventional) associations speakers have with the term. Dummett (1981) diagnoses the problem with positing an essentially subjective realm of meaning: the meaning of a linguistic sign or symbol cannot be in principle subjective, since it is what speakers convey to listeners. Given the subjective nature of coloring, Fregeans are committed to holding that the derogatory power of slurs is due to subjective associations held by speakers and listeners. As a result, Fregeans will have difficulty accounting for expressive autonomy (Hom 2008, p. 421). For instance, Fregeans will have trouble explaining why ‘nigger’ can be just as derogatory in the mouth of a racist as it is when uttered by a non-racist.

In reply, Fregeans might offer a dispositional theory of coloring. Consider an analogy with a dispositional theory of color, according to which a thing is yellow, for example, if it disposes normal agents in appropriate conditions to have a qualitative experience of yellow. Similarly, Fregeans might hold that a slur S has a negative coloring to the extent that uttering or hearing S disposes speakers and listeners to have derogatory attitudes toward the target. This approach could generalize to other pejorative terms. Consider Frege’s example of ‘cur’. On the revised version of the theory, ‘cur’ has a negative coloring to the extent that competent listeners who hear the term predicated of a dog are disposed to think of the targeted canine as flea-ridden, mangy, and dangerous. Such an account might be promising, but much more would need to be said about how hearing the word disposes listeners to think in derogatory ways. As it stands, the Fregean view does little to explain how pejoratives can be so rhetorically powerful.

b. Expressivism

Another main theory of pejoratives is a descendant of the metaethical view known as expressivism. According to the version of expressivism developed by Ayer (1936), moral and aesthetic statements do not express propositions capable of being true or false; they merely serve to express and endorse the speaker’s own moral sentiments. For Ayer, an assertion of ‘Stealing is wrong’ does not express a truth-evaluable proposition; rather, it merely expresses the speaker’s disapproval of stealing. (For more on expressivism, see Non-Cognitivism in Ethics.)

One might extend Ayer’s expressivism to cover pejoratives. On this view, derogatory statements containing pejoratives do not express propositions capable of being true or false – they merely express a non-cognitive attitude, such as disapproval, of the target group. An expressivist theory of pejoratives is well suited to explain the behavior of slurs under embedding. However, it will have difficulty accounting for their descriptive features. As noted above, a speaker who calls an Italian person ‘spic’ has seemingly made a classificatory error. If slurs lack descriptive content, and merely serve to express non-cognitive attitudes, then it is unclear how they could classify their targets.

Saka (2007) offers an alternative, hybrid expressivist theory of slurs, according to which slurs contain both expressive and descriptive content (see also Kaplan (2004)). Saka denies that there is a single belief or proposition expressed by slurring statements such as ‘Nietzsche was a kraut’. Rather, such statements express an attitude complex, which includes (i) the pure belief that Nietzsche was German, and (ii) a cognitive-affective state toward Germans (Saka 2007, p. 143). Saka’s hybrid theory could plausibly account for the descriptive, truth-conditional features of pejoratives.

However, it is not clear that either the pure expressivist theory of pejoratives or Saka’s hybrid theory can extend to all pejoratives. According to a standard objection to metaethical expressivism, the so-called Frege-Geach problem, one can utter a sentence containing a moral predicate (such as ‘good’, ‘evil’, ‘right’, and ‘wrong’) as the antecedent or consequent of a conditional sentence without making a moral judgment. Expressivists about moral terms are unable to account for the sameness of content in both asserted and non-asserted contexts, so the objection goes. For example, as Geach (1965) observed, the following is a valid argument:

(18)      If tormenting the cat is wrong, then getting your brother to do it is wrong.

(19)      Tormenting the cat is wrong.

(20)      Therefore, getting your brother to do it is wrong.

If, as the metaethical expressivist claims, ‘wrong’ merely expresses a speaker’s approval, then it is a mystery how the term ‘wrong’ could carry the same content in (19) and when embedded in the antecedent of the conditional sentence in (18), given that (19) expresses a moral judgment while (18) does not. Hom (2010) argues that expressivist theories of swears face a similar challenge. Consider the following argument:

(21)      If George fucked up his presentation, he will be fired.

(22)      George fucked up his presentation.

(23)      Therefore, he will be fired.

In order for this argument to be valid, the pejorative term ‘fucked’ must have the same semantic content in (21) and (22), despite the fact that (21) does not express a negative attitude about George, while (22) does. It is difficult to see how the pure expressivist theory could account for this. Although Saka’s hybrid theory has the potential to explain the preservation of content between (21) and (22), his view will have difficulty accounting for the fact that (21) expresses no negative attitude about George.

Additionally, one might worry that the non-cognitive attitudes posited by expressivism are too underspecified to account for derogatory variation (‘kraut’ is less derogatory than ‘nigger’, and so forth). Do all pejoratives express something like ‘contempt’ or ‘hostility’, or do the negative attitudes differ for each term? Saka claims that derogatory variation among slurs is due to the historical circumstances that led to their introduction and that sustain their derogatory power (Saka 2007, p. 148). But the appeal to historical context here is illicit if the derogatoriness of slurs is to be explained by an attitude complex expressed by speakers who use the term. After appealing to external institutions to explain the derogatory features of slurs, it appears that the posited attitude complex has no remaining explanatory work to do.

Finally, expressivists need to do more to explain how the expression of negative attitudes relates to the practical features of slurs. In particular, they need to specify a notion of expression that makes it clear how the expression of hostility (or contempt, and so forth) toward a target could motivate listeners to feel similarly.

c. Slurs and Truth-Value Gaps

Richard (2008) holds that slurs express derogatory attitudes toward their targets, but unlike Saka he claims that slurs lack truth-conditional content. Richard is not a pure expressivist, since he does not take the derogatory content of slurs to be a negative affective state. He denies that slurring speech is false, on the grounds that to apply the term ‘false’ to an utterance is to claim that the speaker made an error that can be corrected by judicious use of negation; and examples like “My neighbor isn’t a chink; she’s Japanese,” which remain derogatory, suggest that the error of a slurring utterance cannot be so corrected. Richard also denies that derogatory statements containing slurs can be true. He acknowledges that predicating a slur of someone entails classifying him or her as a member of a particular group, but he denies that correct classification suffices for truth. For instance, Richard holds that the anti-Semite can correctly classify a person as Jewish by calling them ‘kike’; but when a speaker slurs a Jewish person with ‘kike’, they have not simply classified them as Jewish, nor have they merely expressed an affective state (like hatred or contempt): they have misrepresented the target as being despicable for being Jewish. According to Richard, we cannot endorse the classification as true without also endorsing the representation as accurate. On his view, whatever truth belongs to a classification is inherited from the thought expressed in making it, and the thought expressed by the anti-Semite who uses the slur ‘kike’ is the mistaken thought that Jews are despicable for being Jewish (Richard 2008, p. 24).

Although Richard’s view could potentially make sense of the behavior of slurs under embedding, he does not offer a positive theory of how slurs represent their targets. He might hold that the relevant sort of representation is imagistic. Perhaps hearing a slur puts an unflattering image of the target group in the minds of listeners. In any event, Richard offers no help here. Instead, he is interested only in establishing that there are numerous statements – among them, derogatory statements containing slurs – that have a determinate content, yet are not truth-apt. Others include applications of vague predicates to borderline cases and statements that give rise to liar paradoxes. As it stands, nothing in Richard’s view helps us see how misrepresenting a target by means of calling them a pejorative word has the power to motivate listeners to think derogatory thoughts about them. Thus, Richard’s view leaves the practical features of slurs unexplained.

Further, there are reasons to be doubtful of Richard’s claim that slurs always misrepresent their targets. While this claim seems plausible in the case of racial slurs, it is not obviously true of all slurring words. Consider ‘fascist’, which is a slur for officials in an authoritarian political system. On Richard’s view, to call Mussolini and Hitler fascists is to represent them as contemptible for their political affiliation. Presumably, this would not be to misrepresent them. Richard might agree, and respond that the concept of truth is not what we should use when evaluating a slurring performance as accurate or inaccurate. In that case, Richard still owes a positive account of how such words can accurately represent their targets. Absent these details, it is difficult to evaluate Richard’s claims.

d. A Gestural Theory

Hornsby (2001) offers a theory of the derogatory content of slurs, but her view could be extended to cover other pejoratives:

It is as if someone who used, say, the word ‘nigger’ had made a particular gesture while uttering the word’s neutral counterpart. An aspect of the word’s meaning is to be thought of as if it were communicated by means of this (posited) gesture. The gesture is made, ineludibly, in the course of speaking, and is thus to be explicated…in illocutionary terms. (p. 140)

According to Hornsby, the gestural content of a slur cannot be captured in terms of a proposition or thought. Rather, “the commitments incurred by someone who makes the gesture are commitments to targeted emotional attitudes” (2001, p. 140). Hornsby’s gestural theory has the potential to account for slurs’ expressive autonomy and their offensiveness when embedded. Unfortunately, Hornsby’s central thesis is unclear. On one interpretation, she holds that a speaker who uses a slur actually performs a pejorative gesture in the course of uttering it, although the gesture itself is elliptical. On another interpretation, she is claiming only that using a slur is analogous to performing a derogatory gesture. On either interpretation, there is a lacuna in Hornsby’s theory. If the first interpretation is what Hornsby intends, she owes an account of what the posited gestures are supposed to be. Perhaps she thinks that to call an African American ‘nigger’ is to perform an elliptical “throat slash” in their direction (Hom 2008, p. 418). Or maybe uttering ‘nigger’ amounts to giving targets “the finger”. If this is what Hornsby intends, she owes an account of how it is possible to perform an elided gesture. On the other hand, if Hornsby is merely claiming that derogatory uses of slurs are analogous to pejorative gestures, she needs to specify how tight the analogy is.

e. A Perspectival Theory

Camp (2013) offers a perspectival theory of slurs. On her view, slurs are so rhetorically powerful because they signal allegiance to a perspective, which is an integrated, intuitive way of thinking about the target group (p. 335). For Camp, a speaker who slurs some group G non-defeasibly signals his affiliation with a way of thinking and feeling about Gs as a whole (p. 340). The perspectival account offers an explanation for why slurs produce a feeling of complicity in their hearers, that is, why non-racist listeners tend to feel implicated in a speaker’s slurring performance. Camp describes two kinds of complicity. First, there is cognitive complicity:

The nature of semantic understanding, along with the fact that perspectives are intuitive cognitive structures only partially under conscious control, means that simply hearing a slur activates an associated perspective in the mind of a linguistically and culturally competent hearer. This in turn affects the hearer’s own intuitive patterns of thought: she now thinks about G’s in general, about the specific G (if any) being discussed, and indeed about anyone affiliated with Gs in the slurs’ light, however little she wants to. (p. 343)

Second, there is social complicity: the fact that there exists a word designed by convention to express the speaker’s perspective indicates that the perspective is widespread in the hearer’s culture, and being reminded of this may be painful for non-prejudiced listeners (Camp 2013, p. 344; see also Saka 2007, p. 142). Camp’s theory also has the potential to explain linguistic reclamation. When a slur is taken over by its target group, its pejorative meaning is transformed: the derogatory perspective it once signaled becomes detached, and the term comes to signal allegiance to a neutral (or positive) perspective on the target.

One might take issue with Camp’s claim that complicity is due to speakers signaling the presence of racist attitudes. In general, merely signaling one’s own perspective is insufficient for generating complicity. For instance, one might signal one’s libertarian political perspective by placing a ‘Ron Paul’ bumper sticker on one’s car, yet this behavior is not likely to make observers feel complicit in the expression of a libertarian perspective. Even signaling one’s racist attitudes need not lead others to feel complicit. For instance, one might overtly signal a racist perspective by refusing to sit next to members of a certain race on a bus or by crossing the street whenever a member of a certain race is walking toward them; however, in most cases, this sort of behavior is not likely to activate a derogatory perspective in observers or make observers feel complicit. Thus, the fact that slurs signal a derogatory perspective, if it is a fact, does not explain why slurs tend to make listeners feel complicit in the expression of a derogatory attitude.

f. Implicature Theories

In some cases, what a speaker means is not exhausted by what she literally says. Grice (1989) distinguishes what a speaker literally says with her words from what she implies or suggests with them. Grice posited two kinds of implicature: conversational and conventional. When a speaker communicates something by means of conversational implicature, she violates (or makes as if to violate) a conversational norm, such as the maxim that one should provide as much information as is required given the aim of the conversation. The hearer, working on the assumption that the speaker is being cooperative, attempts to derive the implicatum (that is, what the speaker meant, but did not literally say) based on the words used by the speaker and the conversational norm she has (apparently) violated. Suppose that Professor A has written a letter of recommendation for her philosophy student, X, that reads, “Mr. X’s command of English is excellent, and his attendance at tutorials has been regular” (Grice 1989, p. 33). The reader, recognizing that A does not wish to opt out, will observe that she has apparently violated the maxim of quantity: seemingly, she has not provided enough information about X’s philosophical abilities for the reader to make an assessment. The most reasonable explanation for A’s behavior is that she thinks X is a rather bad student, but is reluctant to explicitly say so, since doing so would entail saying something impolite or violating some other norm.

According to Grice, sometimes the conventional meaning of a term determines what is implied by usage of the word, in addition to determining what is said by it. If a sentence s conventionally implies that Q, then it is possible to find another sentence s*, which is truth-conditionally equivalent to s, yet does not imply that Q. Consider the sentences ‘Alexis is rich and kind’ and ‘Alexis is rich, but kind’. For Grice, these two sentences have the same literal truth-conditions (they are true just in case Alexis is both rich and kind), but only the latter implies that there is a contrast between being rich and being kind (in virtue of the conventional meaning of ‘but’). (For more on Grice’s theory of implicature, see Philosophy of Language.)
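Grice’s test for conventional implicature can be put schematically. The notation here is ours, introduced only to summarize the point: write ‘s ≡ s*’ for ‘s and s* are truth-conditionally equivalent’ and ‘s ⇝ Q’ for ‘s conventionally implies that Q’:

s ≡ s*        s ⇝ Q        not: s* ⇝ Q

In the example above, s is ‘Alexis is rich, but kind’, s* is ‘Alexis is rich and kind’, and Q is the proposition that there is a contrast between being rich and being kind.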

One might apply Grice’s theory of implicature to pejoratives. A theory that understands pejorative content as conversationally implicated content has little chance of succeeding. First, it seems that the pejorative meaning of a slur need not be worked out by the listener in the way that a conversational implicature must be (Saka 2007, p. 136). Second, conversational implicata are supposed to be cancellable, but the derogatory content of a slur is not (Hom 2008, p. 434). According to Grice, for any putative conversational implicature P, it will always be possible to explicitly cancel P by adding something like ‘but not P’ or ‘I do not mean to imply that P’. And it is clear that the derogatory contents of slurs are not explicitly cancellable, as the following defective example illustrates: ‘That house is full of kikes, but I don’t mean to disparage Jewish people’.

Stenner (1981), Whiting (2007, 2013) and Williamson (2009) have argued that the derogatory content of some pejorative words and phrases is best understood in terms of conventional implicature. According to a conventional implicature account of slurs (hereafter, the ‘CI account’), slurs and their neutral counterparts have the same literal meaning, but slurs conventionally imply something negative that their neutral counterparts do not. For instance, ‘Franz was German’ and ‘Franz was a Boche’ are the same at the level of what is said: they have the same literal truth-conditions, that is, they are both true just in case Franz was German. But ‘Franz was a Boche’ conventionally implies the false and derogatory proposition that Franz was cruel and despicable because he was German. One virtue of the CI account is that it explains the descriptive features of pejoratives as well as expressive autonomy.

One objection to the CI account is that it is controversial whether there is any such thing as conventional implicature. Bach (1999) argues that putative cases of conventional implicature are actually part of what is said by an utterance. Bach devised the indirect quotation (IQ) test for conventionally implicated content. Suppose that speaker A has uttered (24), and speaker B has reported on A’s utterance with (25):

(24)      She is wise, but short.

(25)      A said that she is wise and short.

According to Bach, since B has left out important information in her indirect report, namely information about the purported contrast between being wise and short, that information must have been part of what was said, as opposed to what was implied, by A’s utterance. Hom (2008) uses Bach’s IQ test to undermine the CI account of slurs. Suppose A uttered (26) and B reported on A’s utterance with (27):

(26)      Bill is a spic.

(27)      A said that Bill is Hispanic.

According to Hom, since B has misreported A, the derogatory content of the slur must be part of what is said, and so the CI account fails. Notice, however, that Hom’s use of Bach’s test does not show that the derogatoriness of slurs must be part of their literal semantic content, since “what is said” could refer to pragmatically enriched content (see, for example, Bach (1994)). A more serious objection is that even if Griceans are correct in holding that an utterance of ‘Italians are wops’ carries a negative implicature about Italians, more would need to be said in order to explain how implying something negative about Italians could bring about complicity in listeners, and motivate listeners to discriminate against Italians. Consider a paradigm case of conventional implicature: a speaker who asserts ‘P but Q’ commits herself to a contrast between P and Q by virtue of the conventional meaning of ‘but’. However, there is no reason to think that bystanders would automatically feel complicit in the speaker’s claim. Yet listeners often find themselves feeling complicit in the expression of a negative attitude just by overhearing a slur. Even if terms like ‘but’ are capable of triggering a kind of complicity, it is surely not the robust sort of complicity triggered by slurs.

Potts (2007) offers a non-propositional version of the CI account. Potts understands pejorative content in terms of expressive indices, which model a speaker’s negative (or positive) attitudes in a conversational context. He offers the following schema for an expressive index:

<a I b>

where a and b are individuals, and I is an interval that represents a’s positive or negative feelings for b in the conversational context. The narrower the interval, the more intense the feeling. If I = [-1, 1], then a is essentially indifferent toward b. If I = [0.8, 1], then a has a highly positive attitude toward b. If I = [-0.5, 0], then a has negative feelings for b. For Potts, the conventionally implicated content of a pejorative is a function that alters the expressive index of a conversational context. So, for example, if Bill calls George a ‘spic’, the expressive index might shift from <Bill [-1, 1] George>, where Bill is indifferent to George, to <Bill [-0.5, 0] George>, where Bill has negative feelings toward George. Potts’s theory could potentially account for complicity: he might argue that a feeling of complicity results from taking part in a conversation whose expressive index has been lowered due to the use of a slur.
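
To make Potts’s machinery concrete, here is a minimal sketch of an expressive index as described above. It is not drawn from Potts (2007); the names ExpressiveIndex and apply_pejorative are illustrative inventions, and the shift it models is the Bill/George example just given.

```python
# A toy model of a Pottsian expressive index <a I b>: agent a's feelings
# toward target b, represented as an interval I within [-1, 1].
from dataclasses import dataclass

@dataclass
class ExpressiveIndex:
    agent: str      # a: the holder of the attitude
    lower: float    # I = [lower, upper]; the narrower the interval,
    upper: float    # the more intense the feeling
    target: str     # b: the object of the attitude

def apply_pejorative(index: ExpressiveIndex) -> ExpressiveIndex:
    """Model a pejorative's conventionally implicated content as a function
    from contexts to contexts: it shifts the index toward the negative end."""
    return ExpressiveIndex(index.agent, -0.5, 0.0, index.target)

# Bill starts out indifferent toward George: I = [-1, 1].
ctx = ExpressiveIndex("Bill", -1.0, 1.0, "George")
# Bill uses a slur; the context's index shifts to [-0.5, 0].
ctx = apply_pejorative(ctx)
print(ctx)  # ExpressiveIndex(agent='Bill', lower=-0.5, upper=0.0, target='George')
```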

One problem with Potts’s theory is that expressive indices are supposed to measure psychological states of conversation participants, and these can depend on a variety of idiosyncratic features of the participants – their background beliefs, values, and so forth. This makes it difficult to see how the expressive content of pejoratives could be objective and speaker-independent (Hom 2010, p. 180).

Additionally, Potts’s numerical modeling of attitudes seems too coarse-grained to explain the differences between slurs and other pejoratives. One could shift the expressive index of a conversation by using an insult like ‘asshole’ or even by using non-pejorative language. For instance, Bill might lower the expressive index in a conversation about his colleague, George, by pointing out that George is late for work and that he is not dressed appropriately for the office. Bill could also lower the index by uttering, ‘Here comes George!’ in a contemptuous tone of voice. If Potts is correct, the pejorative content of slurs like ‘nigger’, ‘chink’, and ‘spic’ should be understood in terms of expressive indices. However, in that case, Potts will have difficulty explaining the distinctively racist nature of these words, which derogate individuals qua members of particular racial groups.

g. A Presupposition Theory

In the philosophical literature, to presuppose a proposition P is to take P for granted in a way that contrasts with asserting that P (Soames 1989, p. 553). According to one widely accepted theory, presupposed content is best understood in terms of attitudes and background beliefs of speakers. According to Robert Stalnaker’s theory of pragmatic presupposition, each conversation is governed by a conversational record, which includes the common ground, that is, the background assumptions mutually accepted by participants for the purposes of the conversation. The pragmatic presuppositions of an utterance are the requirements it places on sets of common background assumptions built up among conversational participants (Soames 1989, p. 556). Mutually accepted background assumptions are subject to change over the course of a conversation. Lewis (1979) observes that information can be added to (or removed from) the conversational record when necessary in order to forestall presupposition failure and make what is said conversationally acceptable. For instance, if a speaker says, ‘Avery broke the copy machine’ in the course of a conversation, and it was not already mutually understood by the speaker and her listeners that a copy machine was damaged, then it will be assumed for the purposes of the conversation that some salient copy machine was broken. Schlenker (2007) argues that pejorative content is best understood in terms of presupposition. Consider how the presupposition theory covers slurs. Suppose (28) is asked in a conversation:

(28)      Was there a honky on the subway today?

According to Schlenker, if none of the conversation participants dissent, a derogatory proposition (or set of such propositions) – for example, that Caucasians are despicable for being Caucasian, or that the speaker and the audience are willing to treat Caucasians as despicable – is incorporated into the common ground.
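
The accommodation mechanism at work here can be pictured with a minimal sketch; it is not from Stalnaker, Lewis, or Schlenker, and the function name accommodate is an illustrative invention. Absent dissent, a triggered presupposition is quietly added to the common ground:

```python
# A toy model of Lewis-style accommodation on the conversational record.
def accommodate(common_ground: set, presupposition: str,
                dissent: bool = False) -> set:
    """If no participant dissents, a presupposition not already in the
    common ground is quietly added to it."""
    if dissent or presupposition in common_ground:
        return common_ground
    return common_ground | {presupposition}

cg: set = set()
# On Schlenker's view, (28) triggers a derogatory presupposition; if no
# one objects, it enters the common ground.
cg = accommodate(cg, "Caucasians are despicable for being Caucasian")
print(cg)
```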

There are several problems with the presupposition theory of pejoratives. First, as Potts (2007), Hom (2010), and Anderson and Lepore (2013a) observe, presuppositions can be cancelled when sentences that trigger them are embedded in an indirect report, but the derogatoriness of embedded slurs cannot be cancelled. Compare (29) with (30):

(29)      Frank believes that John stopped smoking, but John has never smoked.

(30)      #Eric said that a nigger is in the White House, but Blacks are not inferior for being Black.

Ordinarily, an assertion of ‘John stopped smoking’ presupposes that John previously smoked. When embedded in an indirect report, however, the presupposition can be cancelled, as (29) illustrates. In contrast, (30) appears to convey something highly offensive, which cannot be cancelled by the right conjunct. If the presupposition account were correct, we would expect (30) to be inoffensive and non-derogatory. Also, as Richard (2008) has observed, derogation with slurs needn’t be a rational, cooperative effort between speakers. According to Richard,

[a] pretty good rule of thumb is that someone who is using these words is insulting and being hostile to their targets. But there is a rather large gap between doing that and putting something on the conversational record. If I yell ‘Smuck!’ at someone who cuts me off…[a]m I entitled to assume, if you don’t say ‘He’s not a smuck’, that you assume that the person in question is a smuck, or are hostile towards him? Surely not. (2008, pp. 21-2)

h. Inferentialism

Inferentialism is the thesis that knowing the meaning of a statement is a matter of knowing the conditions under which one is justified in making the statement, and the consequences of accepting it, which include both the inferential powers of the statement and anything that counts as acting on the truth of the statement (Dummett 1981, p. 453). On this view, one knows the meaning of the term ‘valid’, for example, if one knows the criteria for applying ‘valid’ to arguments and one understands the consequences of such an application, namely that an argument’s validity provides a basis for accepting its conclusion so long as one accepts its premises.

Dummett (1981) offers an inferentialist account of slurs (see also Tirrell (1999) and Brandom (2000)). Dummett posits two inference rules for slurs: an introduction rule and an elimination rule. The introduction rule gives sufficient conditions for applying the slur to someone and the elimination rule specifies what one commits oneself to by doing so. Consider the slur ‘boche’, which was once commonly applied to people of German origin:

The condition for applying the term to someone is that he is of German nationality; the consequences of its application are that he is barbarous and more prone to cruelty than other Europeans. We should envisage the connections in both directions as sufficiently tight as to be involved in the very meaning of the word: neither could be severed without altering its meaning (1981, p. 454).

Williamson (2009) formalizes Dummett’s inference rules for ‘boche’ as follows:

Boche introduction:

x is a German

Therefore, x is a boche

Boche elimination:

x is a boche

Therefore, x is cruel
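
Rendered computationally, each rule pairs a condition of application with a commitment that follows automatically. The following minimal sketch is an illustration, not anything from Dummett or Williamson; it treats the two rules as operations on a set of accepted sentences:

```python
# A toy rendering of the 'boche' introduction and elimination rules.
def boche_introduction(accepted: set, x: str) -> set:
    """From 'x is a German', infer 'x is a boche'."""
    if f"{x} is a German" in accepted:
        return accepted | {f"{x} is a boche"}
    return accepted

def boche_elimination(accepted: set, x: str) -> set:
    """From 'x is a boche', infer 'x is cruel'."""
    if f"{x} is a boche" in accepted:
        return accepted | {f"{x} is cruel"}
    return accepted

beliefs = {"Franz is a German"}
beliefs = boche_elimination(boche_introduction(beliefs, "Franz"), "Franz")
print(beliefs)  # includes 'Franz is a boche' and 'Franz is cruel'
```

Chaining the two rules makes vivid how mere use of the term generates a commitment to the cruelty inference.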

Brandom (2000) endorses the inferentialist account of slurs, and notes a sense in which slurs are unsayable for non-prejudiced speakers. On his view, once one uses a term like ‘boche’, one commits oneself to the thought that Germans are cruel because of being German. The only recourse for non-xenophobic speakers, Brandom concludes, is to refuse to employ the concept, since it embodies an inference one does not endorse. The inferentialist theory is well suited to explain the descriptive features of slurs as well as expressive autonomy. The theory also accounts for why a slur is derogatory toward an entire group of individuals, even when a speaker intends only to derogate a single person in a particular context with the term.

However, there are numerous objections to the inferentialist’s treatment of slurs. First, Hornsby (2001) questions whether it is possible to spell out for every slur the consequences to which its users are committed. Further, as Williamson (2009) observes, a speaker might grow up in a community where only the pejorative word for a group is used. For instance, someone may only know Germans as people who are ‘boche’ without knowing the term ‘Germans’. In that case, the speaker could be competent with ‘boche’ (she could know that it is a xenophobic term of abuse) without knowing the word ‘German’. Thus, knowing the ‘boche-introduction’ rule is not necessary for competency with the slur.

i. Combinatorial Externalism

Hom (2008) offers a theory of the semantic content of slurs. According to Hom, the derogatory content of a pejorative term is wholly constituted by its literal meaning. Hom makes use of the semantic externalist framework first developed by Putnam (1975) and Kripke (1980). Semantic externalism holds that the internal states of a particular speaker do not fully determine the meaning of her words; meaning is instead determined, at least in part, by the external social practices of the linguistic community in which a word is used. For more on semantic externalism, see Internalism and Externalism in the Philosophy of Mind and Language. According to Putnam (1975), one can competently use terms like ‘elm’ and ‘beech’ without understanding the complex biological properties of each kind of tree, as long as one stands in the right sort of causal relation to the social institutions that determine their meaning. Similarly, according to Hom, the meaning of a slur is determined by a social institution of racism that is constituted by a racist ideology and a set of harmful discriminatory practices. Hom offers the following formal schema for the semantic content of slurs:

Ought to be subject to p*1 + … + p*n because of being d*1 + … + d*n, all because of being NPC*,

where p*1 + … + p*n are prescriptions for harmful discriminatory treatment derived from a set of racist practices, d*1 + … + d*n are negative properties derived from a racist ideology, and NPC* is the semantic value of the slur’s neutral counterpart (Hom 2008, p. 431). Hom calls his view Combinatorial Externalism (CE). On this view, ‘chink’ expresses the following complex, socially constructed property as part of its literal meaning: ought to be subject to higher college admissions standards, excluded from managerial positions, …, because of being slanty-eyed, devious, …, all because of being Chinese.

According to Hom, one motivation for CE is that it accounts for the common intuition that slurs have empty extensions. A non-racist might say ‘There are no chinks; there are only Chinese.’ Given that no one ought to be subject to discriminatory practices because of their race, CE predicts that all racial slurs have null extensions. Hom’s semantic analysis also accounts for expressive autonomy, since the social institutions that determine the meanings of slurs are independent of the attitudes of particular speakers. Finally, CE accounts for non-derogatory, appropriated uses of slurs by in-group members. For Hom, when a targeted group appropriates a slur, they create a new supporting social institution for the term which imbues the term with a new (non-pejorative) semantic content.
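
As a rough illustration of CE’s schema (a sketch only, not Hom’s own formalism; the class name CEContent and its sample fields are invented for exposition), the semantic content of a slur can be represented as a structured property whose extension is empty because its normative condition is never satisfied:

```python
# A toy rendering of CE: a slur's content combines prescriptions (p*),
# ideological properties (d*), and a neutral counterpart (NPC*).
from dataclasses import dataclass

@dataclass
class CEContent:
    prescriptions: list  # p*1 ... p*n: harmful discriminatory practices
    properties: list     # d*1 ... d*n: negative ideological properties
    npc: str             # NPC*: semantic value of the neutral counterpart

    def extension(self, population) -> set:
        """CE predicts a null extension: since no one ought to be subject
        to discriminatory practices because of their race, the complex
        property is true of no one in any population."""
        return set()

slur_content = CEContent(
    prescriptions=["subject to higher college admissions standards",
                   "excluded from managerial positions"],
    properties=["devious"],
    npc="Chinese",
)
print(slur_content.extension(["anyone at all"]))  # set(): empty, as CE predicts
```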

Hom (2012) extends CE to cover swears. Consider Hom’s analysis of ‘John fucked Mary’:

to say that John fucked Mary is to say (something like) that they each ought to be scorned, ought to go to hell, ought to be treated as less desirable (if female), ought to be treated as more desirable (if male), ought to be treated as damaged (if female), …, for being sinful, unchaste, lustful, impure, … because of having sexual intercourse with each other. (Hom 2012, p. 395)

In speech communities whose ideologies support progressive ideas about sex and reject such prescriptions, the term will come to have a different semantic content, because the above prescriptions will no longer be part of the semantic content of ‘fucked’.

CE faces several objections. First, the behavior of embedded slurs poses a problem for CE (see Richard (2008), Jeshion (2013) and Whiting (2013)). According to Hom (2012), derogation requires the actual predication of a slur to a targeted individual. Notice that a speaker who utters (31) has not literally assigned negative properties to anyone or prescribed negative practices for anyone, yet the utterance appears to be highly offensive and derogatory:

(31)      If there are any spics among the job applicants, do not hire them.

If Hom is correct, non-prejudiced speakers should be able to endorse utterances like (31), since they would be true, given their false antecedents (Richard 2008, p. 17). In response, Hom (2012) suggests that wide-scoping intuitions about pejoratives can be explained by what they conversationally imply: (31) indicates that the speaker thinks that some Hispanic individuals are inferior and ought to be excluded from employment opportunities. However, if this is correct, there should be contexts where the speaker can felicitously follow up her utterance with ‘not that I mean to imply that Hispanic people are inferior or that they should be discriminated against’, since conversational implicata are cancellable. But the use of the slur in (31) seems non-defeasibly racist and derogatory. As Jeshion (2013) observes, following the utterance up with ‘but I don’t mean to imply anything derogatory’ does not get the speaker off the hook.

Finally, Jeshion (2013) objects that CE’s account of the semantic content of slurs has it backwards. On CE, racist ideologies and social practices must antedate slurs; but, Jeshion argues, the use of a slur for a particular group often plays a role in the creation and development of such institutions and practices. If so, a social institution could not be the source of a slur’s pejorative content.

3. A Deflationary Theory

Anderson and Lepore (2013a, 2013b) deny that the characteristic features of slurs are due to the contents they express. Their proposal is simply that “slurs are prohibited words; as such, their uses are offensive to whomever these prohibitions matter” (2013a, p. 21). Anderson and Lepore note that quotation does not always eliminate the offensiveness of pejoratives (see also Saka 1998, p. 122). An utterance of (32), for example, would be offensive despite the quotational use of the slur it contains:

(32)      ‘Nigger’ is a term for blacks.

Anderson and Lepore argue that content theorists will have difficulty accounting for the widespread practice of avoiding the word ‘nigger’ completely (using the locution ‘the N-word’ in place of quoting the term).

Deflationism accounts for the behavior of embedded slurs. However, it faces several objections. First, the theory offers little by way of an explanation of the practical features of slurs (Croom 2011). Pointing out that slurs are prohibited words does not help us understand how they are such effective vehicles for spreading prejudice. Additionally, Whiting (2013) observes that it is possible for there to be slurs in the absence of taboos or social prohibitions. In a society in which the vast majority of speakers are prejudiced toward a particular racial group, and the targeted group members have internalized racist attitudes, it may be that no one objects to the use of slurs or finds them offensive; yet slurs might still be derogatory. Thus, social prohibitions cannot be all there is to the derogatoriness of slurs.

Finally, by defining slurs as merely prohibited words, Anderson and Lepore rule out a priori the possibility of slurs that are appropriate and morally permissible. One example might be ‘fascist’, which targets individuals based on political affiliation; using this slur to denigrate an authoritarian dictator need not (and perhaps should not) be prohibited.

4. Broader Applications

Since the 1980s, philosophical work on pejoratives has focused primarily on two questions: what (if anything) do pejoratives mean, and how is derogation by means of pejoratives accomplished? Researchers working on these questions would do well to familiarize themselves with the empirical literature on pejoratives (for empirical studies on the behavioral and psychological effects of overhearing slurs, see Kirkland and others 1987, Carnaghi and Maass 2007, and Gadon and Johnson 2009).

Work on slurs in the philosophy of language and linguistics has implications for debates in other disciplines. For instance, in answering the question of whether there should be legal restrictions on hate speech (which may involve the use of slurs), we will need to get clear on how hate speech harms its targets (Hornsby 2001). Legal theorists interested in these issues will want to pay careful attention to the literature discussed in this article. (For a discussion of whether laws against hate speech are justified, see Waldron 2012.)

5. References and Further Reading

  • Anderson, L. and E. Lepore 2013a, “Slurring Words,” Noûs 47.1, 25-48.
    • [Offers a deflationary theory of slurs]
  • Anderson, L. and E. Lepore 2013b, “What Did You Call Me? Slurs as Prohibited Words: Setting Things Up,” Analytic Philosophy 54.3, 350-363.
    • [Responds to objections to the deflationary theory defended in their 2013a]
  • Ayer, A. J. 1936, Language, Truth and Logic, Dover, New York.
    • [Defends an expressivist theory of moral and aesthetic terms]
  • Bach, K. 1994, “Conversational Impliciture,” Mind and Language 9.2, 124-162.
    •  [Argues that Grice’s distinction between what a speaker literally says and what she implies is not exhaustive, and posits a third, intermediate category]
  • Bach, K. 1999, “The Myth of Conventional Implicature,” Linguistics and Philosophy 22.4, 327-366.
    • [Argues that what is commonly held to be conventionally implicated content is actually part of what is said]
  • Brandom, R. 2000, Articulating Reasons: An Introduction to Inferentialism, Harvard University Press, Cambridge, MA.
    • [Defends an inferentialist theory of slurs]
  • Brontsema, R. 2004, “A Queer Revolution: Reconceptualizing the Debate over Linguistic Reclamation,” Colorado Research in Linguistics 17.1, 1-17.
    • [Gives an overview of the notion of linguistic appropriation as it applies to slurs]
  • Camp, E. 2013, “Slurring Perspectives,” Analytic Philosophy 54.3, 330-349.
    • [Defends a perspectival theory of slurs]
  • Carnaghi, A. and A. Maass 2007, “In-Group and Out-Group Perspectives in the Use of Derogatory Group Labels: Gay versus Fag,” Journal of Language and Social Psychology 26.2, 142-156.
    • [A study that measures the effects of slurs on targeted individuals compared with non-targets]
  • Croom, A.M. 2011, “Slurs,” Language Sciences 33, 343-358.
    • [Offers a stereotype theory of slurs]
  • Dummett, M. 1981, Frege: Philosophy of Language 2nd ed., Harvard University Press, Cambridge, MA.
    • [Defends an inferentialist theory of slurs]
  • Frege, G. 1892, “On Sinn and Bedeutung,” in M. Beaney (ed.) 1997, The Frege Reader, Blackwell, Malden, MA, 151-171.
    • [A classic paper in which Frege defends his theory of sense and reference]
  • Frege, G. 1897, “Logic,” in M. Beaney (ed.) The Frege Reader, Blackwell, Malden, MA, 227-250.
    • [Frege explicates his notion of “coloring”]
  • Gadon, O. and C. Johnson 2009, “The Effect of a Derogatory Professional Label: Evaluations of a “Shrink”,” Journal of Applied Social Psychology 39.3, 634-55.
    • [Empirical study on the effects of overhearing a psychologist referred to as a ‘shrink’]
  • Geach, P. 1965, “Assertion,” Philosophical Review 74, 449-465.
    • [Poses the famous Frege-Geach problem]
  • Gibbard, A. 2003, “Reasons Thick and Thin: A Possibility Proof,” Journal of Philosophy 100.6, 288-304.
    • [Argues that slurs are like thick evaluative terms in that they express both descriptive and evaluative content]
  • Grice, P. 1989, Studies in the Way of Words, Harvard University Press, Cambridge, MA.
    • [A collection of papers on various topics in the philosophy of language]
  • Hom, C. 2008, “The Semantics of Racial Epithets,” Journal of Philosophy 105, 416-440.
    • [Defends a truth-conditional, semantic theory of slurs]
  • Hom, C. 2010, “Pejoratives,” Philosophy Compass 5.2, 164-185.
    • [Gives a general overview of various theories of pejoratives]
  • Hom, C. 2012, “A Puzzle about Pejoratives,” Philosophical Studies 159.3, 383-405.
    • [Extends the semantic theory of slurs developed in his (2008) to swear words]
  • Hornsby, J. 2001, “Meaning and Uselessness: How to Think About Derogatory Words,” Midwest Studies in Philosophy 25, 128-141.
    • [Defends a gestural theory of slurs]
  • Jeshion, R. 2013, “Slurs and Stereotypes,” Analytic Philosophy 54.3, 314-329.
    • [Raises objections to the theories of slurs developed by Hom (2008) and Camp (2013)]
  • Kaplan, D. 2004, “The Meaning of Ouch and Oops” (unpublished transcription of the Howison Lecture in Philosophy at U.C. Berkeley).
    • [Defends a broadly expressivist theory of pejoratives]
  • Kirkland, S., J. Greenberg, and T. Pyszczynski 1987, “Further Evidence of the Deleterious Effects of Overheard Ethnic Labels: Derogation Beyond the Target,” Personality and Social Psychology Bulletin 13.2, 216-227.
    • [Empirical study on how overhearing the slur ‘nigger’ affects evaluations of those targeted by the slur]
  • Kripke, S. 1980, Naming and Necessity, Harvard University Press, Cambridge, MA.
    • [Gives a defense of semantic externalism]
  • Lewis, D. 1979, “Scorekeeping in a Language Game,” Journal of Philosophical Logic 8, 339-359.
    • [Offers a theory of conversational kinematics]
  • Neale, S. 1999, “Colouring and Composition,” in R. Stainton and K. Murasugi (eds.) Philosophy and Linguistics, Westview Press, Boulder, CO, 35-82.
    • [Explicates Frege’s notion of coloring]
  • Potts, C. 2007, “The Expressive Dimension,” Theoretical Linguistics 33.2, 255-268.
    • [Offers a non-propositional version of the conventional implicature theory of slurs]
  • Putnam, H. 1975, “The Meaning of ‘Meaning’,” in Mind, Language and Reality: Philosophical Papers Volume 2, Cambridge University Press, Cambridge, 215-271.
    • [Offers a defense of semantic externalism]
  • Richard, M. 2008, When Truth Gives Out, Harvard University Press, Cambridge, MA.
    • [Argues that utterances containing derogatory uses of slurs are not truth-apt]
  • Saka, P. 1998, “Quotation and the Use-Mention Distinction,” Mind 107, 113-136.
    • [Notes that quotation does not entirely eliminate the offensiveness of swear words]
  • Saka, P. 2007, How to Think About Meaning, Springer, Berlin.
    • [Defends a hybrid expressivist theory of slurs]
  • Schlenker, P. 2007, “Expressive Presuppositions,” Theoretical Linguistics 33.2, 237-245.
    • [Defends a presupposition theory of pejoratives]
  • Soames, S. 1989, “Presupposition,” in D. Gabbay and F. Guenthner (eds.) Handbook of Philosophical Logic, Kluwer, Dordrecht, 553-616.
    • [Explicates the notion of linguistic presupposition]
  • Stenner, A.J. 1981, “A Note on Logical Truth and Non-Sexist Semantics,” in M. Vetterling-Braggin (ed.) Sexist Language: A Modern Philosophical Analysis, Littlefield, Adams and Co, New York, 299-306.
    • [Defends a conventional implicature theory of slurs]
  • Tirrell, L. 1999, “Derogatory Terms,” in C. Hendriks and K. Oliver (eds.) Language and Liberation: Feminism, Philosophy and Language, SUNY Press, Albany, NY, 41-79.
    • [Defends an inferentialist theory of slurs]
  • Waldron, J. 2012, The Harm in Hate Speech, Harvard University Press, Cambridge, MA.
    • [Makes the case for legal restrictions on hate speech]
  • Whiting, D. 2007, “Inferentialism, Representationalism and Derogatory Words,” International Journal of Philosophical Studies 15.2, 191-205.
    • [Offers a conventional implicature theory of slurs]
  • Whiting, D. 2013, “It’s Not What You Said, It’s the Way You Said It: Slurs and Conventional Implicature,” Analytic Philosophy 54.3, 364-377.
    • [Responds to objections to the conventional implicature theory by Hom (2008) and others]
  • Williams, B. 1985, Ethics and the Limits of Philosophy, Harvard University Press, Cambridge, MA.
    • [Explicates the notion of a thick evaluative term]
  • Williamson, T. 2009, “Reference, Inference and the Semantics of Pejoratives,” in J. Almog and P. Leonardi (eds.) The Philosophy of David Kaplan, Oxford University Press, Oxford, 137-158.
    • [Raises objections to the inferentialist theory of slurs; defends a conventional implicature theory.]

 

Author Information

Ralph DiFranco
Email: ralph.difranco@uconn.edu
University of Connecticut
U. S. A.

Knowledge Norms

Epistemology has seen a surge of interest in the idea that knowledge provides a normative constraint or rule governing certain actions or mental states. Such interest is generated in part by noticing that fundamentally epistemic notions, such as belief, evidence, and justification, figure prominently not only in theorizing about knowledge, but also in our everyday evaluations of each other’s actions, reasoning, and doxastic commitments. The three most prominent proposals to emerge from the epistemology literature have been that knowledge is the norm of assertion, the norm of action, and the norm of belief, though we shall consider other proposals as well.

‘Norm’ here is often, but not always, understood as a rule which is intimately related to the action/mental state type in question, such that this relationship is a constitutive one: the action or mental state is constituted (in part) by its relationship to the rule. Typically such views argue for a norm of permission such that knowledge is required, as a necessary condition, for permissibly acting or being in the relevant mental state: in schematic form, one must: X only if one knows a relevantly specified proposition. Some philosophers also endorse a sufficiency condition, so that knowledge is necessary and sufficient for (epistemic) permission to X: one must: X if and only if one knows a relevantly specified proposition. Such views put knowledge to work in elucidating normative concepts, practical rationality, and conceptual priorities in epistemology, mind, and decision theory. This article outlines the growing literature on these topics.

Table of Contents

  1. Knowledge Norm of Assertion
    1. Problem Sentences: Moore’s Paradox
    2. Conversational Patterns
    3. Rivals and Objections
    4. Sufficiency
  2. Knowledge Norm of Action
    1. Knowledge and Practical Reasoning
    2. Knowledge and Reasons
    3. Sufficiency and Pragmatic Encroachment
  3. Knowledge Norm of Belief
    1. The Belief-Assertion Parallel
    2. Knowledge Disagreement Norm
  4. References and Further Reading

1. Knowledge Norm of Assertion

Assertion is the speech act we use to make claims about the way things are: in English, asserting is the default speech act for uttering a sentence in the indicative or declarative mood, such as when one tells someone, “John is in his office” (for an overview of assertion, including ways of characterizing it that do not make essential appeal to epistemic norms, see MacFarlane 2011). The recent literature on the norms of assertion has concentrated on whether there is a rule governing the speech act of assertion which specifies a necessary condition for making the speech act permissible on that occasion; section 1.d below briefly discusses the idea of a sufficient condition for permissible assertion. The view has its roots in the work of philosophers who argued that when one asserts, claims, or declares that p (which are to be distinguished from simply uttering “p”) one somehow thereby represents oneself as knowing that p, even though p itself may not refer to the speaker’s knowledge at all (see Moore 1962: 277; Moore 1993: 211; Black 1952; and Unger 1975: 251ff.). The idea that when one asserts that p one represents oneself as knowing that p—call this position ‘RK’—enabled an explanation of certain problem sentences and conversational patterns.

a. Problem Sentences: Moore’s Paradox

G.E. Moore noted the paradoxical nature of asserted conjunctions where one affirms a proposition but also denies that one believes it or that one knows it. Conjunctions such as (1) and (2), he said, sound “absurd” (Moore 1942: 542-43; 1962: 277):

(1)   Dogs bark, but I don’t believe that they do.

(2)   Dogs bark, but I don’t know that they do.

The order of the conjuncts does not matter to their absurdity, as (3) and (4) make clear (Moore 1993: 207):

(3)   I don’t believe that dogs bark, but they do.

(4)   I don’t know that [whether] dogs bark, but they do.

What captured Moore’s interest about such asserted sentences is that they could be true and yet it seems incoherent to state that truth: “It is a paradox that it should be perfectly absurd to utter assertively words of which the meaning is something which may quite well be true—is not a contradiction” (Moore 1993: 209). Moore’s own diagnosis of their absurdity appeals to something like RK, namely that “by asserting p positively, you imply, though you don’t assert, that you know that p” (1962: 277). So in asserting one of (1)–(4), one asserts, in one conjunct, a proposition and thereby also represents oneself as knowing it; but one also denies, in the other conjunct, that one knows it (or believes it, which is entailed by knowing it), thus generating a contradiction between what one claims (that one doesn’t know) and what one represents as being the case (that one does know).

b. Conversational Patterns

Peter Unger (1975) pointed to certain conversational patterns which seem to support RK, because RK well-explains them. One of these concerns the common use of the question “How do you know?” in response to someone’s assertion: such a question may be used to elicit clarification about why one is flat-out asserting, but importantly, it also may be used to challenge someone’s assertion. What is more, it is rare that this question is condemned as out of line in response to an assertion. Such questions are appropriate even though an asserter has said nothing at all about knowing what she’s asserted, and an asserter cannot acceptably answer such questions by claiming that she never claimed that she knew it. And an asserter who concedes with “I don’t know,” or modifies her original assertion by moving to “I believe p” or “I think p” or “Probably p” will normally be taken to be retreating from her original outright assertion that p: she has instead replaced her claim with a weaker one. RK explains all these points (Unger 1975: 263-64; cf. also Slote 1979).

Timothy Williamson (1996; 2000, Ch. 11) provides a fuller defense of the view, and points to further conversational patterns explained by RK. Williamson’s account replaces RK with the Knowledge Norm of Assertion, sometimes called the ‘Knowledge Account of Assertion’, which says that

(KNA) One must: assert that p only if one knows that p

KNA can be thought of “as giving the condition on which a speaker has the authority to make an assertion. Thus asserting p without knowing p is doing something without having the authority to do it, like giving someone a command without having the authority to do so” (2000: 257). Williamson thinks of KNA as constitutive of the speech act of assertion, conceived of by analogy with the rules of a game: just as the rules of chess are essential to it in that they constitute what the game is and what it is to play chess, so Williamson thinks of the speech act of assertion as constituted by its relation to KNA. In this sense, mastering the speech act of assertion involves implicitly grasping this norm and the practice which it governs (2000: 241); indeed, the speech act plausibly functions to express one’s knowledge (Turri 2011). If this is correct, KNA would explain RK, a descriptive fact about what speakers who assert represent about themselves: for it is in virtue of engaging in a practice whose norm we all implicitly grasp that one would represent oneself as conforming to that norm. (For helpful discussion of Williamson’s approach to constitutivity, see Turri 2014a; for an account on which the KNA is derived from a more fundamental norm of intellectual flourishing, see Brogaard 2014.)

Williamson notes that in addition to the “How do you know?” question which can be used to implicitly challenge one’s authority to assert, the stronger challenge question “Do you know that?” explicitly challenges one’s authority, and the dismissal “You don’t know that!” rejects one’s authority. KNA explains this range of aggressiveness (Williamson 2000: 253; 2009: 344). Turri (2010) further points out that there is an asymmetry between the acceptability of certain kinds of prompts to assertion:

(5) Do you know whether p?

(6) Is p?

are typically interchangeable as prompts to an assertion, and the flat-out assertion “p” serves to answer each equally well; but certain stronger questions, such as “Are you certain that p?”, typically cannot be used, as can (5) and (6), as an initial prompt for assertion, whereas weaker prompts such as “Do you think that p?” or “Do you have any idea whether p?” seem to request something weaker than a flat-out assertion (perhaps a hedged assertion or a prediction), and are thereby not interchangeable with (5) and (6). Related to this data is the fact that a standard response when one feels not well-positioned to assert, in reply to a prompt like (6), is to answer “I don’t know.” The appropriateness of the “I don’t know” response is telling given that the query was about p, not about whether one knows that p. Thus KNA seems confirmed by these data.

In addition to prompts and challenges, and our responses to them, there is data from lottery assertions (Williamson 2000: 246-252, Hawthorne 2004: 21-23). Many people find it somehow inappropriate for people to flat-out assert of a particular lottery ticket (before the draw has been announced) that it will lose, even though given a large enough lottery its losing is overwhelmingly probable. Many also find it plausible that one does not know that such a ticket will lose. KNA proponents aim to explain the first point in terms of the second: the reason it is inappropriate for one to make such lottery assertions, absent special knowledge about the lottery being rigged, is that one does not know that the ticket will lose.

Benton (2011) and Blaauw (2012) also point to peculiar facts about the parenthetical positioning of “I know” in assertive sentences, which seem well-explained by KNA. Notice that “I believe” (or “I think,” or “probably”) can occur in assertive constructions to hedge one’s assertion, and syntactically they can occur in prefaced or utterance-initial position (7), parenthetical position (8), or utterance-final parenthetical position (9), with each sounding as good as the other:

            (7) I believe that John is in his office.

            (8) John is, I believe, in his office.

            (9) John is in his office, I believe.

Yet with “I know,” (10) sounds perfectly in order, but (11) and (12), while coherent, can seem oddly redundant:

            (10) I know that John is in his office.

            (11) ?  John is, I know, in his office.

            (12) ?  John is in his office, I know.

KNA is able to explain why: if flat-out assertions express one’s knowledge, or represent one as knowing, it will be expressively redundant to add to an assertion that one knows (where (10) is not redundant because it seems to be the amplified claim that one knows that John’s in his office). However, this explanatory argument from KNA for such data has been critiqued as incomplete or inadequate (see McKinnon & Turri 2013, McGlynn 2014).

Finally, knowledge seems to be connected to assertion in parallel with its connection to showing someone how to do something: in the same way that knowing that p seems to be required for permissibly asserting that p, knowing how to X seems to be required for permissibly showing someone how to X. In this sense, knowing is the pedagogical norm of showing, for structurally parallel considerations to the linguistic data discussed above (Moorean conjunctions, challenges, prompts, and so forth) are available for pedagogical contexts (Buckwalter & Turri 2014).

In short, KNA claims to offer the best explanation of these data from Moorean conjunctions, challenges, prompts, responses to prompts, lottery assertions, parenthetical positioning, and pedagogical norms.

c. Rivals and Objections

Though KNA has been widely defended, its opponents offer substantial criticism and suggest rival accounts requiring other epistemic or alethic conditions: most rival norms of assertion appeal to justified or reasonable or well-supported belief, to its being reasonable or credible for one to believe, or to the truth of what is asserted.

Williamson (2000: 244-249) considered a Truth Norm to be the most significant rival to KNA. Because knowledge is factive, the KNA requires assertions to be true; the Truth Norm, by contrast, says only that one must assert that p only if p is true (a further norm requiring evidence for p would be derivable from the requirement of truth), and is thus less demanding than the KNA. Weiner (2005) argues for a Truth Norm by noting that cases of prediction and retrodiction seem to be counterexamples to KNA; that is, they are assertions which seem intuitively acceptable even though the propositions affirmed are not known. Weiner further argues that the Truth Norm can rely on Gricean pragmatic resources to explain the data from lotteries and Moorean conjunctions, for the Truth Norm on its own does not predict the inappropriateness of such assertions. While Weiner (2005) and Whiting (2013) argue for truth as necessary and sufficient for the epistemic propriety of assertion, Littlejohn (2012) and Turri (2013b) argue (compatibly with the KNA) that truth is necessary for epistemically proper assertion; Littlejohn’s defense of factivity focuses on the requirement that assertions about what a subject ought to do would have to satisfy the truth requirement to be properly asserted, whereas Turri’s draws on experimental investigation of people’s judgments of false assertions. For criticisms of Weiner’s Truth Norm, see Pelling (2011) and Benton (2012).

A related norm is that proposed by Maitra and Weatherson (2010): they argue that a certain class of statements, namely those concerned with what is “the thing for one to do,” form an important exception to the KNA. Their rival norm, the Action Rule, says “Assert that p only if acting as if p is true is the thing for you to do” (2010: 114). They argue that their Action Rule collapses into the Truth Norm for propositions concerning what one should do (“if an agent should do X, then that agent is in a position to say that they should do X,” 2010: 100), though it does not do so for other propositions.

Douven (2006) argues for a Rational Credibility Norm, and Lackey (2007) argues for a Reasonable-to-Believe Norm; for related norms, see also McKinnon’s (2013) Supportive Reasons Norm. These views roughly hold that to be epistemically acceptable, an assertion that p need not be known, but must be credible or reasonable for the speaker to believe, even if it is not actually believed by the speaker. Douven argues that his norm is as adequate as the KNA in explaining most of the linguistic data canvassed above, but that his Rational Credibility Norm is a priori simpler than, and so preferable to, the KNA (cf. Douven 2009, which updates some of his arguments). Lackey’s influential discussion argues for this view by suggesting that cases of selfless assertion are intuitively acceptable. Selfless assertions involve cases in which an asserter possesses knowledge-worthy evidence, appreciates the strength of that evidence, yet for non-epistemic reasons fails to believe that p (and asserts that p anyway). Thus on Lackey’s particular account, the speaker need not even believe what is asserted (for criticism of Lackey’s view, see Turri 2014b). Because these norms sanction lottery assertions and Moorean assertions, Douven and Lackey both attempt to explain away the impropriety attending such assertions.

Kvanvig (2009, 2011a) argues for a Justified Belief Norm; somewhat related is Neta’s (2009) Justified-Belief-that-One-Knows Norm. These norms require, for permissible assertion, a justified belief of some kind: either that the asserter justifiably believe what is asserted, perhaps even with knowledge-level justification, or that the asserter hold the higher-order justified belief that she knows what she’s asserted (the latter of which will, on many views, itself require that she justifiably believe the asserted proposition). These norms do not actually require an assertion to be true, and thus their proponents have to explain the apparent defect in a false assertion, even if one is largely absolved from blame given that one was justified in believing what was asserted (for discussion see Williamson 2009: 345). Similarly, Coffman (2014) argues for a Would-Be Knowledge Norm, which is stronger than a justified belief norm in that it requires not only knowledge-level justification, but also that the belief not be Gettiered. This norm also, however, does not require truth, for one might have a false belief which (given one’s knowledge-level justification) would be knowledge if only it were true.

Another rival approach is a context-sensitive norm of assertion, which accepts that an epistemic norm governs assertion but claims that its content can vary according to context. There are different ways of formulating such an account. On Gerken’s (2012) view, the epistemic norm of a central type of assertion is an internalist norm of “Discursive Justification,” according to which an asserter must be able to articulate reasons for her belief in the proposition asserted. This approach is context-sensitive in that what counts as adequate reason-giving will vary according to context (for other norms of assertion that impose primarily ‘downstream’ requirements on the speaker, see also Rescorla 2009 and MacFarlane 2009: 90ff.).

Goldberg (2009, 2011) initially applied the KNA to issues in the epistemology of testimony. More recently, Goldberg (2015) formulates and defends a context-sensitive norm on which knowledge is often required for permissible assertion—perhaps knowledge is even the default value—but in other contexts justification or reasonable belief might be enough, and in still other contexts, perhaps something even stronger than knowledge is required (certainty, perhaps). Goldberg draws on Grice’s (1989) maxim of quality, according to which assertions are governed by the following supermaxim and its two submaxims:

Quality: Try to make your contribution one that is true.

    1. Do not say that which you believe to be false.
    2. Do not say that for which you lack adequate evidence. (1989, 27)

Grice’s quality maxim, invoking as it does the notion of ‘adequate’ evidence, would seem to be just such a context-sensitive norm (though see Benton 2014a for reasons to doubt this). Goldberg’s hypothesis is that there is a Mutually-Manifest Epistemic Norm of Assertion (MMENA), which is comprised of a norm (ENA) and a context-sensitive mechanism (RMBS) that fixes the epistemic condition required by ENA:

ENA   S must: assert p, only if S satisfies epistemic condition E with respect to p, i.e., only if S has the relevant warranting authority regarding p.

RMBS  When it comes to a particular assertion that p, the relevant warranting authority regarding p depends in part on what it would be reasonable for all parties to believe is mutually believed among them (regarding such things as the participants’ interests and informational needs, and the prospects for high-quality information in the domain in question) (Goldberg 2015, Chap. 12).

McKinnon’s (2013) Supportive Reasons Norm is designed to be similarly context-sensitive, and on a natural reading, Lackey’s Reasonable-to-Believe Norm can be understood this way as well; Stone (2007: 100-101) also prefers, but does not develop, a kind of context-sensitive norm opposed to the KNA. Such rival norms have the intuitive benefit of explaining a great range of conversational contexts in which we seem to assert acceptably; however, with this flexibility comes the burden of having to provide plausible explanations of the data, considered in sections 1.a and 1.b above, which invoke knowledge.

Note however that opting for a context-sensitive norm need not mean that one eschews the KNA. DeRose (2002; 2009 Chap. 3) accepts a version of KNA, but regards “know(s)” as semantically context-sensitive. Thus the standard for the truth of “knowledge” ascriptions at a context sets the standard for permissible assertion: for a given speaker S in a conversational context C, the truth conditions for “S knows that p” at C are the assertibility conditions for S to assert that p in C. On this view, knowledge remains the norm of assertion. Relatedly, Schaffer (2008) argues for a contextualist version of KNA which he claims supports contrastivism about knowledge.

Many of the rival norms to KNA are motivated in part by the idea that KNA is just too strong an epistemic requirement on assertion: many KNA opponents find it implausible to think that one has done anything wrong by asserting what one doesn’t know, so long as one’s assertion, or one’s decision to assert p, is supported in the relevant way by adequate evidence or reasons for p (see McGlynn 2014 for a thorough discussion). Some of these objections to KNA come from appeals to intuitions about cases, in particular, cases in which one asserts with strong grounds or evidence, but one is in a Gettier situation, or what one asserts is unluckily false. In general, such cases appeal to what are judged to be blameless assertions (for concerns about relying on such judgments of blame, see Turri & Blouw 2014). Some proponents of KNA respond that in such cases one asserts reasonably if one reasonably took oneself to know, even though on KNA, one still asserts impermissibly: its being reasonable is what excuses one for having violated the norm, and the plausibility of calling it an ‘excuse’ suggests that a norm was violated (Williamson 2000: 256; DeRose 2009: 93-95, Sutton 2007: 80, Hawthorne & Stanley 2008: 573, 586); but this excuse maneuver has been heavily criticized for multiplying senses of propriety or for being too general (Lackey 2007, Gerken 2011, Kvanvig 2011a). See also Littlejohn 2012 and 2014 for extensive discussion of the notion of excuse, as related to these norms.

Other opponents of the KNA are particularly motivated to preserve the acceptability of our assertive practices within special contexts which are nevertheless familiar and ones in which it seems that we do assert, such as the philosophy seminar room (see Goldberg, 2015). Still others rely on intuitions about cases and a desire to give a normative role to the hearer of an assertion (see Pelling’s 2013b “knowledge provision” account). Some express skepticism at the very idea of there being a constitutive epistemic norm of assertion in Williamson’s sense, preferring instead the idea that more general norms of cooperation and rationality (perhaps those given by Grice) will suffice to explain any normativity in our practice of saying and asserting (e.g. Cappelen 2011; see Benton, 2014a, and Montgomery 2014 for discussion). Maitra (2011) in particular presents a challenge to Williamson’s way of formulating the notion of constitutive rules on analogy with the rules of a game. Yet the general idea that a constitutive epistemic norm can individuate speech acts has been deployed for other speech acts on the assertive spectrum: Turri (2013) thereby individuates the stronger speech act of guaranteeing, and Benton & Turri (2014) individuate the speech act of prediction.

The final rival to the KNA considered here is a Certainty Norm (Stanley 2008), on which to assert that p one must be (subjectively) certain that p. This norm is motivated in part by the idea that the Moorean conjunction schemas

(13)  p but I’m not certain that p

(14)  p but it is not certain that p

strike many as just as problematic as the knowledge and belief conjunctions (1)-(4) considered above; a Certainty Norm could explain them, and if certainty is required for knowledge, it could also explain (1)-(4). However, the Certainty Norm inherits the ‘too strong’ objection with which many charge KNA, and as noted above, certainty, unlike knowledge, does not figure in both prompts and challenges to assertions (Turri 2010). Also, it is unclear how the Certainty Norm will handle the truth desideratum, insofar as conversational participants generally seem to care about truth, and not just a speaker’s confidence, in assertion.

d. Sufficiency

Even if KNA can seem to impose an overly demanding condition on the propriety of assertion, on first pass it might seem that knowledge at least provides a sufficient condition on epistemically permissible assertion. After all, this idea goes, even if some epistemic/alethic standard weaker than knowledge is necessary for permissible assertion, nevertheless surely having knowledge is sufficient. Most of the rivals to KNA ought to agree that when one knows, one thereby arguably meets the less stringent standards of: truth, it being reasonable/credible to believe, being justified in believing, and (if the contextually set standards for certainty do not easily come apart from those of knowledge) being certain enough to assert. Thus some of KNA’s defenders (cf. Hawthorne 2004: 23 n. 58, and 87; DeRose 2009: 93), and many of its opponents, could be tempted to endorse a sufficiency direction of the knowledge norm, such as the following:

(KNA-S)  One is properly epistemically positioned to assert that p if one knows that p.

(As shall be seen below in section 2.c, similar sufficiency principles, tying knowledge to action, undergird pragmatic encroachment views of knowledge.)

But Lackey (2011, 2013) has argued that in fact, KNA-S is false (compare Pelling 2013a for another argument). She appeals to cases of what she calls isolated second-hand knowledge to show that in some settings, particularly those involving experts, asserting even though one knows is epistemically deficient. Consider a case in which an oncologist has referred her patient for lab tests, which arrive back on her day off. She must meet with the patient to provide the diagnosis, if any, and is only able to confer briefly with the oncologist from the lab about what the diagnosis is (that he has pancreatic cancer). The doctor can learn from her colleague’s testimony that her patient has pancreatic cancer, but this knowledge is isolated (she knows no other facts about the test results or the diagnosis) and entirely second-hand (acquired via testimony from the lab oncologist). Given her epistemic situation, Lackey argues, it is intuitively (epistemically) impermissible for the doctor to assert to her patient that he has pancreatic cancer, even though she knows this. More generally, for experts asserting as experts, it seems that asserting with merely isolated second-hand knowledge is (epistemically) improper, because experts ought to engage their expertise first-hand, or ought to have more than isolated knowledge gained entirely through expert testimony. Thus Lackey argues that KNA-S is false. (See Carter & Gordon 2011 for an appeal to the idea that understanding is needed. For a challenge to Lackey’s cases, see Benton 2014b; for her reply, see Lackey 2014.)

2. Knowledge Norm of Action

Knowledge seems intimately connected to our reasons for, and our evaluations of, action. Recently many philosophers have endorsed normative connections between knowledge and action, and have deployed principles according to which knowledge is either necessary, sufficient, or both necessary and sufficient for appropriate action. Some of these discussions are focused on action as the result of practical reasoning, or on the connection between knowledge and reasons, or on knowledge as a sufficient epistemic position for acting on a proposition. We will consider these in turn.

a. Knowledge and Practical Reasoning

Some philosophers have noticed intuitive connections between knowledge, assertion, and practical reasoning (see Fantl & McGrath 2002; Hawthorne 2004, esp. 21-32, and Ch. 4; Stanley 2005; and Hawthorne & Stanley 2008). Many thus argue that knowledge plays an important normative role in practical reasoning: when one faces a decision over whether to act that depends on the truth of some proposition, acting without knowing that proposition can seem epistemically suspect and deserving of criticism. We often invoke knowledge when justifying someone’s decision to act, and we often cite their lack of knowledge when censuring others for acting on inadequate grounds; knowledge figures in our appraisals of blame and negligence, and in conditional orders wherein one is commanded to X just in case one knows a specified condition to obtain.

These facts support the idea that one ought only to use known propositions as premises in one’s practical deliberations. For example, if you opt against purchasing very affordable health insurance, on the grounds that you are plenty healthy, you may be criticized by your loved ones precisely because you do not know that you will not soon fall gravely ill. To take another example: suppose that you spent a dollar on a lottery ticket in a 10,000 ticket lottery with a $5,000 prize, and you are deliberating about whether to sell your ticket. Suppose you reason as follows:

The ticket is a loser.

So if I keep the ticket, I will get nothing.

But if I sell the ticket, I will get a penny.

So I should sell the ticket. (Hawthorne 2004: 29, 85)

Such reasoning should strike us as unacceptable, and a plausible explanation why is that the first premise isn’t known. Similarly, suppose that someone offered to sell you their ticket in the same lottery for a cent: if you decline on the basis that you know their ticket will lose, that may also strike us as the wrong basis for declining, for it seems (to many) that you don’t know the ticket will lose. Indeed, if you do know the first premise, standard decision theory validates the reasoning; this suggests that only one’s beliefs which amount to knowledge should figure into shaping one’s decision table (cf. Weatherson 2012).
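
A quick expected-value calculation, using the numbers from the example above (a sketch; the point is only arithmetical), shows what goes wrong: by the agent’s own probabilistic lights, keeping the ticket beats selling it, and the conclusion “sell” follows only once the premise that the ticket is a loser is allowed onto the decision table, which it should be only if it is known.

```python
# Expected values for the lottery case: 10,000 tickets, $5,000 prize,
# and an offer to sell the ticket for a penny.
TICKETS = 10_000
PRIZE = 5_000.00
SALE_PRICE = 0.01

p_win = 1 / TICKETS
ev_keep = p_win * PRIZE  # $0.50: what the agent's probabilities say
ev_sell = SALE_PRICE     # $0.01

print(f"EV(keep) = ${ev_keep:.2f}, EV(sell) = ${ev_sell:.2f}")

# Treating 'the ticket is a loser' as a known premise deletes the win
# branch from the decision table; only then does selling come out best.
ev_keep_given_loser = 0.0
print(f"EV(keep | loser premise) = ${ev_keep_given_loser:.2f}")
```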

These kinds of considerations suggest the following necessary direction norm, Action-Knowledge Principle (AKP), which gives a necessary condition on appropriately treating a proposition as a reason for acting:

(AKP)  Treat the proposition that p as a reason for acting only if you know that p (Hawthorne & Stanley 2008: 578)

AKP plausibly lies behind our epistemic evaluations of actions, and also provides a nice diagnosis of some comparative intuitions about low stakes vs. high stakes cases (e.g. Stanley 2005, 9-10).

A parallel debate concerns the idea that there is a common epistemic norm—say, knowledge, or perhaps epistemic ‘warrant,’ or justification—which provides a necessary condition on both appropriate assertion in particular and appropriate action/practical reasoning more generally: see Brown 2011 and 2012, Montminy 2013, Gerken 2013. As we will see in the next section, a structurally similar question concerns whether a common epistemic norm governs practical reason as well as theoretical reason (that is, on what one can appropriately take as a reason for believing).

Some important criticisms of AKP are the following. First, as with the KNA above, it doesn’t license acting on p when one holds a merely justified belief that p; indeed, one might be Gettiered with respect to p (see Brown 2008, Neta 2009). Acting on p in such cases seems to many to be entirely appropriate, and thus these are counterexamples to AKP. As with the KNA, the reply (Hawthorne & Stanley 2008: 573-74, 586) is that such subjects are blameless for making an excusable mistake, and the need for an excuse is explained by AKP.

Second, it has been objected that AKP doesn’t license acting on subjective probabilities of a proposition, and thus that it can seem in conflict with Bayesian decision theory. Sometimes one is only in a position to treat propositions that are probable for one as reasons for acting; Cresto (2010) argues that when probabilistic talk is interpreted in subjectivist terms, AKP can be violated even though it seems as though one has done nothing wrong. On standard Bayesian decision theory, one plugs one’s probabilities, along with one’s values for possible outcomes, into one’s decision table to discern the act which maximizes expected utility. If you assign 0.7 probability to (have 0.7 credence in) the proposition that it will rain, and on that basis choose to carry an umbrella on your walk, have you violated AKP? Perhaps not, for if you know that you assign 0.7 probability to it raining, and use this knowledge as your reason for acting, then you do not violate AKP: the proposition that you treat as your reason for so acting is that rain is 0.7 probable (Hawthorne & Stanley 2008: 580-583). Arguably, one’s credences are not always luminous to one, and thus there is still a role for knowledge (and thus AKP) to play. Weatherson (2012) argues that the role for knowledge in decision theory is that it sets the standard for what gets onto one’s decision table; moreover, it might be that one’s credences can constitute knowledge (Moss 2013), and if so there is room for AKP to govern actions based on them. But still, it might be implausible to suppose that every such case of appropriately acting on a probability involves your knowing what your credence is: though your credence in it may be 0.7 on this occasion, this may not be transparent to you. It may be sufficient for you to act on the more coarse-grained probability that it’s more likely than not that it will rain, even if you do not form the belief that it is more likely than not that it will rain. On this way of looking at things, the objection remains. For important constructive work adjudicating these issues and proposing some ways in which a knowledge norm for practical reasoning and Bayesian decision theory are compatible, see Weisberg (2013).

b. Knowledge and Reasons

We standardly cite reasons as propositions which ought to make a difference to someone’s decision to act one way or another. Such normative reasons are reasons there are for a particular agent to believe, feel, or act a certain way. (Such reasons are distinguishable both from explanatory reasons—reasons why an agent believed or felt or acted—and from motivating reasons—reasons for which an agent acted in a particular way.) Normative reasons can be either possessed by an agent or not possessed by an agent: if Iris is at the bar and there is petrol in the glass in front of her, then there is a reason for her not to drink the liquid in her glass, but it will not be a reason Iris possesses unless she is aware that there is petrol in the glass.

A natural way to approach the connection between knowledge and action is by noting that possessing a reason for some action arguably depends on knowing a proposition, and that lacking knowledge can rob one of possessing the relevant reason (see Hyman 1999, Unger 1975, Ch. 5, Alvarez 2010, and Littlejohn 2014). If Iris knows that there is petrol in the glass, then that is a reason she possesses to refrain from drinking it; but if she does not know it, then she does not possess that reason to refrain, even though there is a reason for her to refrain. There being petrol in the glass can only be a reason Iris possesses if she knows it.

This view connects naturally with the above discussion of the normative relation between knowledge and action: where one treats a proposition as one’s reason for action, and then acts for that reason, one only acts properly when one knows that proposition. This is because, on the view being considered, one cannot possess p as a reason to ϕ unless one knows that p. Of course, one’s motivating reason for ϕ-ing might be a falsehood: one might falsely believe that q and thereby take q as one’s reason for ϕ-ing, and one’s belief that q explains why one ϕ’d. On the view being considered then, one cannot in that circumstance have had q as a reason, for one cannot (because q is false) know that q. That is, the reasons one takes to be one’s reasons can come apart from the reasons one in fact possesses. If this is correct, it has consequences for how to understand the normative concept of justification. In particular, knowledge figures importantly in understanding what reasons justify one in believing or in acting, such that the mark of justification is not an internalist or subjectivist notion of rationality but instead an externalist or objectivist notion explicable in terms of facts or knowledge of facts. See Littlejohn (2014) for more.

Some philosophers question the claim, crucial to the above line of reasoning, that one can possess p as a reason, or properly treat p as a reason for acting, only if p is true (and known). Comesaña and McGrath (2014) call this claim “factualism about reasons-had,” and against it they argue that one can have false reasons (see also Schroeder 2008, Fantl & McGrath 2009: 100-104, and Dancy 2014, among others). The case for the possibility of having false reasons is built primarily upon two ideas. First, it seems to them that ascribing a reason to someone for their action can be done even if that reason is (or entails) a false proposition. That is, they claim that one could acceptably say of someone that “The reason she turned down the job was that she had another job offer,” even if she did not have another job offer and the speaker knows this. Second, when someone acts on a mistaken belief, there is pressure to claim that she acted for the same reason as she would have acted for had her belief in fact been true. On this way of looking at things, it must be the same psychological state that rationalizes Iris’s taking and drinking from the glass with petrol in it as would (counterfactually) rationalize Iris’s taking and drinking from a glass with gin and tonic in it; in other words, such views take what it is that rationalizes to be what it is that provides one with reasons, both motivating and normative: one has the same normative reasons in both the good and bad cases. Such views are at odds with the standard semantics of schemas such as “S’s reason for X-ing was/is that p” or “The reason S had for X-ing was that p”, which entail that p and so are factive; see Comesaña and McGrath (2014) for ways of handling these semantic issues.

As noted in the last section, there is a parallel question about whether the epistemic norm governing practical reason is the same as that governing theoretical reason. Hawthorne & Stanley’s AKP is a knowledge norm on practical reason, but they also note the analogous principle regarding reasons for belief:

(TKP) Treat the proposition that p as a reason for believing q only if one knows that p. (2008, 577)

Littlejohn (2014) notes a compelling argument that AKP is true just in case TKP is, and that more generally, whatever epistemic status norms practical reason must also norm theoretical reason. The argument goes thus. Suppose (for reductio) that the norm for theoretical reason were less epistemically demanding than that for practical reason: for concreteness, suppose that justification (rather than knowledge) governed theoretical reason, so that one may treat p as a reason for believing that q so long as one justifiably believes that p, while knowledge still governed practical reason along the lines suggested by AKP. In that case, if you justifiably believe that this liquid is gin, and you know that you ought (if you can) to make another round of drinks for your guests, you could take the proposition that it is gin as your reason for believing that you can make them another round of drinks. But AKP says that you may treat that latter proposition (that you can make them another round of drinks) as a reason for acting only if you know it; and let’s suppose you don’t know it, because in fact the liquid is not gin but petrol. In this situation, though it is proper for you to treat your justifiedly believed proposition as a reason to form another belief, AKP says that you cannot properly treat this new belief as a reason for acting, namely for making another round of drinks. If the epistemic norms diverged in this way, they would “demand that you were akratic,” and this seems absurd (Littlejohn 2014: 135-136). Things go similarly if the divergence goes the other way, namely if the norm of theoretical reason were more demanding than the norm of action: together these norms would permit situations in which one can act on a proposition (say, because one justifiably believes it) but cannot use it as a premise from which to deduce, and form beliefs in, other propositions. Thus there is a case for the unity thesis that a single epistemic status governs both practical and theoretical reasoning, even if it is not knowledge; for arguments that it is something weaker than knowledge, like justification or warrant, see Gerken (2011).

c. Sufficiency and Pragmatic Encroachment

Though Fantl & McGrath question the necessity direction of principles like AKP, they and others do endorse and defend sufficiency-direction principles, on which knowledge of a proposition is sufficient to rationalize acting on that proposition. Hawthorne & Stanley (2008, 578) defend a biconditional principle which adds to AKP a sufficiency direction, given a choice one faces which depends on a particular proposition. Say that a choice between options X1... Xn is “p-dependent” just in case the most preferable of X1... Xn conditional on the proposition that p is not the same as the most preferable of X1... Xn conditional on the proposition that not-p. The Reason-Knowledge Principle (RKP) then says:

(RKP) Where one’s choice is p-dependent, it is appropriate to treat the proposition that p as a reason for acting just in case one knows that p.
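
To illustrate the definition with the umbrella case (a schematic rendering of our own, not Hawthorne & Stanley’s): let p be the proposition that it will rain, and let the options be X1 (carry the umbrella) and X2 (leave it behind). Then:

most preferable option conditional on p: X1 (carry)

most preferable option conditional on not-p: X2 (leave)

Since the most preferable option differs across the two conditions, the choice is p-dependent, and RKP implies that it is appropriate to treat the proposition that it will rain as a reason for carrying the umbrella just in case one knows that it will rain.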

RKP gives necessary and sufficient conditions for appropriately treating a proposition as a reason for acting. Similarly, Fantl & McGrath (2002, 2009, 2012) defend at length a variety of sufficiency conditions tying knowledge to action:

(Action) If you know that p, then if the question of whether p is relevant to the question of what to do, it is proper for you to act on p.

(Preference) If you know that p, then you are rational to prefer as if p.

(Inquiry) If you know that p, then you are proper not to inquire further into whether p.

(KJ) If you know that p, then p is warranted enough to justify you in ϕ-ing, for any ϕ.

On the face of it, these principles can seem exactly right: for example, it might seem obvious that if one knows a proposition, then one is in a good enough position to act upon it. But these principles admit of modus tollens as well: if it is not proper for one to act on p, or not rational to prefer as if p, or not proper to close off inquiry regarding p, or if p is not warranted enough to justify one in some action one considers undertaking, then one does not know that p. These principles bear out the intuitive judgments of many about such cases: to the extent that one’s epistemic position with respect to some p seems inadequate when one faces a decision that depends on that p, to that same extent we tend to be inclined to deny that one knows that p. That is, in cases where the practical stakes make it irrational for one to act on a proposition, such principles entail that one does not know that proposition (even though in other contexts where one faces no such decision, and where one has the same evidence or is in the same “epistemic” position, one might know that proposition). Thus such views endorse “pragmatic encroachment” in epistemology (also known as “subject-sensitive invariantism”; see Hawthorne 2004: Ch. 4, Brown 2008, and DeRose 2009), for practical considerations can seem to encroach on whether one knows. See Neta 2009 and Kvanvig 2011b for some criticisms, and Fantl & McGrath 2012 for arguments that pragmatic encroachment isn’t only about knowledge.
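
Put schematically, taking KJ as the example (the rendering is ours):

KJ: if one knows that p, then for any ϕ, p is warranted enough to justify one in ϕ-ing.

KJ, contraposed: if for some ϕ, p is not warranted enough to justify one in ϕ-ing, then one does not know that p.

It is the contraposed direction that yields pragmatic encroachment: raise the practical stakes enough that acting on p is unwarranted, and knowledge that p goes with it.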

3. Knowledge Norm of Belief

a. The Belief-Assertion Parallel

Some philosophers (going back to at least Frege, Peirce, and Ramsey) find plausible the idea that belief or judgment amounts to a kind of “inner assertion,” where (full) belief is the inner analogue of outward (flat-out) assertion. For those inclined to this view who also accept the Knowledge Norm of Assertion, there is a motivation to accept a parallel Knowledge Norm of Belief. Williamson gestures at this idea thus:

It is plausible, nevertheless, that occurrently believing p stands to asserting p as the inner stands to the outer. If so, the knowledge rule for assertion corresponds to the norm that one should believe p only if one knows p. Given that norm, it is not reasonable to believe p when one knows that one does not know p. (2000, 255-56)

Adler (2002: 276ff.) calls this idea the “Belief-Assertion Parallel,” and offers a range of considerations suggesting that belief and assertion are on a par in this way.

Note, however, that this Parallel is likewise intuitive should one prefer some kind of evidential or justification norm, rather than a knowledge norm, on both assertion and belief. If, epistemically speaking, one shouldn’t assert to others that p without sufficient evidence or justification for p, then one shouldn’t believe that p without similar evidence or justification; and conversely, if one shouldn’t believe that p without sufficient evidence or justification, then one shouldn’t assert to others that p without similar evidence or justification. Thus to the extent that one finds the epistemic standard for assertion to be similar, if not identical, to the epistemic standard for belief, to that extent the Belief-Assertion Parallel will seem intuitive. Only if one takes the standard for one to be higher than the standard for the other will one be motivated to reject the Parallel. (For in-depth discussion, see Goldberg 2015, Chs. 6 and 7.)

Though Williamson does not formulate it explicitly, taking a cue from his KNA schema would provide us with a similar formulation for a Knowledge Norm of Belief, which gives a necessary condition for the propriety of belief:

(KNB) One must: believe p only if one knows p.

(Compare Sutton 2005, 2007; for clarification of how best to understand a norm like KNB, see Jackson 2012.) In addition to the inner/outer parallel noted above, Williamson also provides a different consideration in favor of KNB, one that invokes teleological considerations concerning the “aim” of belief:

If believing p is, roughly, treating p as if one knew p, then knowing is in that sense central to believing. Knowledge sets the standard of appropriateness for belief. That does not imply that all cases of knowing are paradigmatic cases of believing, for one might know p while in a sense treating p as if one did not know p—that is, while treating p in ways untypical of those in which subjects treat what they know. Nevertheless, as a crude generalization, the further one is from knowing p, the less appropriate it is to believe p. Knowing is in that sense the best kind of believing. Mere believing is a kind of botched knowing. In short, belief aims at knowledge (not just truth). (Williamson 2000, 47)

Notice that the KNB provides an elegant and unified account of Moore’s Paradox at the level of belief, a desideratum of many approaches to theorizing about Moore’s Paradox (e.g. Sorensen 1988): these authors note that while the sentences (1)-(4), uttered assertively, are absurd, it also seems absurd to believe (the propositions of) any of their conjuncts together. Huemer (2007) argues explicitly for the idea that theorizing about Moorean conjunctions in this way should lead us to accept both KNA and KNB.

Sosa (2010/2011, Chap. 3: 41-53) provides an interesting argument for another version of the Belief-Assertion Parallel, which arrives at norms similar to KNA and KNB, but he does so by explicit appeal to teleological considerations about the aim of belief. Sosa argues for what he calls the Affirmative Conception of Belief (2011: 41; cf. Sosa 2014):

Consider a concept of affirming that p, defined as: concerning the proposition that p, either (a) asserting it publicly, or (b) assenting to it privately.

With this Affirmative Conception in hand, he then applies considerations from the propriety of means-end action in general to the action of assertion as a special case, using the terminology of his virtue-theoretic epistemology (cf. Sosa 2007):

If one asserts that p as means thereby to assert that p with truth, this essentially involves the relevant means-end belief. I mean the belief that asserting that p is a means to thereby assert that p with truth. And this belief is equivalent to the belief that p. Accordingly, if that means-end belief needs to amount to knowledge in order for the means-end action to be apt, then in order for a sincere assertion that p to be apt, the agent must know that p. In this way, knowledge is a norm of assertion. If an assertion (in one’s own person) that p is not to fall short epistemically it must be sincere, and a sincere assertion that p will be apt only if the subject knows that p. This is, moreover, not just a norm in the sense that the subject does better in his assertion that p provided he knows that p. Rather, if his assertion is not apt, it then fails to meet minimum standards of performance normativity. Any performance (with an aim) that is inapt is thereby flawed. … Knowledge is said to be necessary for proper assertion … If knowledge is the norm of assertion, it is plausibly also the norm of affirmation, whether the affirming be private or public. (2011: 48)

Sosa goes on to develop an intriguing argument for the “equivalence” of the knowledge norm of assertion and the value-of-knowledge thesis (2011: 49-52). For a related view tying the norms of belief and assertion to a virtue-theoretic account, see Wright (2014).

Instructive here is Bach’s combination of views (Bach & Harnish 1979, Bach 2008). Bach holds a Belief Norm of Assertion, on which the only norm fundamental to assertion is that assertions must be sincere (one must outright believe what one flat-out asserts), but he also holds a Knowledge Norm of Belief much like KNB (2008: 77). Because Bach accepts the KNB, he gets a derived version of the KNA: one must believe only what one knows, and given his Belief Norm of Assertion, one must assert only what one believes; thus one must assert only what one knows, if one is believing as one ought. This combination of views accepts KNB, accepts (the derivative) KNA, but denies the Belief-Assertion Parallel at the level of what norms are constitutive of assertion and of belief.

An objection to the KNB, similar in spirit to objections to KNA considered above, is that many find it implausible to hold that one does epistemically poorly, or does anything epistemically impermissible, in believing many propositions which one does not know, and which one furthermore properly takes oneself not to know. For some important criticisms of KNB, stemming from arguments that there is nothing epistemically problematic or improper about believing lottery propositions, see McGlynn (2013, 2014). Relatedly, while most find it incoherent or irrational to believe the Moorean conjunction form (1) considered above, many find it unproblematic to believe conjunctions of the form (2), namely believing a proposition while also believing that one does not know that proposition. Those who object to KNB on these grounds tend to deny that the epistemic standard for belief is as demanding as the standard for knowledge. Couched in evidential terminology, many epistemologists intuitively think of belief in terms of an evidence-threshold model, according to which the evidential threshold one must meet in order permissibly to believe some proposition is lower than the evidential threshold for knowledge: more evidence is required to know than to (permissibly) believe.

b. Knowledge Disagreement Norm

In a spirit related to considerations stemming from endorsement of the KNB, Hawthorne & Srinivasan (2013) argue for a Knowledge Norm of Disagreement. In the growing literature on the epistemology of disagreement, debate ensues over what one should do in the face of disagreement about some proposition, particularly when those disagreeing with one are regarded as one’s intellectual or evidential peers. Typically such cases of peer disagreement are formulated such that you have formed a belief or a judgment on (or assigned a credence to) some proposition p, and have done so on the basis of some evidence: perhaps it is a judgment about which of two horses won a very close race, and the evidence is visual; or perhaps it is a judgment about what you and your friend each owe from calculating your share of a restaurant bill which you are splitting, in which case the evidence is intellectual and inferential. Many philosophers writing on such cases of disagreement are “conciliationists” of one sort or another; that is, they endorse the idea that in some such disagreements, one does something improper or irrational if one does not either suspend judgment on p or reduce one’s credence in p. Opposed to conciliationists are “dogmatists,” who advocate the idea that in the face of such disagreements it is sometimes appropriate or rational for one to hold steadfast or “stick to one’s guns” by retaining one’s belief or one’s credence. (See essays in Feldman & Warfield 2010, and Christensen & Lackey 2013 for more.)

Hawthorne & Srinivasan (2013: 11-12), drawing on a knowledge-centric epistemology which takes knowledge to be the central goal of our epistemic activity, articulate a position which is in some ways a middle ground between these two views. They argue for the following Knowledge Disagreement Norm:

(KDN)  In a case of disagreement about whether p, where S believes that p and H believes that not-p:

(i) S ought to trust H and believe that not-p iff were S to trust H, this would result in S’s knowing not-p

(ii) S ought to dismiss H and continue to believe that p iff were S to stick to her guns this would result in S’s knowing p, and

(iii) in all other cases, S ought to suspend judgment about whether p.

KDN’s ‘ought’ clauses are motivated by a ranking of actions according to their counterfactual outcomes: according to KDN’s clause (i), one should be ‘conciliatory’ in the face of disagreement just in case trusting one’s disagreeing interlocutor would result in one’s gaining knowledge, whereas according to clause (ii), one should be ‘dogmatic’ in the face of disagreement just in case sticking to one’s guns would result in one’s retaining knowledge. Finally, in cases where neither party knows whether the proposition under dispute is true, each should suspend judgment.

Notice that KDN, formulated in the terminology of knowledge and outright belief, is neutral on the matter of how to respond when the ‘disagreement’ concerns divergences in credences toward a proposition: its clause (iii) is capable of accommodating many different approaches here. Further, KDN is fully general in that it does not hold only for cases of peer disagreement: its clauses (i) and (ii) are designed to capture the appropriateness of occasions on which someone defers to an expert or to someone in a better evidential position, and thereby can come to know by trusting them. If it is plausible to suppose that becoming apprised of peer disagreement can defeat one’s knowledge, then such cases may be subsumed under clause (iii) (2013: 13-14, 21ff). Finally, KDN has the merit that, if followed, it tends to maximize knowledge for all parties to a disagreement: if we disagree but, by trusting you, I would come to know the proposition you believe, then I ought to trust you.

It may be objected that KDN is not easily followed, precisely because knowledge is a non-luminous condition, that is, one is not always in a position to know when one knows; and this is particularly pressing in the case of disagreement, for it is clear that (at least) one of the disagreeing parties doesn’t know, and it can be utterly unclear to most such disputants which one (if any) knows. This objection, like similar objections occasionally pressed against the norms of assertion and practical reasoning covered in earlier sections, seems to assume that norms must be perfectly operationalizable, that is, that they must be such that one is always in a position to know whether one is complying with them (Williamson 2008). On this idea, a norm N, which requires that one X in circumstances C, will be perfectly operationalizable just in case S can know she is in C, and is thus in a position to reason that, since she is in C, since she could X by A-ing, and since N says she ought to X in C, she ought to A. But it is a substantive question whether norms are or must be perfectly operationalizable; and given that many conditions of epistemological interest are arguably non-luminous (see Williamson 2000: Ch. 4), one might reconsider the worth of that assumption. For more discussion of this issue and how it relates to the hypological categories of praise and blame, see Hawthorne & Srinivasan (2013: 15-21).

4. References and Further Reading

  • Adler, Jonathan. 2002. Belief’s Own Ethics. Cambridge: MIT Press.
  • Alvarez, Maria. 2010. Kinds of Reasons. Oxford: Oxford University Press.
  • Bach, Kent, and R. Michael Harnish. 1979. Linguistic Communication and Speech Acts. Cambridge: MIT Press.
  • Bach, Kent. 2008. “Applying Pragmatics to Epistemology.” Philosophical Issues 18: 68-88.
  • Benton, Matthew A. 2011. “Two More for the Knowledge Account of Assertion.” Analysis 71: 684-687.
  • Benton, Matthew A. 2012. “Assertion, Knowledge, and Predictions.” Analysis 72: 102-105.
  • Benton, Matthew A. 2014a. “Gricean Quality.” Noûs.
  • Benton, Matthew A. 2014b. “Expert Opinion and Second-Hand Knowledge.” Philosophy and Phenomenological Research.
  • Benton, Matthew A. and John Turri. 2014. “Iffy Predictions and Proper Expectations.” Synthese 191: 1857-1866.
  • Blaauw, Martijn. 2012. “Reinforcing the Knowledge Account of Assertion.” Analysis 72: 105-108.
  • Black, Max. 1952. “Saying and Disbelieving.” Analysis 13: 25–33.
  • Brogaard, Berit. 2014. “Intellectual Flourishing as the Fundamental Epistemic Norm.” In Clayton Littlejohn and John Turri (eds.), Epistemic Norms: New Essays on Action, Assertion, and Belief. Oxford: Oxford University Press.
  • Brown, Jessica. 2008. “Subject-Sensitive Invariantism and the Knowledge Norm for Practical Reasoning.” Noûs 42: 167-189.
  • Brown, Jessica. 2010. “Knowledge and Assertion.” Philosophy and Phenomenological Research 81: 549-566.
  • Brown, Jessica. 2011. “Fallibilism and the Knowledge Norm for Assertion and Practical Reasoning.” In Jessica Brown and Herman Cappelen (eds.), Assertion: New Philosophical Essays. Oxford: Oxford University Press.
  • Brown, Jessica. 2012. “Assertion and Practical Reasoning: Common or Divergent Epistemic Standards?” Philosophy and Phenomenological Research 84: 123-157.
  • Buckwalter, Wesley and John Turri. 2014. “Telling, Showing, and Knowing: A Unified Theory of Pedagogical Norms.” Analysis 74: 16-20.
  • Cappelen, Herman. 2011. “Against Assertion.” In Jessica Brown and Herman Cappelen (eds.), Assertion: New Philosophical Essays. Oxford: Oxford University Press.
  • Carter, J. Adam and Emma Gordon. 2011. “Norms of Assertion: The Quantity and Quality of Epistemic Support.” Philosophia 39: 615-635.
  • Christensen, David and Jennifer Lackey (eds.). 2013. The Epistemology of Disagreement: New Essays. Oxford: Oxford University Press.
  • Coffman, E.J. 2014. “Lenient Accounts of Warranted Assertability.” In Clayton Littlejohn and John Turri (eds.), Epistemic Norms: New Essays on Action, Assertion, and Belief. Oxford: Oxford University Press.
  • Comesaña, Juan and Matthew McGrath. 2014. “Having False Reasons.” In Clayton Littlejohn and John Turri (eds.), Epistemic Norms: Assertion, Action, and Belief. Oxford: Oxford University Press.
  • Cresto, Eleonora. 2010. “On Reasons and Epistemic Rationality.” Journal of Philosophy 107: 326-330.
  • Dancy, Jonathan. 2014. “On Knowing One’s Reason.” In Clayton Littlejohn and John Turri (eds.), Epistemic Norms: Assertion, Action, and Belief. Oxford: Oxford University Press.
  • DeRose, Keith. 2002. “Assertion, Knowledge, and Context.” Philosophical Review 111: 167-203.
  • DeRose, Keith. 2009. The Case for Contextualism. Oxford: Oxford University Press.
  • Douven, Igor. 2006. “Assertion, Knowledge, and Rational Credibility.” Philosophical Review 115: 449-485.
  • Douven, Igor. 2009. “Assertion, Moore, and Bayes.” Philosophical Studies 144: 361-375.
  • Fantl, Jeremy and Matthew McGrath. 2002. “Evidence, Pragmatics, and Justification.” Philosophical Review 111: 67-94.
  • Fantl, Jeremy and Matthew McGrath. 2009. Knowledge in an Uncertain World. Oxford: Oxford University Press.
  • Fantl, Jeremy and Matthew McGrath. 2012. “Pragmatic Encroachment: It’s Not Just about Knowledge.” Episteme 9: 27-42.
  • Feldman, Richard and Ted Warfield (eds.). 2010. Disagreement. Oxford: Oxford University Press.
  • Goldberg, Sanford C. 2009. “The Knowledge Account of Assertion and the Nature of Testimonial Knowledge.” In Patrick Greenough and Duncan Pritchard (eds.). Williamson on Knowledge. Oxford: Oxford University Press.
  • Goldberg, Sanford C. 2011. “Putting the Norm of Assertion to Work: The Case of Testimony.” In Jessica Brown and Herman Cappelen (eds.), Assertion: New Philosophical Essays. Oxford: Oxford University Press.
  • Goldberg, Sanford C. 2015. Assertion: The Philosophical Significance of a Speech Act. Oxford: Oxford University Press.
  • Gerken, Mikkel. 2011. “Warrant and Action.” Synthese 178: 529-547.
  • Gerken, Mikkel. 2012. “Discursive Justification and Skepticism.” Synthese 189: 373-394.
  • Gerken, Mikkel. 2013. “Same, Same but Different: The Epistemic Norms of Assertion, Action, and Practical Reasoning.” Philosophical Studies 168: 725-744.
  • Grice, Paul. 1989. Studies in the Way of Words. Cambridge: Harvard University Press.
  • Hawthorne, John. 2004. Knowledge and Lotteries. Oxford: Oxford University Press.
  • Hawthorne, John and Jason Stanley. 2008. “Knowledge and Action.” Journal of Philosophy 105: 571-590.
  • Hawthorne, John and Amia Srinivasan. 2013. “Disagreement Without Transparency: Some Bleak Thoughts.” In David Christensen and Jennifer Lackey (eds.), The Epistemology of Disagreement: New Essays. Oxford: Oxford University Press.
  • Huemer, Michael. 2007. “Moore’s Paradox and the Norm of Belief.” In Susana Nuccetelli and Gary Seay (eds.), Themes from G.E. Moore: New Essays in Epistemology and Ethics. Oxford: Clarendon Press.
  • Hyman, John. 1999. “How Knowledge Works.” Philosophical Quarterly 49: 433-451.
  • Jackson, Alexander. 2012. “Two Ways to Put Knowledge First.” Australasian Journal of Philosophy 90: 353-369.
  • Kvanvig, Jonathan L. 2009. “Assertions, Knowledge, and Lotteries.” In Patrick Greenough and Duncan Pritchard (eds.), Williamson on Knowledge. Oxford: Oxford University Press.
  • Kvanvig, Jonathan L. 2011a. “Norms of Assertion.” In Jessica Brown and Herman Cappelen (eds.), Assertion: New Philosophical Essays. Oxford: Oxford University Press.
  • Kvanvig, Jonathan L. 2011b. “Against Pragmatic Encroachment.” Logos & Episteme 2: 77-85.
  • Lackey, Jennifer. 2007. “Norms of Assertion.” Noûs 41: 594-626.
  • Lackey, Jennifer. 2011. “Assertion and Isolated Second-Hand Knowledge.” In Jessica Brown and Herman Cappelen (eds.), Assertion: New Philosophical Essays. Oxford: Oxford University Press.
  • Lackey, Jennifer. 2013. “Deficient Testimonial Knowledge.” In Tim Henning and David P. Schweikard (eds.), Knowledge, Virtue, and Action: Putting Epistemic Virtues to Work. New York: Routledge.
  • Lackey, Jennifer. 2014. “Assertion and Expertise.” Philosophy and Phenomenological Research.
  • Littlejohn, Clayton. 2012. Justification and the Truth-Connection. Cambridge: Cambridge University Press.
  • Littlejohn, Clayton. 2013. “The Russellian Retreat.” Proceedings of the Aristotelian Society 113: 293-320.
  • Littlejohn, Clayton. 2014. “The Unity of Reason.” In Clayton Littlejohn and John Turri (eds.), Epistemic Norms: Assertion, Action, and Belief. Oxford: Oxford University Press.
  • MacFarlane, John. 2011. “What is Assertion?” In Jessica Brown and Herman Cappelen (eds.), Assertion: New Philosophical Essays. Oxford: Oxford University Press.
  • Maitra, Ishani. 2011. “Assertion, Norms, and Games.” In Jessica Brown and Herman Cappelen (eds.), Assertion: New Philosophical Essays. Oxford: Oxford University Press.
  • Maitra, Ishani and Brian Weatherson. 2010. “Assertion, Knowledge, and Action.” Philosophical Studies 149: 99-118.
  • McGlynn, Aidan. 2013. “Believing Things Unknown.” Noûs 47: 385-407.
  • McGlynn, Aidan. 2014. Knowledge First?  Basingstoke: Palgrave-Macmillan.
  • McKinnon, Rachel. 2013. “The Supportive Reasons Norm of Assertion.” American Philosophical Quarterly 50: 121-135.
  • McKinnon, Rachel and John Turri. 2013. “Irksome Assertions.” Philosophical Studies 166: 123-128.
  • Montgomery, Brian. 2014. “In Defense of Assertion.” Philosophical Studies. [published online Early View]
  • Montminy, Martin. 2013. “Why Assertion and Practical Reasoning Must Be Governed by the Same Epistemic Norm.” Pacific Philosophical Quarterly 94: 57-68.
  • Moore, G.E. 1942. “A Reply to My Critics.” In Paul Arthur Schilpp (ed.), The Philosophy of G.E. Moore, The Library of Living Philosophers. La Salle: Open Court Press. 3rd edn.: 1968.
  • Moore, G.E. 1962. Commonplace Book: 1919–1953. London: George Allen & Unwin.
  • Moore, G.E. 1993. “Moore’s Paradox.” In Thomas Baldwin (ed.), G.E. Moore: Selected Writings, 207–212. London: Routledge.
  • Moss, Sarah. 2013. “Epistemology Formalized.” Philosophical Review 122: 1-43.
  • Neta, Ram. 2009. “Treating Something as a Reason For Action.” Noûs 43: 684-699.
  • Pelling, Charlie. 2011. “A Self-Referential Paradox for the Truth Account of Assertion.” Analysis 71: 688.
  • Pelling, Charlie. 2013a. “Paradox and the Knowledge Account of Assertion.” Erkenntnis 78: 977-978.
  • Pelling, Charlie. 2013b. “Assertion and the Provision of Knowledge.” Philosophical Quarterly 63: 293-312.
  • Rescorla, Michael. 2009. “Assertion and its Constitutive Norms.” Philosophy & Phenomenological Research 79: 98-130.
  • Schaffer, Jonathan. 2008. “Knowledge in the Image of Assertion.” Philosophical Issues 18: 1-19.
  • Schroeder, Mark. 2008. “Having Reasons.” Philosophical Studies 139: 57-71.
  • Slote, Michael. 1979. “Assertion and Belief.” In Jonathan Dancy (ed.), Papers on Language and Logic. Keele University Library, pp. 177-90. Repr. in Slote, Selected Essays. New York: Oxford University Press, 2010.
  • Sorensen, Roy. 1988. Blindspots. New York: Oxford University Press.
  • Sosa, Ernest. 2007. A Virtue Epistemology: Apt Belief and Reflective Knowledge, volume 1. Oxford: Clarendon Press.
  • Sosa, Ernest. 2010. “Value Matters in Epistemology.” Journal of Philosophy 107: 167-190.
  • Sosa, Ernest. 2011. Knowing Full Well. Princeton: Princeton University Press.
  • Sosa, Ernest. 2014. “Epistemic Agency and Judgment.” In Clayton Littlejohn and John Turri (eds.), Epistemic Norms: Assertion, Action, and Belief. Oxford: Oxford University Press.
  • Stanley, Jason. 2005. Knowledge and Practical Interests. Oxford: Oxford University Press.
  • Stanley, Jason. 2008. “Knowledge and Certainty.” Philosophical Issues 18: 35-57.
  • Stone, Jim. 2007. “Contextualism and Warranted Assertion.” Pacific Philosophical Quarterly 88: 92-113.
  • Sutton, Jonathan. 2005. “Stick to What You Know.” Noûs 39: 359-396.
  • Sutton, Jonathan. 2007. Without Justification. Cambridge: MIT Press.
  • Turri, John. 2010. “Prompting Challenges.” Analysis 70: 456-462.
  • Turri, John. 2011. “The Express Knowledge Account of Assertion.” Australasian Journal of Philosophy 89: 37-45.
  • Turri, John. 2013a. “Knowledge Guaranteed.” Noûs 47: 602-612.
  • Turri, John. 2013b. “The Test of Truth: An Experimental Investigation of the Norm of Assertion.” Cognition 129: 279-291.
  • Turri, John. 2014a. “Knowledge and Suberogatory Assertion.” Philosophical Studies 167: 557-567.
  • Turri, John. 2014b. “You Gotta Believe.” In Clayton Littlejohn and John Turri (eds.), Epistemic Norms: Assertion, Action, and Belief. Oxford: Oxford University Press.
  • Turri, John and Peter Blouw. 2014. “Excuse Validation: A Study in Rule-Breaking.” Philosophical Studies.
  • Unger, Peter. 1975. Ignorance: The Case for Skepticism. Oxford: Clarendon Press. Reissued 2002.
  • Weatherson, Brian. 2012. “Knowledge, Bets, and Interests.” In Jessica Brown and Mikkel Gerken (eds.), Knowledge Ascriptions. Oxford: Oxford University Press.
  • Weiner, Matthew. 2005. “Must We Know What We Say?” Philosophical Review 114: 227-251.
  • Weisberg, Jonathan. 2013. “Knowledge in Action.” Philosophers’ Imprint 13: 1-23.
  • Whiting, Daniel. 2013. “Stick to the Facts: On the Norms of Assertion.” Erkenntnis 78: 847-867.
  • Williamson, Timothy. 1996. “Knowing and Asserting.” Philosophical Review 105: 489-523.
  • Williamson, Timothy. 2000. Knowledge and its Limits. Oxford: Oxford University Press.
  • Williamson, Timothy. 2008. “Why Epistemology Cannot be Operationalized.” In Quentin Smith (ed.), Epistemology: New Philosophical Essays. Oxford: Oxford University Press.
  • Williamson, Timothy. 2009. “Replies to Critics.” In Patrick Greenough and Duncan Pritchard (eds.). Williamson on Knowledge. Oxford: Oxford University Press.
  • Wright, Sarah. 2014. “The Dual-Aspect Norms of Belief and Assertion: A Virtue Approach to Epistemic Norms.” In Clayton Littlejohn and John Turri (eds.), Epistemic Norms: Assertion, Action, and Belief. Oxford: Oxford University Press.


Author Information

Matthew A. Benton
Email: matthew.benton@philosophy.ox.ac.uk
University of Oxford
United Kingdom

Philosophy of Mental Illness

The Philosophy of Mental Illness is an interdisciplinary field of study that combines views and methods from the philosophy of mind, psychology, neuroscience, and moral philosophy in order to analyze the nature of mental illness. Philosophers of mental illness are concerned with examining the ontological, epistemological, and normative issues arising from varying conceptions of mental illness.

Central questions within the philosophy of mental illness include: whether the concept of a mental illness can be given a scientifically adequate, value-free specification; whether mental illnesses should be understood as a form of distinctly mental dysfunction; and whether mental illnesses are best identified as discrete mental entities with clear inclusion/exclusion criteria or as points along a continuum between the normal and the ill. Philosophers critical of the concept of mental illness argue that it is not possible to give a value-neutral specification of mental illnesses. They argue that our concept of mental illness is often used to disguise the ways in which mental illness categories enforce pre-existing norms and power relations. Questions remain about the relationship between the role that values play within the concept of mental illness and how those values relate to concepts of illness more generally. Philosophers who consider themselves part of the neurodiversity movement claim that our concept of mental illness should be revised to reflect the diverse forms of cognition of which humans are capable, without stigmatizing individuals who are statistically non-normal.

There are also epistemological issues concerning the relationship between mental illness and diagnosis. Historically, the central issue has been how nosologies (or classification schemas) of mental illness, especially the Diagnostic and Statistical Manual of Mental Disorders (the DSM), relate mental dysfunctions to observable symptoms. Mental dysfunction, on the DSM system, is identified via the presence or absence of a set of symptoms from a checklist. Those critical of the use of behavioral symptoms to diagnose mental disorders argue that symptoms are useless without a theoretically adequate conception of what it means for a mental mechanism to function poorly. A minimal constraint on a diagnostic system is that it must be able to distinguish a person with a genuine mental illness from a person suffering from a problem with living. Critics argue that the DSM, as currently constituted, cannot do this.

Lastly, there are a host of questions surrounding the relationship between mental illness and normativity. If mental illness undermines rational agency, then there are questions about the degree to which the mentally ill are capable of autonomous decision-making. This bears on questions regarding the degree of moral and legal responsibility that the mentally ill can be assigned. Further questions about agency arise from bioethical debates about the standing of the demands that the mentally ill make on healthcare professionals. For example, individuals with Body Integrity Identity Disorder (BIID) request that surgeons amputate their healthy limbs in order to restore a balance between their internal self-representation and their external body image. Bioethicists are divided over whether the requests of patients with BIID are genuinely autonomous and deserving of assent.

Table of Contents

  1. Conceptions of Mental Illness
    1. Alienism and Freud
    2. DSM I – II
    3. The Bio-psycho-social Model DSM III – 5
  2. Criticisms of the Bio-psycho-social Model
    1. Mental Illness as Dysfunction
    2. Neurobiological Eliminativism
    3. The Role of Value
    4. Szasz's Myth of Mental Illness
  3. Neurodiversity
    1. Motivation
    2. Autism, Psychopathy
  4. Responsibility and Autonomy
    1. Psychopathy
    2. Body Integrity Identity Disorder and Gender Dysphoria
  5. References and Further Reading

1. Conceptions of Mental Illness

a. Alienism and Freud

Although there are many conceptions of madness found throughout the ancient world (demon possession, divine revelation or punishment, and so forth), the conception of a distinctly mental form of illness did not fully begin to crystallize, at least in the West, until the latter half of the nineteenth century, with the creation and rise of mental asylums. Individuals who were housed in asylums were thought to be psychotic or insane. Psychotic inmates were seen as distinctly different from the non-psychotic population, and this perceived difference justified the creation of special-purpose institutions for their containment. Psychotics were construed as suffering from distinct and localizable organic brain disorders and were treated by medical professionals known as alienists (Elliott 2004, 471). The nosology of the German psychiatrist Emil Kraepelin, writing at the time, divided psychoses into two types: mood disorders and dementia praecox (Kraepelin 1896a, 1896b). All other forms of distress were thought to fall outside the province of the asylum and of medical treatment.

Non-psychotic individuals who were unhappy with their lives, who felt intense anxiety, or who vacillated between periods of high and low motivation were not thought to have a psychotic problem. These individuals were not treated or seen by alienists but instead sought help from their family, friends, or clergy (Horwitz 2001, 40). Non-psychotic dysphoria (unhappiness) was, in this context, understood not as a distinctly medical problem but in a variety of other ways: as a social problem with living, as a character flaw, or simply as a different way of life. The solution to the unhappiness that many individuals suffered was found not within the asylum but in the family, religion, or other social institutions. There was, at this time, a clear distinction between medical problems resulting in psychosis and social problems that caused suffering.

Sigmund Freud grew up in the alienist tradition and received his medical degree in 1881. Freud’s theory of the mind and of mental illness would revolutionize Western understanding of psychology and would remain the dominant paradigm in the psychological sciences until the middle of the twentieth century. Where the alienists saw mental illnesses as manifestations of rather discrete brain dysfunctions, Freud would come to understand the distinction between normal persons and the mentally ill as arising from a conflict in psychological mechanisms that were a part of the normal human repertoire (Freud 1915/1977; Ghaemi 2003, 4). Where the alienist understood non-psychotic unhappiness as a problem to be solved by individuals and their support networks, Freud understood problems in living as the domain of the psychotherapist. Paul Roazen famously quotes Freud as claiming that “[t]he optimum conditions for (psychoanalysis) exist where it is not needed—that is, among the healthy” (Roazen 1992, 160).

Crucial to Freud’s reorientation of mental disorder was his view of the relationship between observable behavioral symptoms and underlying psychological disorder. Unlike Kraepelin, who understood psychotic behavioral symptoms as closely tied to specific underlying brain dysfunctions, Freud did not believe that behavioral symptoms could be tied to unique disorders. The underlying source of human psychological suffering, as Freud understood it, stemmed from universal childhood experiences that, if poorly resolved or understood, could manifest in adulthood as neurosis. Freud saw repression, for example, as a normal part of development from child to adult. An individual could, however, fail to properly apply repressive techniques, and poorly repressed trauma could then manifest itself in myriad ways: obsessive cleaning, chronic gambling, melancholia, and so forth (Freud 1915/1989; Horwitz 2001, 43). Simply noting melancholia in a patient would therefore not be enough for a psychoanalyst to understand the source of repressive dysfunction.

Because a client troubled by chronic gambling and another client troubled by hysteria could, in principle, be suffering from the same underlying repressive dysfunction, any diagnostic manual based on Freud’s conception of mental disorders would not treat symptoms as fundamentally important to the diagnostic process. Instead, Freud claimed that the only way to truly understand a patient’s underlying psychological dysfunction is to acquire detailed information about the person, including his or her dreams, in order to uncover repressed sexual urges (Freud 1905/1997).

The first two editions of the DSM were largely based on Freud’s underlying theory of repression and mental disorder. This nosology would dominate Western thinking about the mentally ill until the 1960s.

b. DSM I – II

When the first edition of the Diagnostic and Statistical Manual of Mental Disorders was published in 1952, psychodynamic theorists dominated the clinical and academic landscape. Nearly two-thirds of the chairs of psychology departments in American universities were held by psychoanalysts, and the emerging DSM strongly reflected their theoretical assumptions (Strand 2011, 277). By this point, psychiatry was seen as an extension of medical practice. This required the creation of a nosology, a catalogue of disorders for clinical practice (Graham 2010, 5).

The first edition of the DSM represented a revolutionary change in the conception and treatment of mental illness. Given the expansive notion of mental illness proposed by Freud and his students, the first two editions of the DSM conclude that many individuals who, prior to this point, were not seen as mentally ill would benefit from therapy. Because symptoms were only weakly correlated with underlying illness on the psychodynamic view, only repeated and intensive conversations with a qualified analyst could help a person get to the root cause of his problems (Horwitz 2002, 45; Grob 1991, 425). The first edition of the DSM devotes a significant proportion of its 145 pages to a classification of mental illness concepts and terms (American Psychiatric Association 1952, 73-119). Unlike future editions of the manual, illnesses are not identified in terms of a series of symptoms but instead in terms of the underlying psychological conflict responsible. For example, the manual defines Psychoneurotic Disorder as:

[T]hose disturbances in which “anxiety” is a chief characteristic, directly felt or expressed, or automatically controlled by such defenses as depression, conversion, dissociation, displacement, phobia formation, or repetitive thoughts and acts…a psychoneurotic reaction may be defined as one in which the personality, in its struggle for adjustment to internal and external stresses, utilizes the mechanisms listed above to handle the anxiety created (American Psychiatric Association 1952, 12-13).

Yet the presence of anxiety is not sufficient to diagnose psychoneurotic disorder: anxiety must result from an underlying conflict between the personality and other stressors. It is the role of the analyst, in this context, to discover whether this underlying conflict is present. This cannot be done by merely observing symptoms; only psychodynamic therapy can discover the true cause of a patient’s anxiety (Grob 1991, 423).

Dissent against this system of classification and diagnosis arose from many groups, both external and internal to the psychiatric discipline, and these criticisms solidified in the 1960s. The emerging “anti-psychiatry” movement would come to challenge the assumptions that had grounded psychiatric practice in the first half of the twentieth century. Conceptions of mental illness, the underlying assumptions behind the process of diagnosis, and even the status of psychiatry as a science were all subject to sustained critique. Several of the most vocal critics of psychiatry came from within the mental health professions themselves: the psychiatrists R.D. Laing and Thomas Szasz, and the psychologist David Rosenhan. Szasz’s critique of psychiatric practice and of conceptions of mental illness is outlined in detail in section 2d below.

Rosenhan conducted a pair of famous studies that would radically undermine the scientific legitimacy of clinical diagnosis, especially in the eyes of the public. In his initial study, Rosenhan, along with seven other volunteers, attempted to have themselves admitted to several mental health institutions (Rosenhan 1973, 179-180). Rosenhan instructed his collaborators to claim that they heard a voice which said only two words: “thud” and “hollow.” For all other questions, Rosenhan instructed his subjects to answer honestly. The words ‘thud’ and ‘hollow’ were chosen specifically because they did not correspond to any known pattern of neurosis in the DSM II. Rosenhan and all of his confederates were admitted to mental institutions; all but one were admitted under a diagnosis of schizophrenia (Rosenhan 1973, 180). Once admitted, subjects took as long as 52 days to be released, despite the fact that they did not play-act any symptoms of mental illness. Rosenhan noted that once he and his confederates had been admitted, everyday behavior began to be interpreted as a sign of their underlying mental illness. Subjects who were taking notes for later use, for example, were described as engaging in unusual “writing behavior”; subjects speaking with a psychiatrist about their childhood and family were construed as having telltale neurotic early-childhood issues (Rosenhan 1973, 183). Since these subjects were not otherwise in distress, Rosenhan claimed that the diagnostic process was not tracking an underlying ‘mental illness’ in any of the pseudopatients, and that it was instead unscientific and unfalsifiable.

Once Rosenhan publicized the results of his initial study, several institutions challenged his results by re-asserting the validity of the diagnostic process. They claimed that their institutions would not have fallen for Rosenhan’s ruse and challenged him to send pseudopatients to them for analysis. Rosenhan agreed. Despite the fact that no pseudopatients were actually sent, these institutions suspected at least 41 of their new patients (more than 20% of new patients over a three-month period) of being pseudopatients sent by Rosenhan (Rosenhan 1973, 181). Again it seemed as if the diagnostic process was incapable of accurately separating the mentally ill from the healthy. In part as a result of critiques of the diagnostic process like Rosenhan’s studies, the diagnostic model of psychiatry would be radically altered. Beginning as early as 1974, the American Psychiatric Association assigned a taskforce to prepare the next edition of the DSM. The DSM III that resulted from this process, published in 1980, represented a rejection of the psychodynamic assumptions built into the previous versions of the manual and provided a framework for all future editions of the DSM.

c. The Bio-psycho-social Model DSM III – 5

The most recent edition of the Diagnostic and Statistical Manual of Mental Disorders, the DSM 5, was published in 2013. This edition does not substantially modify the conception of mental disorder that the manual has offered since its third edition, first published in 1980. In comparison with the first edition of the DSM, the DSM 5 includes diagnostic criteria for more than 400 individual disorders. The conception of mental disorder used in the DSM 5 presents disorders as biological, psychological, or social dysfunctions in an individual; this model has, unsurprisingly, come to be called the Bio-psycho-social model. It represents the current consensus view of mental disorder among psychological researchers and clinical practitioners, although psychologists disagree about whether to understand its definition conjunctively or disjunctively (Ghaemi 2007, 8). The DSM 5 states:

A mental disorder is a syndrome characterized by clinically significant disturbance in an individual’s cognition, emotion regulation, or behavior that reflects a dysfunction in the psychological, biological, or developmental processes underlying mental functioning. Mental disorders are usually associated with significant distress or disability in social, occupational, or other important activities. An expectable or culturally approved response to a common stressor or loss, such as the death of a loved one, is not a mental disorder. Socially deviant behavior (e.g., political, religious, or sexual) and conflicts that are primarily between the individual and society are not mental disorders unless the deviance or conflict results from a dysfunction in the individual, as described above (American Psychiatric Association 2013, 20).

From this characterization we can extract five criteria that serve to distinguish a genuine mental disorder from other sorts of issues (problems in living, character flaws, and so forth). In order for a disturbance to be classified as a mental disorder it must:

  1. Be a clinically significant disturbance in cognition, emotion regulation, or behavior
  2. Reflect a dysfunction in biological, psychological, or developmental processes
  3. Usually cause distress or disability
  4. Not reflect a culturally approved response to a situation or event
  5. Not result purely from a problem between an individual and her society

All of the criteria, with the exception of the 'distress' criterion, are individually necessary and jointly sufficient for the classification of a patient's symptoms as stemming from a mental disorder. Prior to the seventh printing of the DSM II, homosexuality had been included as a mental disorder. The revisions to the text that took place between the DSM II and the DSM III were meant to make clear that homosexuality (“an interest in sexual relations or contact with members of the same sex”) does not satisfy the criteria for a mental disorder so long as it is not accompanied by clinically significant dysphoria (American Psychiatric Association 1973, 2). However, an individual who feels dysphoria as a result of their homosexuality can be diagnosed with an Unspecified Sexual Dysfunction in the DSM 5 (American Psychiatric Association 2013, 450).

The third, 'distress,' criterion is neither necessary nor sufficient to qualify a mental disturbance as a disorder. This can be seen by examining the process for the diagnosis of the 'cluster B' personality disorders (histrionic, antisocial, borderline, and narcissistic personality disorders). Subjects with cluster B disorders often do not suffer as a result of their condition. Indeed, those with Antisocial Personality Disorder, for example, may not see themselves as disordered and may even approve of their condition. This has led some individuals with personality disorders to align with the emerging Neurodiversity movement (see section 3 below). The patterns of behavior manifested by those with cluster B personality disorders are nonetheless understood as reflecting clinically significant disturbances in cognition, emotion regulation, and behavior, and they form a distinct class of mental disorders in the DSM (American Psychiatric Association 2013, 645-684). Some philosophers have argued that the cluster B personality disorders should not be understood as mental disorders at all but are better understood as distinctly moral disorders. Louis Charland argues for this conclusion: he claims that, unlike the cluster A and C personality disorders, the only treatment for the cluster B disorders is distinctly moral improvement, and because this fact about treatment uniquely distinguishes them from all other mental disorders in the DSM, Charland concludes that they reflect moral (as opposed to value-neutral) dysfunction (Charland 2004a, 67).
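
The necessity and sufficiency structure just sketched can be put schematically (the rendering is ours, with C1–C5 standing for the five criteria listed above):

A disturbance counts as a mental disorder if and only if it satisfies C1, C2, C4, and C5.

C3 (distress or disability) is typical of mental disorders but is neither necessary nor sufficient.

This is why the cluster B personality disorders can qualify despite the frequent absence of distress: they are understood to satisfy the remaining criteria.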

Since the publication of the DSM III, mental disorders have been defined as being caused by a clinically significant dysfunction of a mental mechanism. Because the definition of mental illness invokes the concept of dysfunction, it is often subject to critique (see the following section). Although the general definition of mental disorder used by the DSM invokes the concept of dysfunction, the diagnostic criteria for particular mental illnesses do not. It is instructive to provide an example of how particular disorders are defined within the manual. Anorexia Nervosa, for example, is defined by the presence of three clusters of behavioral symptoms (American Psychiatric Association 2013, 338-339):

A: Restriction of energy intake relative to requirements, leading to a significantly low body weight in the context of age, sex, developmental trajectory, and physical health.

B: Intense fear of gaining weight or of becoming fat, or persistent behavior that interferes with weight gain, even though at a significantly low weight.

C: Disturbance in the way in which one's body weight or shape is experienced, undue influence of body weight or shape on self-evaluation, or persistent lack of recognition of the seriousness of the current low body weight.

Importantly, this characterization of Anorexia Nervosa presents the disorder as a distinct, specifiable condition present in the person, one whose underlying dysfunction is uniquely picked out by the behavioral symptoms identified as A and C; “B” symptoms are seen as common but not essential to diagnosis (American Psychiatric Association 2013, 340). Given the underlying conception of mental disorder offered by the authors of the DSM, Anorexia Nervosa cannot simply be the result of a conflict between the individual and society. Nor can it result from an individual accurately internalizing social norms about beauty, appearance, or diet. It must instead result from a combination of biological, psychological, and/or social dysfunctions. However, the diagnostic criteria do not indicate what this underlying dysfunction consists in, nor do they offer any evidence that the symptoms associated with the disorder are caused by the same underlying dysfunction.

In part for reasons of this sort, both the general bio-psycho-social model of mental disorder and the uses of the model to characterize particular disorders, like Anorexia Nervosa, have been subject to repeated criticism by philosophers.

2. Criticisms of the Bio-psycho-social Model

The definition of mental disorder that stems from the bio-psycho-social model has been subject to several criticisms. Philosophical critiques of the definition of disorder have ranged from calling for revision and specification of the concept of disorder to abandonment of the concept altogether. Many of the 400+ disorders that appear in the DSM have also been criticized. In some cases, these critiques are internal: the disorders do not appear to match the criteria of mental disorder offered in the DSM itself; in other cases, as with some critics of schizophrenia, the aim is to undermine both the existence of the disorder and the conception of mental disorder that results in its inclusion (Bentall 1990).

Many members of the antipsychiatry movement described in section 1b were responsible for setting the stage for the criticisms of the bio-psycho-social model. Although in part political, this movement saw the rise of several alternative conceptualizations of human function and dysfunction that have come to challenge the DSM’s conception of a mental disorder. Chief among these were Thomas Szasz’s influential arguments that mental illness is a ‘myth’ and the rise of ‘positive psychology’ as a viable alternative psychological ideology.

a. Mental Illness as Dysfunction

Nassir Ghaemi has criticized the current conception of mental disorder as resting on an unscientific political compromise between factions of clinical and research psychologists, one designed in part to stave off the looming threat of neurobiological eliminativism (see section 2b). Ghaemi argues that many psychologists view the bio-psycho-social conception of mental illness disjunctively, focusing predominantly on their preferred method for understanding a disorder depending on their own assumptions about dysfunction (Ghaemi 2003, 10). Although this compromise presents the appearance of consensus, Ghaemi argues that it is an illusion. He advocates for a form of integrationism about mental disorder that has become popular in some circles (Ghaemi 2003, 291; Kandel 1998, 458). A true integration of biology and psychology, however, requires solving the currently unresolved question of how consciousness is realized by the brain. Because this question does not appear to be resolvable in the near term, integrationists of Ghaemi's stripe have offered a placeholder for a replacement to the bio-psycho-social model rather than a true alternative to current models.

Philosophers have also criticized the DSM conception of mental disorder for its lack of a unified theory of dysfunction. The current DSM requires that mental disorders reflect a dysfunction of biological, psychological, or social mechanisms, though the text itself is silent on what it would mean for a mechanism to be dysfunctional and does not provide any evidence that the symptoms used for clinical diagnosis of a disorder are caused by a single underlying dysfunction.

Philosophers have appealed to at least three distinct senses of dysfunction to craft a unified theory of mental disorder: etiological, propensity, and normative dysfunction. Etiological function (and dysfunction) is construed in evolutionary terms. A mechanism is functioning, in the etiological sense, if it evolved to serve a specific purpose and if it is currently serving its evolved purpose. In order to discover the function of a mental mechanism, it is necessary to discover its evolved function. Dysfunction can then be construed relative to this purpose (Wakefield 1999, 374; Boorse 1997, 12). A mechanism is dysfunctional if it is not fulfilling its evolutionary purpose. Depression, for example, may, in some cases, represent a dysfunction of a mechanism evolved for affective regulation. However, evolutionary psychological theories of mental function are still in their early stages. Furthermore, some philosophers want to allow for the possibility that many of our mental mechanisms may not have evolved to serve the functions to which we currently put them.

A propensity function is not constrained by past selective pressures but instead defines function and dysfunction based upon current and future selective success. Male aggression, for example, may have been adaptive in our ancestral environment and hence may represent a case of proper functioning on the etiological theory. On the propensity view, however, male aggression may not be adaptive for life in modern societies even if it was fitness-enhancing in our ancestral environments. Male aggression might therefore, on a propensity account of function and dysfunction, represent a dysfunctional mechanism and hence a mental disorder (Woolfolk 1999, 663). As with the evolutionary view, propensity conceptions of mental dysfunction have the advantage of appealing to descriptive evidence in order to determine whether or not a specific pattern of behavior is fitness-enhancing in its current context (Boorse 1975, 52). However, crafting a theory of function and dysfunction in terms of present-day fitness appears to allow some conditions to count as mental disorders that we may be averse to labeling mental illnesses. One major issue with appealing to propensity function is that it appears to resurrect defunct mental illnesses. Drapetomania, the 'mental illness' attributed to runaway slaves in the nineteenth century, would appear to satisfy the definition of a propensity dysfunction. Dysphoria caused by the conditions of slavery and a strong desire to abandon one's current condition are arguably not fitness-enhancing, in a strictly evolutionary sense, and therefore appear to satisfy the criteria for a propensity dysfunction (Woolfolk 1999, 664).

Purely normative accounts of dysfunction have not garnered much favor within the psychological or philosophical disciplines. On a purely normative account of dysfunction, a person is said to be mentally ill based upon whether or not her behavior fits within the context of a larger normative network. Whether we choose to call a person mentally ill or merely 'bad' may depend on whether or not we believe agents like this should be held morally responsible, and the concept of responsibility may not be reducible to non-normative elements (Edwards 2009, 78). On such conceptions, it is impossible to avoid invoking evaluative concepts when describing what a mental illness is or why a particular set of behaviors is best understood as an illness (Fulford 2001, 81).

George Graham argues for what he calls an unmediated defense of realism about mental illness; Graham's defense is unmediated in the sense that he does not believe that it must be shown that mental illnesses are natural kinds or result from brain disorders in order to qualify as legitimate classification-independent kinds (Graham 2014, 133-134). Instead, he argues that “the very idea of a mental disorder or illness is the notion of a type of impairment or incapacity in the rational or reasons-responsive operation of one or more basic psychological faculties or capacities in persons” (Graham 2014, 135-136; see also Graham 2013a and 2013b). These capacities could be described or analyzed at various levels of implementation, according to Graham, though their malfunction is understood in normative terms.

Perhaps the most influential theory of dysfunction within the philosophical literature is offered by Jerome Wakefield. Wakefield’s conception of mental disorder attempts to bridge the gap between purely objective conceptions of disorder and subjective or normative views. On Wakefield’s view, a mental disorder arises only when a ‘harmful dysfunction’ is present. This combines two different types of concepts: a concept of dysfunction and a concept of harm. Wakefield’s conception of dysfunction is etiological. A mechanism is dysfunctional if it fails to perform the purpose that it evolved to perform. Etiological function is objective in the sense that etiological functions are pan-cultural: they are not dependent on cultural conceptions of function or value. They are, instead, a set of universally shared facts about human nature. The ‘harmfulness’ criterion, on the other hand, is sensitive to cultural context (Wakefield 1992, 381; Wakefield 1999, 380). As Wakefield understands it, a person is harmed by a disorder if the disorder causes a “deprivation of benefit to a person as judged by the standards of the person’s culture” (Wakefield 1992, 384). In order to be diagnosed with a mental illness, it must be true that an agent’s behavior is caused by a malfunction of an evolved mental mechanism and, furthermore, it must also be true that this dysfunction, in the context of that individual’s culture, deprives her of a benefit.

Wakefield, and others like him, argue that it is crucial to distinguish between mental disorders and other sources of distress (Horwitz 1999). The crucial factor in determining proper treatment for a person’s dysphoria, these philosophers argue, is a proper identification of the cause of his or her distress. Mental disorders are caused by harmful mental dysfunctions. Other sources of distress are better understood as problems in living. Many types of unhappiness that are typically diagnosed as depression, on this view, are better understood not as stemming from depression but instead through an examination of the larger social factors that may be causing unhappiness. Because the DSM’s conception of mental disorder is cause-insensitive and identifies depression only via symptoms, it fails to distinguish between these two forms of unhappiness. The danger, these philosophers argue, is that mental disorders are construed as problems that reside within an agent, so that treatments are focused only on symptom relief, usually pharmaceutically aided. If distress has an underlying social cause, if it is a problem in living, then its treatment should have a radically different focus. For example, the symptoms described by Betty Friedan as caused by “the problem that has no name” fit relatively easily within the rubric of depression (Friedan 1963, 17). However, Wakefieldian views would resist this diagnosis. The underlying cause of the distress Friedan describes is social, and the best treatment of this form of distress is social change. Sadness that is caused by patriarchal or misogynist cultures does not represent a malfunction in a person’s evolved mechanisms (it may represent just the opposite). On the DSM model, treatment may merely mask these depressive symptoms pharmacologically and would only serve to maintain the unjust social situations that give rise to them. “The problem that has no name” is best understood as a problem in living stemming from misogynist assumptions about the roles available to women in a culture. Wakefield's view is realist in the sense that its conception of mental dysfunction is independent of our acts of classification (Graham 2014, 125). Because function is grounded on etiology, there is a culturally independent fact of the matter regarding the presence or absence of a dysfunction in a person.

Wakefield’s harmfulness criterion allows for different cultures to come to different conclusions about which evolutionary dysfunctions rightfully count as mental disorders. On Wakefield’s view, homosexuality may represent a genuine evolutionary dysfunction (in the sense that exclusive homosexual behavior threatens the propagation of genes into future generations), but homosexuality is not harmful in a contemporary, broadly Western cultural context. Because it is not harmful in this cultural context, it is a mistake to think of homosexuality as a disorder. This leaves open the possibility that the harmfulness criterion would allow homosexuality to count as a legitimate mental disorder in other cultural contexts.

Other critics have assailed Wakefield’s appeal to etiological dysfunction. Aside from the general epistemological problem of identifying the evolutionary function of psychological mechanisms, two problems arise with an appeal to etiological dysfunction. First, some have argued that depression is an evolved response and hence could not be construed as a mental disorder on Wakefield’s view (Bentall 1992, 96; Woolfolk 1999, 660). Second, some have argued that many of our mental mechanisms may not have arisen as a result of evolutionary selection pressures. They may be evolutionary “spandrels” in Stephen Gould’s sense. The white color of bones, for example, necessarily results from the composition of bone but is not itself a property that was explicitly selected for in an evolutionary sense. A spandrel cannot dysfunction in Wakefield’s terms because it lacks an evolutionary cause for its existence. Although spandrels can confer adaptive advantages, they are importantly not themselves traits that are selected for. If any of our mental mechanisms are spandrels, then Wakefield’s view cannot explain disorders arising from their use (Gould and Lewontin 1979, 581; Woolfolk 1999, 664; Zachar 2014, 120). Famously, some philosophers have argued that complex human abilities, like our capacity for language, may themselves be evolutionary spandrels (Chomsky 1988; Lilienfeld and Marino 1995, 413). Furthermore, recent critics have suggested that too much of the recent work on mental illness has focused exclusively on elucidating the concept of illness or dysfunction and has neglected to consider how advances within the philosophy of mind and the cognitive sciences might change our conception of the ‘mental’ component of mental illness (Brülde and Radovic 2006, 99).

Philosophers who are critical of attempts to define a distinctly mental conception of disorder have been motivated, in part by the arguments above, to move in two different directions. Some have proposed that we replace the concept of mental disorder with a strictly neurological conception of dysfunction. Doing so, they argue, would place disorders on a clearer and more scientific footing.

b. Neurobiological Eliminativism

The transition from the DSM II to DSM III brought with it the adoption of the biomedical model for diagnosis. Unlike the psychodynamic model, which saw symptoms as providing little insight into the underlying cause of distress, the biomedical model afforded symptoms pride of place in diagnosis. For much of the 20th century, the biomedical model of diagnosis understood the symptoms that a patient brought to her clinician as providing insight into the underlying disorder(s) that caused the patient to consult the clinician in the first place.

Psychology, as a therapeutic discipline, adopted this model of diagnosis and, in the process, began to categorize patient symptoms into discrete groupings, each caused by a specific mental disorder. However, some philosophers have noted that the biomedical model itself has changed rapidly in the 21st century and that this has created a dilemma for clinical psychological models of diagnosis. Patient reports, in current biomedical models of diagnosis, have lost their pride of place as the key markers for diagnosis. In their place, clinicians turn to laboratory test results to determine the true illness responsible for a patient’s suffering. One motivation for this change, within general clinical practice, is that symptoms underdetermine diagnosis. Adopting this new biomedical model for mental illnesses, however, has been seen by some as presenting an eliminativist threat to mental disorders (Broome and Bortolotti 2009, 27).

Eliminative materialism arose in the 20th century as a challenge to views about the mind that assign mental states explanatory/causal roles. The views targeted by the eliminativist were grounded in common-sense or “folk” ideas about everyday mental states like beliefs and desires. These views situated mental states as entities belonging to proper scientific explanation. Eliminativists argued that folk psychological theories of the mind would fare no better than our folk biological or physical theories and that the folk mental states should be eliminated from scientific explanations (Churchland 1981). Mature cognitive and neuro-sciences do not need to make reference to folk psychological states like beliefs and desires in order to explain human behavior; furthermore, the neural architecture of the brain itself does not appear to house the discrete, localizable states like beliefs and desires that are assumed by folk psychology (Ramsey, Stich and Garon 1990). Folk psychological theories tell us that the best explanation of human behavior (including mental illness) should be given in terms of dysfunctional mental states (delusions, compulsive desires, etc.). The eliminativist, on the other hand, undermines this view by claiming that nothing in the brain corresponds to these folk-psychological states and that we are better off without appealing to them.

Eliminative materialism has arisen as a challenge to the DSM construal of mental disorders in the form of cognitive neuropsychology: “This process may start as a process of reduction (from the disorder behaviorally defined to its neurobiological bases), but in the end psychiatry as we know it will not just be given solid scientific foundations by being reduced to neurobiology; it will disappear altogether” (Broome and Bortolotti 2009, 27). Just as biomedical diagnosis has shifted away from patient report toward more direct assessments using bio-physiological metrics, the eliminativist argues that the same process should occur with mental disorders. Neurological dysfunction should supplant folk psychological discussions of mental dysfunction. In much the same way as Alzheimer’s disease is understood as a neurological brain disorder, the eliminativist claims that a mature cognitive neuroscience will replace contemporary classifications of mental disorders with neurological dysfunction (Roberson and Mucke 2006, 781).

Philosophers who resist the eliminativist reduction of the mental to the neurological argue that at least some types of mental disorders cannot be understood without appealing to mental states. Plausible candidates for this type of disorder include delusions (Broome and Bortolotti 2009, 30), personality disorders (Charland 2004a, 70), and various sexual disorders (Soble 2004, 56; Goldman 2002, 40). Personality disorders, especially those falling under the category of ‘Cluster-B’ disorders, appear to require that individuals have acquired bad characters in order to explain why the behavior stemming from the illness is disordered. If normative competence necessarily makes reference to belief-forming mechanisms (having knowledge about moral concepts, recognition of the agency of other persons, etc.), then Cluster-B personality disorders cannot be fully reduced to their neurobiological underpinnings without a meaningful loss of the disordered element of the disorder (Pickard 2011, 182).

On a related note, philosophers have attempted to resist purely mechanistic neuro-scientific explanations of psychology. Jeffrey Poland and Barbara Von Eckardt argue that the DSM's bio-psycho-social model relies on a mechanistic model of mental illness but that purely mechanistic models fail to explain the representational aspects of a mental illness; in their words, “[a]ny such account will extend well beyond what one would naturally assume to be the mechanism of (or the breakdown of the mechanism of) the cognition or behavior in question” (Von Eckardt and Poland 2004, 982). Peter Zachar argues for a view he calls the Imperfect Community Model. This model is based on a rejection of essentialism grounded in pragmatism; Zachar argues that mental illnesses are united as a class despite lacking any necessary and sufficient conditions to define them. Mental disorders nonetheless bear a prototypical or family resemblance to one another that suggests a rough unity to the concept (Zachar 2014, 121-8).

c. The Role of Value

Related questions arise about the nature and role of value in mental illness. The first has to do with whether mental illness is a value-neutral concept. Nosologies of mental illness attempt to create value-neutral definitions of the disorders they contain. In the ideal, the concepts picked out by manuals like the DSM are supposed to reflect an underlying universal human reality. The mental disorders contained therein are, with only minor exception, not meant to project culturally relative normative value judgments onto the domain of the mental.

The DSM includes a “cultural formulation” section meant to distinguish culturally specific, explicitly normative disorders from the supposed pan-cultural, value-neutral disorders that make up the bulk of the manual (American Psychiatric Association 2013, 749). In part this approach stems from the idea that psychologists adhering to the bio-psycho-social model of mental disorders view their project as being on par with nosologies of non-mental disorders. There are two questions worth raising here. The first is whether or not this “likeness argument” has any merit; the second is whether or not the biomedical illness concept is, itself, value-neutral (Pickering 2003, 244). A heart attack, for example, is a disorder, on this model, no matter the time or location of the infarction. Heart attacks are, in this sense, natural kinds and proper objects for scientific study. A heart attack represents a particular form of cardiovascular dysfunction that is agnostic about the cultural or moral values of a particular community. Despite the fact that heart attacks may not present the same symptoms across different sufferers (some may grab their left arms, some may scream, some may fall to the ground, etc.), what unites these heterogeneous-seeming symptoms is an underlying causal story that explains them (Boyd 1991, 127). Mental disorders are thought to operate on the same principle, though the view that psychological symptoms are united by a common cause may result from pre-theoretical assumptions about mental states (Murphy 2014, 111-121). Critics of the bio-psycho-social model argue that values are an essential component of the concept of mental illness. If values are an ineliminable part of the concept of mental illness, we should be led to ask what kinds of values are invoked by the concept.

Michel Foucault was an early critic of mental illness and mental health institutions. In his Madness and Civilization: A History of Insanity in the Age of Reason, Foucault argued that asylums, being institutions where ‘the mad’ were separated from the rest of society, emerged historically through the application of models of rationality that privileged individuals already in power. This model served to exclude many members of society from the circle of rational agency. Asylums functioned as a place for society to house these undesirable persons and to reinforce pre-existing power relations; cures, when available, represented conformity to existing power structures (Foucault 1961/1988). Foucault’s critique of mental disorder inspired a generation of psychologists, many of whom see themselves as part of a new counter-movement from within the discipline: the Positive Psychology movement. The constructivist and value-laden interpretation of the DSM’s bio-psycho-social model of mental disorder has led some within this movement to call for the abandonment of the model. There is an intrinsic problem, they argue, with viewing individuals as, primarily, vehicles of dysfunction. Those within the positive psychology movement argue that a new, openly value-laden, conception of human beings should supplant the manual: “[t]he illness ideology's conception of “mental disorder” and the various specific DSM categories of mental disorders are not reflections and mappings of psychological facts about people. Instead, they are social artifacts that serve the same sociocultural goals as our constructions of race, gender, social class, and sexual orientation—that of maintaining and expanding the power of certain individuals and institutions and maintaining social order as defined by those in power” (Maddux 2001, 15).

Hybrid views, like those of Jerome Wakefield, which attempt to delineate a value-neutral and a value-laden component to the concept of mental illness, have also been subject to criticism for the role they assign to value. Richard Bentall, for example, has argued that the supposedly objective components of these theories contain value-laden assumptions. Bentall argues that happiness satisfies the objective criteria for mental dysfunction (happiness is a rare mental state, it impairs judgment and decision making, and its neural correlates are at least partially well-understood); however, happiness is not viewed as a dysfunction (and consequently is not categorized as a mental illness) because we value the state for its own sake (Bentall 1999, 97). This view is echoed by constructivists about mental illness.

Constructivists about mental illness can hold a variety of positions about where the concept of social construction operates with regard to mental illness. At the least radical level, constructivists can hold that cultures impose models of ideal agency that are used to label sets of human behaviors as instances of ordered and disordered agency; behavioral syndromes, on this view, can be more or less pan-cultural, though each culture develops a theory of ideal agency that renders some of these syndromes ‘illnesses’ while other cultures may group the syndromes differently according to different values (Sam and Moreira 2012). A more thorough-going constructivism understands these packages or syndromes of behavior as themselves objects of construction; for example, the set of behaviors currently associated with depression would not be seen as a natural (categorization-independent) grouping of properties. Instead, the set of behaviors we call 'depressive' exist only because they have been grouped together by clinicians (for any number of reasons) (Church 2001, 396-397). This form of constructivism claims that the only way to explain why a set of behaviors, feelings, thoughts, and so forth, are grouped into a syndrome is that clinicians have created this grouping. Unlike the set of behaviors characteristic of a heart attack, for which we have a readily available causal story that unifies them, mental illnesses lack a clinician-independent explanation for their grouping. On this view, syndromes are akin to what Ian Hacking has called “interactive kinds” (Hacking 1995, Hacking 1999). For Hacking, while natural kinds represent judgment-independent groupings in the world, an interactive kind is one that, “when known, by people or those around them, and put to work in institutions, change[s] the ways in which individuals experience themselves—and may even lead people to evolve their feelings and behaviors in part because they are so classified” (Hacking 1999, 103). To think of mental illnesses, like multiple personality disorder (now Dissociative Identity Disorder), as interactive kinds is to say that multiple personality disorder is not a basic fact about human neurology discoverable by the neuroscientist; instead, once the concept of multiple personality disorder is identified, once a set of behaviors has come to be seen as a manifestation of the condition and clinicians have been trained to identify and treat it, then individuals will begin to understand themselves in terms of the new concept and behave accordingly. Some have argued that many paraphilias and personality disorders are best understood on the interactive kind model (Soble 2004, 60; Charland 2004a, 70).

Critics note, however, that the distinction between natural kinds and socially constructed kinds does not exhaust the alternatives. According to Nick Haslam, this distinction is tacitly invoked by realists about mental illness; it masks several possible alternative accounts of mental illness that allow for intermediate, less essentialist, even pluralist views (Haslam 2014, 13-20; see also Murphy 2014, 109).

d. Szasz's Myth of Mental Illness

Perhaps the best-known critic of mental illness to arise out of the anti-psychiatry movement of the 1960s is Thomas Szasz. He published The Myth of Mental Illness in 1961, initiating a wide-ranging discussion of how best to understand the concept of a mental illness and its relation to physical illnesses. Szasz’s work was (and continues to be) the subject of significant discussion and debate. Szasz’s main claim is that the psychiatric field, and its concomitant conception of a mental illness, rests “on a serious, albeit simple, error: it rests on mistaking or confusing what is real with what is simulation; literal meaning with metaphorical meaning; medicine with morals...mental illness is a metaphorical disease” (Szasz 1974/1962, x). Mental illness should be understood as a metaphorical disease, according to Szasz, because it results from clinicians making a kind of category mistake. It involves taking concepts derived from one disciplinary body, medicine and the natural sciences, and applying them to a realm where they do not rightfully apply: human agency (Cresswell, 24).

According to Szasz, the proper world-view of the natural sciences is to construe their objects of study as law-like and deterministic. All knowledge in this domain is thought to be reducible to, and explainable in terms of, the physical. Medicine, being a branch of science, understands medical illness on this model. A malfunctioning heart-valve has a characteristic physical discontinuity with a functional one; it has typical effects on the function of the valve, and these effects are identifiable independently of patient symptoms. The treatment of medical illnesses relies on a thoroughly physicalist picture of the workings of the human body. Szasz believed that importing the concept of a physical illness into the realm of mental illness is fundamentally incompatible with our concept of human agency. This results from two lines of argument. The first is that mental illnesses, unlike physical ones, are not typically reducible to biophysical causes (Szasz 1979, 22). If biological dysfunction cannot be used as a basis for delimiting mental illness, then the only option left is to appeal to non-normative behavior. Szasz’s second concern is similar to the worries of neurobiological eliminativism mentioned in section 2b. Szasz argues that the eliminativist’s picture of human agency is, at best, incomplete. The root of the problem stems from the fact that Szasz believes that we must view agents as necessarily free, capable of choice, and responsible; “in behavioral science the logic of physicalism is patently false: it neglects the differences between persons and things and the effects of language on each” (Szasz 1974, 187). Szasz’s argument here is sometimes construed as an appeal to dualism: the physical world is deterministic, but the mental world must necessarily be free. Because the bio-psycho-social model uses concepts derived from the natural sciences in a realm where they do not rightfully apply (that is, human agency), mental illness is a myth resulting from this category mistake. To say that mental illness is a myth, however, is not meant as a denigration of individuals who suffer. It is, instead, meant to more accurately categorize their suffering as resulting from a failure to conform to social, legal, or ethical norms (Pickard 2009, 85).

Szasz’s critics have responded along several lines. Some do not take issue with his underlying understanding of the illness concept but disagree with his claim that it is not applicable to mental phenomena. Mental illnesses, according to these critics, have been (or will soon be) reducible to neurological or neurochemical dysfunction. They argue that advances in neuroscience give us reason for thinking that the prospect of finding the neurological or neurochemical correlates for at least some of our mental illness categories is high (Bentall 2004, 307). Other critics have argued in the opposite direction, attacking Szasz’s construal of physical illness. Szasz’s arguments have been taken, by some, to imply that physical illness is itself a deeply evaluative category, reflective of value judgments in much the same way mental illness is meant to be on Szasz’s account (Fulford 2004; Kendell 2004). Still others have aimed to preserve Szasz’s primary claim that the overarching category of ‘mental illness’ will prove to be a non-natural interactive kind, reflective of our values and practices, while simultaneously maintaining that “particular kinds of mental illnesses may yet constitute valid scientific kinds” (Pickard 2009, 88).

3. Neurodiversity

Human cognitive and physical functions range widely across the species. Although most individuals fall within a statistically normal range in terms of their abilities in all of these arenas, statistical normalcy has long been criticized as a normative marker (Daniels 2007, 37-46). Advocates for what has come to be known as the ‘neurodiversity movement,’ drawing in part on the criticisms of psychiatry and the DSM that began in the 1960s, have pushed for widespread acceptance of the forms of cognition beyond the “neuro-normal” with which individuals operate (Herrera 2013, 11). Members of the neurodiversity movement understand it as “associated with the struggle for the civil rights of all those diagnosed with neurological or neurodevelopmental disorders” (Fenton and Krahn 2007, 1). Forms of cognition currently seen as dysfunctional, ill, or disordered are better understood as representing diverse ways of seeing and understanding the space of reasons. Proponents of neurodiversity claim that agents on the autism spectrum, those with personality disorders, attention deficit and hyperactivity disorder, dyslexia, and perhaps even those with psychopathic traits should not suffer from the stigma associated with the illness label. Individuals to whom these labels apply often demonstrate profound capabilities (artistic, mathematic, and scientific) that are inseparable from the condition underlying their illness-label (Glannon 2007, 3; Ghaemi 2011). Pluralism about forms of human agency should be encouraged once we fully understand the problematic ways in which norms have come to influence illness categories.

a. Motivation

Applying the label “mentally ill” or “disordered” can have long-term negative effects not only by affecting how individuals to whom we apply the label view themselves (Charland 2004b, 338-340; Rosenhan 1973, 256) but also by affecting how others view and treat them (Didlake and Fordham 2013, 101). Often, the decision to create a new class of mental illness is made without consulting the groups involved. Homosexuality, for example, had been labeled a mental disorder in the first two editions of the DSM until social and political movements, largely led by homosexuals themselves, caused the American Psychiatric Association to re-assess its stance (Bayer and Spitzer 1982, 32). The effects that being labeled mentally ill or disordered have on persons are wide-ranging and durable enough to warrant caution. Those in the neurodiversity movement argue, from various perspectives, that clinicians continue to mistake diverse forms of cognition (variations from the neuro-normal) for mental illness because of the assumption, which advocates argue is mistaken, that deviation from statistically normal neural development and function constitutes disorder. Advocates for neurodiversity typically argue along two lines. The first is to argue that our current concepts of mental dysfunction are in need of revision because they contain one or more of the problems described in section 2 of this entry. This line of argument focuses especially on issues over the role of power and value in the construction of mental illness categories. The second line of argument is “firmly grounded in motivations of an egalitarian nature that seek to re-weight the interests of minorities so that they receive just consideration with the analogous interests of those currently privileged by extant social institutions” (Fenton and Krahn 2007, 1). Any resulting account of neurodiversity must aim to preserve useful categories of illness or mental disorder (if only for the purposes of treatment).

Perhaps the most forceful arguments from the neurodiversity perspective target the status of autism as a form of mental disorder. Much controversy has followed the APA’s decision to fold the diagnosis of Asperger’s syndrome into the more general category of Autism Spectrum Disorder.

b. Autism and Psychopathy

Autism Spectrum Disorder is the diagnosis applied to a wide range of individuals who have demonstrated persistent difficulty with social understanding and communication and whose symptoms emerge quite early in development. For example, the DSM-5 lists “[i]mpairment of the ability to change communication to match context or the needs of the listener,” “[d]ifficulties following rules for conversation and storytelling,” and “[d]ifficulties understanding what is not explicitly stated (e.g., making inferences) and nonliteral or ambiguous meanings of language” as diagnostic for ASD (American Psychiatric Association 2013, 50-51). Advocates for neurodiversity argue that it is unjust to attempt to force those with ASD to modify their behavior in order to more closely match neurotypical behavior, especially as a form of treatment for a disease or disorder. For example, efforts to “change the diets of people with ASD, force them to inhale oxytocin, and expose children to countless hours of floor time or social stories to try to make persons with ASD more like neurotypicals” impose a narrow conception of proper functioning as a form of treatment. Furthermore, arguments for treatments whose aim is to reduce ASD symptoms, some argue, resemble arguments made by those wishing to eradicate other minority cultures defined by functioning (for example, deaf communities) (Barnbaum 2013, 134). Some individuals with ASD argue that they constitute their own unique culture that deserves respect (Glannon 2007, 2). Advocates for neurodiversity argue that conceptions of mental illness that include ASD assume that deviation from neurotypical function is evidence of mental dysfunction rather than a sign of the forms of neurodiversity present in any human population. Autistic flourishing must be understood as being different from (though not a degenerate form of) neurotypical flourishing. Equally important within the call to neurodiversity is the project to identify and articulate the ways that social institutions are built around, and advantage, persons of “neurotypical” function over others (Nadesan 2005, 30). Given the proper account of functional agency, many individuals with ASD should be seen as functional and not disordered or mentally ill. Although not as common, similar arguments are sometimes advanced for other mental disorders, including psychopathy.

Psychopathy is a controversial construct. As currently understood, it is a spectrum disorder and is diagnosed using the revised version of what is known as the “Psychopathy Checklist” (PCL-R). Importantly, psychopathy does not appear in any version of the DSM as a distinct disorder. In its place, the DSM offers Antisocial Personality Disorder (ASPD). ASPD is intended as an equivalent diagnosis, though there is significant evidence that ASPD and psychopathy are distinct (Gurley 2009, 289; Ramirez 2013, 221-223). Psychopathy, discussed in more detail in section 4a, is characterized by an inability to feel empathic distress (to find the suffering of others painful) along with a pronounced difficulty in understanding the differences between norms that are purely conventional and other types of norms (Dolan and Fullam 2010, 995). Beyond these symptoms, however, psychopathy is characterizable as a distinct form of agency that raises questions about neurodiversity. Some psychopaths are ‘successful’ in the sense that they avoid incarceration while satisfying PCL-R diagnostic criteria. Psychopaths of this sort are much more likely to be found in corporate and other institutional settings (academia and the legal, medical, or corporate professions) (Babiak 2010, 174). In these contexts, some have argued that psychopathic personality traits should be seen as virtues (Anton 2013, 123-125). A more contextual understanding of psychopathy as a distinct way of relating to reasons, persons, and situations may lead us to appreciate the distinct contributions persons with these traits can make. Psychopathy, especially in its effects on emotional and moral competence, has raised challenges to traditional theories of moral responsibility.

4. Responsibility and Autonomy

Accounts of mental illness are closely tied to accounts of agency and responsibility. It is not unusual, following an especially horrific crime, for public discourse to include questions about a suspect’s mental health history and whether a suspect’s alleged mental illness should excuse them from responsibility. Eric Harris, one of the teens responsible for the Columbine High School massacre, was called a psychopath by psychologist Robert Hare (Cullen); media commentators noted that Adam Lanza, the man responsible for killing 26 people at Sandy Hook Elementary School in Connecticut, had been diagnosed with autism and raised questions about the role this may have played (Lysiak and Hutchinson). One reason why discussions like these happen so quickly after a crime likely has to do with the effects that mental illness is thought to have on responsibility. One view on the matter states that “[t]o diagnose someone as mentally ill is to declare that the person is entitled to adopt the sick role and that we should respond as though the person is a passive victim of the condition. Thus, the distinguishing features of dysfunction that we should look for are not a universally consistent set of exclusive qualities, but things that provide the grounds for the normative claim made by applying the label ‘mental illness’” (Edwards 2009, 80). A more careful analysis of the relationship between mental illness and theories of moral responsibility indicates that several factors are often thought to matter when it comes to holding a person with a mental illness responsible for what he or she has done.

a. Psychopathy

Philosophical theories of moral responsibility often make a distinction between two different aspects of responsibility: attributability and accountability (Watson 1996, 228). Attributability refers to all of the capacities that someone must have in order to be responsible. One minimal condition may be that an action is attributable to a person if it stems from her agency in the right sort of way. Accidental muscle spasms, for example, are not typically attributable to an agent.

If we are dealing with an agent who has satisfied these attributability conditions, we can ask further questions about how we should treat this person after she has acted. This is a question about accountability. Some philosophers have claimed that there are many different forms of accountability, each requiring its own justification (Fischer and Tognazzini 2012, 390). It is one thing to make sure that I intentionally made the rude comment at dinner; it is another to decide what should be done to me as a result. The former is a question about attributability, the latter a question about accountability.

Emotional capacities form an important component of many theories of moral responsibility (Fischer and Ravizza 1999; Strawson 1962; Wallace 1994; Brink and Nelkin 2013). Reactive attitude theories give moral emotions a central location within a conception of attributability and accountability. The term 'reactive attitude' was originally coined by Peter Strawson as a way to refer to the emotional responses that operate in the context of responding (that is, reacting) to what people do (Strawson, 1962). Resentment, indignation, disgust, guilt, hatred, love, and shame (and potentially many others) are reactive attitudes. For Strawson, and philosophers who have followed him, to respond to a person's action with one of these reactive attitudes is to simultaneously hold him accountable. A theory of moral attributability could be derived, in principle, via an examination of the conditions under which we believe it to be appropriate to respond to someone with a reactive attitude.

Reactive attitudes focus on the quality of their target's will. What this means is that our reactive emotions are sensitive to facts about an agent's intentions, desires, her receptivity to reason, and so forth. Philosophers refer to this as the Quality of Will Thesis. Reactive attitude theorists explain excuses and exemptions from responsibility by analyzing how an agent’s will affects our attitudes. Legitimate excuses, for example, lead us to believe that we should extinguish our reactive response to a person. Excuses, in effect, show us that we were wrong about the quality of a target's will (Wallace 1994, 136-147). If you push me and I fall, I might resent you; however, if I realize that you pushed me in order to save me from oncoming traffic, my attitude will be modified. My resentment will have been extinguished and the pushing excused. Excuses inform us that we were mistaken about what action was actually done. They are singular events: they do not cast doubt on a person's agency (her attributability) but instead show us that we were wrong about the intention or purpose we attributed to her. Agents who appear to be universally excused are more traditionally said to be exempt from responsibility.

An exemption occurs when we are led to question whether a person meets our attributability requirements. Imagine again that I am knocked over, except this time I learn that the person who pushed me suffers from significant and persistent psychotic delusions. She believed, in that moment, that I was a member of the reptilian illuminati and that pushing me would get the grey aliens to repossess her hated neighbor's house. Unlike a case involving excuse, a person whose agency is hampered by delusions as severe as these is not a proper target for our reactive attitudes at all (Strawson 1962; Broome and Bortolotti 2009, 30). Agency as abnormal as this is better seen as exempt from judgments of attributability or accountability. Exempt agents are not true sources of their actions because they lack the ability to regulate their behavior in an intelligibly rational way (Wallace 1994, 166-180). It would not be appropriate to resent these agents.

The logic of excuses and exemptions has been thought to show that responsible agency requires epistemic access to moral reasons along with the ability to understand how these reasons fit together (Fischer and Ravizza 1997). Furthermore, some have proposed that an agent must have the opportunity to avoid wrongdoing (Shoemaker 2011, 6). Psychopaths seem to be rational and mentally ill at the same time; because of these features, they create difficulty for many theories of responsibility.

Perhaps the most notable diagnostic feature shared by psychopaths is an inability to feel empathic distress. You feel empathic distress when you are pained by the perception of others in pain. The processes that ground empathic distress are not thought to be under conscious control. Psychopaths do not respond as most people do when exposed to signs of others in pain (Patrick, Bradley and Lang 1993). Although the degree to which someone can have the capacity for empathic distress varies, psychopaths are significantly different from non-psychopaths (Flor et al. 2002).

Furthermore, psychopaths have significant difficulty distinguishing between different types of norms. Psychologists have noted that most people are readily able to distinguish violations of moral norms from violations of conventional norms (Dolan and Fullam 2010). Normal persons tend to characterize moral norms as serious, harm-based, not dependent on authority, and generalizable beyond their present context; conventional norms are characterized as dependent on authority and contextual (Turiel 1977). Children begin to mark the distinction between moral and conventional norms at around two years of age (Turiel 1977). Psychopaths, on the other hand, fail to consistently or clearly note the differences between them. Most psychopaths tend to treat all norms as norms of convention. Non-psychopaths note a difference between punching someone (a paradigmatic moral norm violation) and failing to respond in the third person to a formal invitation (a violation of a conventional norm). Although there is significant controversy about how much we can infer from the psychopath's inability to mark the moral/conventional distinction, the inability, along with the previously noted empathic deficit, has led some philosophers to argue that psychopaths cause problems for traditional theories of moral responsibility (Turiel 1977).

Reactive attitude theorists have argued that psychopaths should be exempt or excused from moral responsibility on both epistemic and fairness grounds. Given their difficulty distinguishing between moral and conventional norms, many reactive attitude theorists conclude that psychopaths are not properly sensitive to moral reasons and cannot be fairly held accountable (Fischer and Ravizza 1998; Wallace 1994; Russell 2004). It would be unfair to hold someone morally responsible if they cannot understand moral reasons; it is therefore inappropriate to direct reactive attitudes at psychopaths (Fischer and Ravizza 1998, 78-79). However, some have argued that psychopathic agency can ground accountability ascriptions.

David Shoemaker, for example, has argued that: “[a]s long as [the psychopath] has sufficient cognitive development to come to an abstract understanding of what the laws are and what the penalties are for violating them, it seems clear that he could arrive at the conclusion that [criminal] actions are not worth pursuing for purely prudential reasons, say. And with this capacity in place, he is eligible for criminal responsibility” (Shoemaker 2011, 119). Although Shoemaker's claim about legal responsibility has struck many as correct, the larger debate is over whether psychopaths are morally responsible for their choices given what we know about psychopathic agency.

If moral responsibility requires the capacity to understand moral reasons as distinctly moral and if, as many philosophers have supposed, this capacity is grounded on the ability to empathize with others, then psychopaths cannot understand moral reasons and should be excused. This puts pressure on Shoemaker’s characterization of psychopathic responsibility. If a psychopath’s understanding of moral reasons can be gauged by, for example, his poor ability to distinguish moral norms from conventional norms, then this also appears to be evidence for his lack of receptivity to moral reasons. Some philosophers have excused psychopaths for just this reason: “[c]ertain psychopaths...are not capable of recognizing...that there are moral reasons...this sort of individual is not appropriately receptive to reasons, on our account, and thus is not a morally responsible agent” (Fischer and Ravizza 1998, 79). Others, like Patricia Greenspan, have argued that psychopaths do have a form of moral disability, stemming from their emotional impairments, but that this form of disability should serve to mitigate, not extinguish, their responsibility (Greenspan 2003, 437).

Some philosophers note the consequences of the psychopath's impaired moral receptivity for the quality of will thesis. If reactive attitudes are sensitive to the quality of an agent's will, then psychopaths cannot express immoral wills, since they do not understand morality. If psychopaths cannot act on a will that merits reactive accountability, then they lack attributability altogether. Jay Wallace has argued that “[w]hat makes it appropriate to exempt the psychopath from accountability...is the fact that psychopathy...disables an agent's capacities for reflective self control” (Wallace 1994, 178).

Others argue that psychopaths may be held accountable by appealing to non-moral reactive attitudes like hatred, disgust, or contempt. These attitudes, they claim, can be targeted at the quality of a psychopath’s will even if it is granted that psychopaths cannot act on immoral wills (Talbert 2012, 100). This is true even if the psychopath cannot appreciate that we have moral reasons for caring about our status as agents. Insofar as the psychopath can make judgments like these, then, in the words of Patricia Greenspan, “[the psychopath] is a fair target of resentment for any harm attributable to his intention to the extent that the reaction is appropriate to his nature and deeds. He need not be ‘ultimately’ responsible in the sense that implies freedom to escape blame” (Greenspan 2003, 427). Because psychopaths are incapable of understanding moral reasons, it is unfair to hold them morally responsible; but there are forms of accountability and reactive address outside the moral sphere that may remain appropriate to direct at them.

Shame, in particular, appears to be a normatively significant reactive attitude to which psychopaths have access (Ramirez 2013, 232). Shame grounds a family of retributive forms of accountability and has been thought to serve as another way to hold psychopaths accountable even if it can be established that psychopaths are not capable of feeling or understanding moral reactive attitudes. If psychopaths are susceptible to shame, then they can be fairly held accountable on shame-based grounds.

It is fair to hold psychopaths accountable in these non-moral (shame-based) ways if they are able to feel the emotion being levied against them and can express a quality of will to which these attitudes are sensitive. More importantly, although psychopaths do not understand the distinctiveness and weight of moral reasons, their judgments can still express condemnable attitudes about those reasons. Greenspan notes that all of us have “blind spots” about certain narrow classes of reasons, and we stand to those reasons in the same relation that psychopaths stand to moral reasons; these blind spots don't excuse us from accountability (Greenspan 2003, 435).

b. Body Integrity Identity Disorder and Gender Dysphoria

Conceptions of mental illness, and mentally impaired agency, factor prominently in questions regarding the best way to treat a disorder. In 1997, Robert Smith, a surgeon at the Falkirk and District Royal Infirmary in Scotland, amputated one of his patient’s limbs at the patient's request. The limb itself was healthy. There did not exist any medical justification for the amputation. In 1999, Smith amputated another patient’s healthy limb, again at the request of the patient, and was scheduled to perform a third amputation (on a third patient) before the hospital’s board of directors forbade him from amputating any more healthy limbs. Smith’s patients came to him with a set of symptoms that do not correspond to any particular disorder in the DSM. Smith’s patients were not under the delusion that their limbs did not belong to them; they did not see their limbs as disfigured or disgusting. Instead, his patients claimed that, from a young age, they had not thought of the limb as part of their authentic selves. They were, the patients claimed, never meant to be born with the limb and were seeking surgery to allow their inner representation of their bodily identity to match their external body presentation. The only way to do this was to amputate the healthy limb.

Patients who seek to radically alter their bodies via repeated surgeries or extreme dieting are ordinarily (barring other symptoms) diagnosed with Body Dysmorphic Disorder (BDD). BDD, however, requires that patients seek to modify their bodies because they find a specific part of their body disgusting, revolting, or flawed. Patients with BDD also tend to engage in obsessive behaviors related to the body part’s appearance (grooming, ‘mirror checking,’ and so forth) (APA 2013, 248). Smith’s patients, although they claimed to experience significant dysphoria because of their condition, did not seek surgery because they found their limbs revolting or disfigured. They identified themselves as having a different condition: Body Integrity Identity Disorder (BIID). Like psychopathy, BIID is not a disorder cataloged in the DSM, though the APA does recognize that it appears distinct from BDD: “Body Integrity Identity disorder (apotemnophilia)...involves a desire to have a limb amputated to correct an experience of mismatch between a person's sense of body identity and his or her actual anatomy. However, the concern does not focus on the limb's appearance, as it would be in body dysmorphic disorder” (APA 2013, 246-247). Vilayanur Ramachandran and Paul McGeoch claim to have discovered several of the neural correlates of BIID, which appear distinct from those of BDD; specifically, they claim that the disorder arises in part from a dysfunction of the right parietal lobe (Ramachandran and McGeoch 2007, 252).

Apart from the conceptual question whether BDD and BIID are manifestations of the same underlying mental illness, individuals who claim to suffer from BIID raise significant ethical questions about the nature of mental illness, autonomy, and surgical treatments for dysphoria. Patients with BIID ask surgeons to recognize and grant their requests for surgical intervention to cure psychological suffering; the purpose of these amputations, they claim, is to correct what they see as a mismatch between their inner and outer selves. Although the case of BIID has not received widespread philosophical attention, several different approaches have been advanced with regard to BIID patients' requests for amputation. Some philosophers have raised doubts about the ability of BIID patients to act on genuinely autonomous decisions (Mueller 2009, 35). One worry about challenging the autonomy of otherwise rational agents is that, in other domains, we appear to allow individuals significant freedom to modify their bodies for many reasons (aesthetic, political, self-expression, and so forth) without thereby questioning their status as autonomous agents (Bridy 2004). The right to bodily autonomy is typically construed as one of the guiding values in biomedical decision-making. Furthermore, BIID sufferers who have their requests for amputation denied often resort to self-harm; many will harm their limbs to the point where amputation becomes medically necessary. Some have argued that it is morally permissible to grant BIID requests for amputation on the basis of harm-prevention (Bayne and Levy 2005, 78). Others have expressed concern over the use of surgical treatments for mental illnesses (if it is granted that BIID is a mental illness), given that the surgery persons with BIID request involves the permanent removal of a capacity typically thought to be important (Johnston and Elliott 2002, 430).

Given that BIID patients appear to have a locatable dysfunction in the right parietal lobe (an area where internal body representations are thought to be located), some philosophers have argued that surgical treatments are unjustified if a non-surgical solution can be found. That is, if BIID suffering is caused by a mismatch between a patient’s internal representation of herself and her outer presentation, and if it is possible to change the inner representation and thereby avoid surgery, then we ought to do so (Johnston and Elliott 2002, 432). This approach, however, forces us to confront philosophical responses to other conditions that involve mismatches between a person’s inner representation of their body and their external bodily presentation. In particular, patients with BIID argue that their condition is analogous to the suffering faced by those with gender dysphoria, who often seek sex reassignment surgery to alleviate their perceived embodiment mismatch (Bayne and Levy 2005, 80). Individuals who suffer as a result of their assigned sex/gender and who exhibit a strong desire to alter their sex and gender characteristics can be diagnosed with Gender Dysphoria (APA 2013, 451-459). Unlike other patients desiring surgical body modification (for self-expression, to meet unrealistic gender ideals, and so forth), individuals with BIID or Gender Dysphoria report that their desires for surgical alteration of their body presentation originate at a young age. Both groups seek to have their requests for surgical alteration respected by those around them as a recognition of their autonomy and of the value that gender (or bodily integrity) plays in the formation of an authentic self (Lombardi 2001, 870).

The discussion of BIID, its status as a mental disorder, and the ethics of granting a person’s request for amputation are all relatively new and hotly debated topics within the Philosophy of Mental Illness and Bioethics generally. This debate is, however, connected to larger, better-established questions concerning patient autonomy and what it means for an agent to make autonomous choices. At the moment there is no clear consensus on the status of BIID as a disorder, nor a received view on how to treat BIID requests for amputation.

5. References and Further Reading

  • American Psychiatric Association. (1952). Diagnostic and Statistical Manual of Mental Disorders. Washington, DC.
  • American Psychiatric Association. (1973). “Homosexuality and Sexual Orientation Disturbance: Proposed Change in DSM-II, 6th Printing, page 44 POSITION STATEMENT (RETIRED).” Arlington VA.
  • American Psychiatric Association. (2013). Diagnostic and Statistical Manual of Mental Disorders, 5th ed. Washington, DC.
  • Anton, Audrey L. (2013) “The Virtue of Psychopathy: How to Appreciate the Neurodiversity of Psychopaths and Sociopaths Without Becoming A Victim.” Ethics and Neurodiversity Cambridge Scholars Publishing: Newcastle upon Tyne: 111-130.
  • Babiak P., Neumann C., and Hare R.D. (2010). “Corporate Psychopathy: Talking the Walk.” Behavioral Sciences and the Law 28(2): 174-193.
  • Barnabaum, Deborah. (2013). “The Neurodiverse and the Neurotypical: Still Talking Across an Ethical Divide.” Ethics and Neurodiversity Cambridge Scholars Publishing: Newcastle upon Tyne: 131-145.
  • Bayer, Ronald and Robert L. Spitzer. (1983). “Edited correspondence on the status of homosexuality in DSM-III.” Journal of the History of the Behavioral Sciences 18(1): 32–52.
  • Bayne, Tim and Neil Levy. (2005). “Amputees By Choice: Body Integrity Identity Disorder and the Ethics of Amputation.” Journal of Applied Philosophy 22(1): 75-86.
  • Bentall, Richard. (1990). “The Syndromes and Symptoms of Psychosis: Or why you can’t play ‘twenty questions’ with the concept of schizophrenia and hope to win.” Reconstructing Schizophrenia Routledge: London.
  • Bentall, Richard. (1992). “A Proposal to Classify Happiness as A Mental Disorder.” Journal of Medical Ethics 18(2): 94-98.
  • Bentall, Richard. (2004). “Sideshow? Schizophrenia construed by Szasz and the neo-Kraepelinians.” In J.A. Schaler (Ed.) Szasz under Fire: The Psychiatric Abolitionist Faces His Critics. Peru, Illinois: Open Court.
  • Boorse, C. (1975). “On the distinction between disease and illness.” Philosophy and Public Affairs, 5: 49-68.
  • Boorse, C. (1997). “A rebuttal on health.” In J.M. Humber and R.F. Almeder (eds.), What Is Disease? Totowa N.J.: Humana Press: 1-134.
  • Boyd, Richard. (1991). “Realism, antifoundationalism, and the enthusiasm for natural kinds.” Philosophical Studies 61: 127-148.
  • Broome, Matthew and Lisa Bortolotti. (2009). “Mental Illness as Mental: In Defense of Psychology Realism.” Humana Mente 11: 25-44.
  • Bridy, A. (2004). “Confounding extremities: Surgery at the medico- ethical limits of self-modification.” Journal of Law, Medicine and Ethics 32(1): 148–158.
  • Brink, David and Dana Nelkin. (2013). “Fairness and the Architecture of Responsibility.” In David Shoemaker (Ed). Oxford Studies in Agency and Responsibility Volume 1. Oxford University Press.
  • Brülde, B., and F. Radovic. (2006). “What is mental about mental disorder?” Philosophy, Psychiatry, & Psychology 13(2): 99–116.
  • Charland, Louis. (2004a). “Character Moral Treatment and Personality Disorders.” Philosophy of Psychiatry. Oxford University Press: 64-77.
  • Charland, Louis. (2004b). “A Madness for Identity: Psychiatric Labels, Consumer Autonomy, and the Perils of the Internet.” Philosophy, Psychiatry, and Psychology 11(4): 335-349.
  • Chomsky, Noam. (1988). Language and Problems of Knowledge: The Managua Lectures. Cambridge, Mass. / London, England: MIT Press (Current Studies in Linguistics Series 16).
  • Church, Jennifer. (2004). “Social Constructionist Models” The Philosophy of Psychiatry Oxford University Press: 393-406.
  • Churchland, P. M., (1981). “Eliminative Materialism and the Propositional Attitudes,” Journal of Philosophy 78: 67–90.
  • Cresswell, Mark. (2008). “Szasz and His Interlocutors: Reconsidering Thomas Szasz’s “Myth of Mental Illness” Thesis” Journal for the Theory of Social Behavior 38(1): 23-44.
  • Cullen, Dave. (2004). “The Depressive and the Psychopath: At last we know why the Columbine killers did it.” Slate. Web. April 2004.
  • Daniels, Norman. (2007). Just Health: Meeting Health Needs Fairly. Cambridge University Press: NY.
  • Dolan, M.C., Fullam, R.S. (2010). “Moral/conventional Transgression Distinction and Psychopathy in Conduct Disordered Adolescent Offenders.” Personality and Individual Differences Vol. 49: 995–1000.
  • Edwards, Craig. (2009). “Ethical Decisions in the Classification of Mental Conditions as Mental Illness.” Philosophy, Psychiatry, and Psychology 16(1): 73-90.
  • Elliott, Carl. (2004). “Mental Illness and Its Limits” The Philosophy of Psychiatry Oxford University Press: 426-436.
  • Fenton, Andrew and Tim Krahn. (2007). “Autism, Neurodiversity and Equality Beyond the 'Normal'” Journal of Ethics in Mental Health 2(2): 1-6.
  • Fischer J.M., Ravizza M. (1998). Responsibility and Control: A Theory of Moral Responsibility. New York: Cambridge University Press.
  • Fischer J.M., Tognazzini N.A. (2011). “The Physiognomy of Responsibility.” Philosophy and Phenomenological Research 82(2): 381-417.
  • Freud, Sigmund. (1905/1997). Dora: An Analysis of a Case of Hysteria. Simon and Schuster: NY.
  • Freud, Sigmund. (1915-1917 / 1977). Introductory Lectures on Psychoanalysis. W.W. Norton and Company: NY.
  • Friedan, Betty. (1963). The Feminine Mystique. W.W. Norton and Company: NY.
  • Foucault, Michel. (1961/1988). Madness and Civilization: A History of Insanity in the Age of Reason. Random House: NY.
  • Fulford, K.W.M. (2001). “What is (mental) disease?: An open letter to Christopher Boorse.” Journal of Medical Ethics 27(2): 80–85.
  • Fulford, K.W.M. (2004). “Values Based Medicine: Thomas Szasz’s Legacy to Twenty-First Century Psychiatry.” In J.A. Schaler (Ed.) Szasz under Fire: The Psychiatric Abolitionist Faces His Critics. Peru, Illinois: Open Court.
  • Ghaemi, Nassir. (2003). The Concepts of Psychiatry Johns Hopkins University Press.
  • Ghaemi, Nassir. (2011). A First Rate Madness. Penguin Press: NY.
  • Glannon, Walter. (2007). “Neurodiversity” Journal of Ethics in Mental Health 2(2): 1-5.
  • Goldman, Alan. (2002). “Plain Sex.” In Alan Soble (ed.), The Philosophy of Sex: Contemporary Readings, 4th ed. Lanham, MD: Rowman and Littlefield: 39-55.
  • Graham, George. (2010). The Disordered Mind: An Introduction to the Philosophy of Mind and Mental Illness. Routledge: NY.
  • Graham, George. (2013a). The Disordered Mind: An Introduction to the Philosophy of Mind and Mental Illness. Routledge: NY.
  • Graham, George. (2013b). “Ordering Disorder: Mental Disorder, Brain Disorder, and Therapeutic Intervention” in K. Fulford (ed) Oxford Handbook of Philosophy and Psychiatry. Oxford UP.
  • Graham, George. (2014). “Being a Mental Disorder” in Harold Kincaid & Jacqueline A. Sullivan (eds.) Classifying Psychopathology: Mental Kinds and Natural Kinds: 123-143.
  • Greenspan, Patricia. (2003). “Responsible Psychopaths” Philosophical Psychology 16(3): 417-429.
  • Grob, G.N. (1991). “Origins of DSM-I: a study in appearance and reality.” American Journal of Psychiatry 148(4): 421-431.
  • Gurley, Jessica. (2009). “A History of Changes to the Criminal Personality in the DSM” History of Psychology 12(4): 285-304.
  • Hacking, Ian. (1995). Rewriting the Soul: Multiple Personality and the Science of Memory. Princeton, NJ: Princeton University.
  • Hacking, Ian. (1999). The Social Construction of What? Cambridge: Harvard University Press.
  • Hansen, Jennifer. (2004). “Affectivity: Depression and Mania” Philosophy of Psychiatry Oxford University Press: 36-53.
  • Hare, R.D., Clark D., Grann M., Thornton D. (2000). “Psychopathy and the Predictive Validity of the PCL-R: An International Perspective.” Behavioral Sciences and the Law 18(5): 623-45.
  • Haslam, Nick. (2014). “Natural Kinds in Psychiatry: Conceptually Implausible, Empirically Questionable, and Stigmatizing” in Harold Kincaid & Jacqueline A. Sullivan (eds.) Classifying Psychopathology: Mental Kinds and Natural Kinds: 11-28.
  • Herrera, C.D. (2013).“What’s the Difference?” Ethics and Neurodiversity Cambridge Scholars Publishing: Newcastle upon Tyne: 1-17.
  • Horwitz, Allan V. (2001). Creating Mental Illness. University of Chicago Press.
  • Johnston, Josephine and Carl Elliott. (2002). “Healthy limb amputation: ethical and legal aspects” Clinical Medicine 2(5): 431-435.
  • Kandel, Eric. (1998). “A new intellectual framework for psychiatry.” American Journal of Psychiatry 155: 457-469.
  • Kendell, R.E. (2004). “The Myth of Mental Illness.” In J.A. Schaler (Ed.) Szasz under Fire: The Psychiatric Abolitionist Faces His Critics. Peru, Illinois: Open Court.
  • Kraepelin, Emil. (1896a) Psychiatrie (8th edn). Reprinted (1971) in part as Dementia Praecox and Paraphrenia (trans. R. M. Barclay). Huntington, NY: Robert E. Krieger.
  • Kraepelin, Emil. (1896b) Psychiatrie (8th edn). Reprinted (1976) in parts as Manic-Depressive Insanity and Paranoia (trans. R. M. Barclay). Huntington, NY: Robert E. Krieger.
  • Levy, Neil. (2007). “The Responsibility of the Psychopath Revisited” Philosophy, Psychiatry, and Psychology: 129-138.
  • Lilienfeld, S.O. and L. Marino. (1995). “Mental disorder as a Roschian concept: a critique of Wakefield's "harmful dysfunction" analysis.” Journal of Abnormal Psychology 104(3): 411-20.
  • Lombardi, E. (2001). “Enhancing Transgender Care.” American Journal of Public Health 91(6): 869-872.
  • Lysiak, M. and Bill Hutchinson. (2013). “Emails show history of illness in Adam Lanza's family, mother had worries about gruesome images.” New York Daily News. Web. April 2013.
  • Maddux, James. (2001). “Stopping the Madness.” The Handbook of Positive Psychology: 13-25.
  • Mueller, S. (2009). “Body integrity identity disorder (BIID) – Is the amputation of healthy limbs ethically justified?” American Journal of Bioethics 9: 36–43.
  • Murphy, Dominic. (2014). “Natural Kinds in Folk Psychology and in Psychiatry.” in Harold Kincaid & Jacqueline A. Sullivan (eds.) Classifying Psychopathology: Mental Kinds and Natural Kinds: 105-122.
  • Nadesan, M.H. (2005). Constructing Autism. Milton Park, Oxfordshire: Routledge.
  • Nichols, Shaun and Manuel Vargas. (2007). “How to Be Fair to Psychopaths.” Philosophy, Psychiatry, and Psychology 14(2): 153-155.
  • Phillips, Katharine, et al. (2010). “Body Dysmorphic Disorder: Some Key Issues for DSM-V.” Depression and Anxiety 27: 573-591.
  • Pickard, Hanna. (2009). “Mental Illness is Indeed A Myth” Psychiatry as Cognitive Neuroscience: 83-101.
  • Pickard, Hanna. (2011). “What is Personality Disorder?” Philosophy, Psychiatry, and Psychology 18(3): 181-184.
  • Pickering, Neil. (2003). “The Likeness Argument and the Reality of Mental Illness” Philosophy, Psychiatry, and Psychology 243-254.
  • Ramachandran, V., and McGeoch, P. (2007). “Can vestibular caloric stimulation be used to treat apotemnophilia?” Medical Hypotheses 8: 250–252.
  • Ramirez, Erick. (2013). “Psychopathy, Moral Reasons, and Responsibility.” Ethics and Neurodiversity Cambridge Scholars Publishing: Newcastle upon Tyne: 217-237.
  • Ramsey, W., Stich, S. and Garon, J., (1990). “Connectionism, Eliminativism and the Future of Folk Psychology,” Philosophical Perspectives 4: 499–533.
  • Robertson, Erik D. and Lennart Mucke. (2006). “100 Years and Counting: Prospects for Defeating Alzheimer's Disease.” Science: Vol. 314 no. 5800 pp. 781-784.
  • Rosenhan, David. (1973). “On Being Sane in Insane Places” Science: 250-258.
  • Sam, David and Virginia Moreira. (2012). “Revisiting the Mutual Embeddedness of Culture and Mental Illness” Online Readings in Psychology and Culture.
  • Soble, Alan. (2004) “Desire Paraphilia and Distress in DSM IV.” Philosophy of Psychiatry Oxford University Press: NY: 54-63.
  • Strawson, P.F. (1962). “Freedom and Resentment.” Proceedings of the British Academy 48: 1-25.
  • Szasz, Thomas. (1961/1984). The Myth of Mental Illness Harper Perennial.
  • Szasz, Thomas. (1979). Schizophrenia: The Sacred Symbol of Psychiatry. Oxford: Oxford University Press.
  • Talbert, Matthew. (2012) “Moral Competence, Moral Blame, and Protest.” Journal of Ethics 16(1): 89-109.
  • Vargas, Manuel and Shaun Nichols. (2007). “Psychopaths and Moral Knowledge” Philosophy, Psychiatry, and Psychology: 157-162.
  • Von Eckardt, Barbara and Jeffrey Poland. (2005). “Mechanism and Explanation in Cognitive Neuroscience” Proceedings of the Philosophy of Science Association: 972-984.
  • Wakefield, Jerome. (1992). “The Concept of Mental Disorder: On the Boundary Between Biological Facts and Social Values” American Psychologist: 373-388.
  • Wakefield, Jerome. (1999). “Evolutionary versus prototype analyses of the concept of disorder.” Journal of Abnormal Psychology 108: 374-399.
  • Wakefield, Jerome. (2006). “What Makes A Mental Disorder Mental?” Philosophy, Psychiatry, & Psychology 13(2): 123-131.
  • Wallace, R.J. (1994). Responsibility and the Moral Sentiments. Cambridge, Mass: Harvard University Press.
  • Watson, Gary. (1996). “Two Faces of Responsibility.” Philosophical Topics 24(2): 227-248.
  • Woolfolk, Robert. (1999). “Malfunction and Mental Illness” The Monist 82(4): 658-670.
  • Zachar, Peter. (2014). A Metaphysics of Psychopathology, MIT Press: Cambridge Massachusetts.


Author Information

Erick Ramirez
Email: ejramirez@scu.edu
Santa Clara University
U. S. A.

Intentionality

If I think about a piano, something in my thought picks out a piano. If I talk about cigars, something in my speech refers to cigars. This feature of thoughts and words, whereby they pick out, refer to, or are about things, is intentionality. In a word, intentionality is aboutness.

Many mental states exhibit intentionality. If I believe that the weather is rainy today, this belief of mine is about today’s weather—that it is rainy. Desires are similarly directed at, or about things: if I desire a mosquito to buzz off, my desire is directed at the mosquito, and the possibility that it depart. Imaginings seem to be directed at particular imaginary scenarios, while regrets are directed at events or objects in the past, as are memories. And perceptions seem to be, similarly, directed at or about the objects we perceptually encounter in our environment. We call mental states that are directed at things in this way ‘intentional states’.

The major role played by intentionality in affairs of the mind led Brentano (1874) to regard intentionality as “the mark of the mental”: a necessary and sufficient condition for mentality. But some non-mental phenomena seem to display intentionality too – pictures, signposts, and words, for example. Nevertheless, the intentionality of these phenomena seems to be derived from the intentionality of the minds that produce them. A sound is only a word if it has been conferred with meaning by the intentions of a speaker or perhaps a community of speakers; while a painting, however abstract, seems only to have a subject matter insofar as its painter intends it to. Whether or not all mental phenomena are intentional, then, it certainly seems to be the case that all intentional phenomena are mental in origin.

The root of the word ‘intentionality’ reflects the notion that it expresses, deriving from the Latin intentio (from intendere, ‘to aim at’ or ‘stretch toward’). Intentionality has been studied since antiquity and has generated numerous debates, which can be broadly categorized into three areas discussed in the following sections:

Section 1 concerns the intentional relation: the relation between intentional states and their objects. Here we aim to answer the question “What determines why any given intentional state is about one thing and not another?” For example, what makes a thought about a sheep about that sheep? Does the thought look like the sheep? Or does it perhaps have a causal origin in an encounter with the sheep?

Section 2 explores the nature of the objects of intentional states. Are these objects independent of us, or somehow constituted by the nature of our minds? Do they have to exist, or can we have thoughts about non-existent objects like The Grinch?

Section 3 explores the nature of intentional states themselves. For example, are intentional states essentially rational states, such that only rational creatures can have them? Or might intentional states be necessarily conscious states? And is it possible to give a naturalized theory of intentionality that appeals only to facts describable in the natural sciences?

This article explores these questions, and the dominant theories that have been designed to answer them.

Table of Contents

  1. The Intentional Relation
    1. Formal Theories of Intentionality
    2. Problems for Forms, and the Causal Alternative
  2. Intentional Objects
    1. Intentional Inexistence
    2. Thinking About Things that Do Not Exist
    3. Direct versus Indirect Intentionality
  3. Intentional States
    1. Intentionality and Reason
    2. Intentionality and Intensionality
    3. Intentionality and Consciousness
    4. Naturalizing Intentionality
  4. References and Further Reading

1. The Intentional Relation

If I am thinking about horses, what is it about my thought that makes it about horses and not, say, sheep? That is, in what relation do intentional states stand to their objects? This is the question “What is the intentional relation?” There have been many answers proposed to this question, and a broad division can be discerned in the history of philosophy between what can be called ‘formal’ and ‘causal’ theories.

a. Formal Theories of Intentionality

One answer to the question is that mental states refer to the things they do because of the intrinsic features of those mental states. The earliest version of this theory is based on Plato’s theory of forms. Plato held that apart from the matter (hyle) they are composed of, all things have another aspect, which he called their ‘form’ (eidos). All horses, for example, although individually made of different material, have something in common – and this is their form. The exact meaning of Plato’s ‘form’ is a controversial issue. On one reading, two things have the same form, or are ‘conformal’, if they share the same shape; on a broader interpretation, two things are conformal if there is a one-to-one mapping between their essential features – as there is between a building and an architect’s blueprint for the building. Plato held that when we think about an object, we have the form of the object in our mind, so that our thought literally shares the form of the object. Aristotle further developed this theory, arguing that in perception the form of an object is transmitted from the object to the mind of the perceiver. In the Middle Ages Thomas Aquinas defended and elaborated Aristotle’s theory, and in the Early Modern period the theory found an heir in the work of the ‘British Empiricists’ Locke and Hume, who argued that ‘ideas’, which they considered to be the fundamental components of thought, refer to their objects because they are images of those objects, impressed on the mind through the action of the perceptual faculties.

Although images or shapes may play a role in thought, it is generally accepted that they cannot provide a complete account of intentionality. The relation between an image and its object is a relation of resemblance. But this presents a difficulty that was first raised against the formal theory by Ockham in the Middle Ages (King, 2007). The problem is that the relation of resemblance is ambiguous in a way that the intentional relation cannot be. An image of a man walking up a hill also resembles a man walking backwards down a hill (Wittgenstein, 1953), whereas a thought about a man walking up a hill is not also a thought about a man walking backwards down a hill. Similarly, while an image of Mahatma Gandhi resembles Mahatma Gandhi, it also resembles everyone who resembles Mahatma Gandhi (Goodman, 1976). Thoughts about Mahatma Gandhi on the other hand, are not thoughts about anyone who looks like Mahatma Gandhi.

An alternative formal model that seems to avoid this problem appeals to descriptions (Frege 1892, Russell 1912). This view holds that if I am thinking about something, then I must have in mind a description that uniquely identifies that thing. Descriptions seem to avoid the problem of ambiguity faced by images. There may be many people who resemble Mahatma Gandhi, but probably only one person who satisfies the description ‘the Indian Nationalist leader assassinated on the 30th of January 1948’. Because the ‘descriptivist’ account takes concepts to refer to their objects by describing them, so that the features of a concept somehow correspond to the features of its object, it is arguably also a formal theory of intentionality.

In addition to answering the question why an intentional state refers to one object and not another, the formal approach is also helpful in explaining how thinkers understand what it is they are thinking about. One thing that we seem to be able to do when we have mental states that are directed at particular objects is to reflect upon different aspects of those objects, reason about them, describe them, and even make reliable predictions about them. For example, if I understand what horses are, and what sheep are, I ought to be in a position to tell you about their differences, and perhaps make good predictions about their behavior. If intentional states are conformal with their objects, we have some explanation for how such understanding is possible, since the form of the object the intentional state is directed at should be available to me if I reflect upon my own thoughts.

And we have another reason still for expecting that thoughts have a formal component. Frege (1892) observed that we can have multiple thoughts about the same thing, without realizing that we are thinking of the same thing in each case. The Ancient Greeks believed that Hesperus and Phosphorus (two Greek names for Venus) were two different stars in the sky, one of which appeared in the morning, while the other appeared in the evening. As a result they believed that Hesperus rises in the evening while simultaneously believing that Phosphorus does not. Of course Hesperus and Phosphorus, as it turns out, are the same object – the planet Venus, which rises both in the morning and in the evening. And so the Ancient Greeks had two contradictory beliefs about Venus, without realizing that both beliefs were about the same thing. The upshot is that it is possible for us to have distinct concepts that pick out the same thing without our knowing.

Frege proposed as an explanation that our concepts must vary in more ways than in what they refer to. They also vary, he proposed, in what he called their ‘sense’, so that two concepts could refer to the same object while differing in sense. He described the sense as the ‘mode of presentation’ of the object that a concept picks out. It would appear that by ‘mode of presentation’ he meant something like a description of the object. So, while the reference of someone’s hesperus and phosphorus concepts might be the same, the sense of hesperus might be ‘the star that appears in the evening’, while the sense of phosphorus could be ‘the star that appears in the morning’. Since it is perfectly rational to suppose that the object that satisfies the description ‘the star that appears in the morning’ might not be the same as the object that satisfies the description ‘the star that appears in the evening’, we now have an explanation for how one could have two concepts that pick out the same thing without knowing.
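
Frege’s proposal can be put schematically. Writing sense( ) and ref( ) for a concept’s sense and reference (the notation is a convenient gloss, not Frege’s own), the case looks like this:

sense(hesperus) = ‘the star that appears in the evening’
sense(phosphorus) = ‘the star that appears in the morning’
ref(hesperus) = ref(phosphorus) = Venus

Because the senses differ, a thinker can rationally accept ‘Hesperus rises in the evening’ while denying ‘Phosphorus rises in the evening’, even though both claims concern the same object.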

Supposing that the intentional relation is one of conformality, then, allows us to explain (i) why a thought refers to what it does, (ii) how we can have introspective knowledge of the things we think about, and (iii) how two or more of our concepts could pick out the same thing without our knowing. But there are problems facing the formal approach, which have led many to look for alternatives.

b. Problems for Forms, and the Causal Alternative

The formal theory of intentionality faces two major objections.

The first objection, sometimes called ‘the problem of ignorance and error’, is that the descriptions we have at our disposal of the objects we think about might be insufficient to uniquely identify those objects. Putnam (1975) articulated this objection using a now famous thought-experiment. Suppose that you are thinking of water. If, for example, the descriptive theory is right, you must have at your disposal a description that uniquely distinguishes water from all other things. For most of us – chemists aside – such a description will amount to something like ‘the clear drinkable liquid in the rivers, lakes, and taps around here’. But suppose, suggests Putnam, that there is another planet far away from here, which looks to its inhabitants just like Earth looks to us. On that planet, let’s call it Twin-Earth, there is a clear drinkable liquid that the inhabitants of the planet refer to (coincidentally) as ‘water’, but that is in fact a different chemical substance; rather than H2O, it has a different chemical composition – let’s call it XYZ. If this were true, we should expect that the description most people here on Earth are in a position to give of what we call ‘water’ will be just the same as the description the inhabitants of the other planet give of what they call ‘water’. But, by hypothesis, when we think about water we are thinking of the substance on our planet, H2O, and when they think of what they call ‘water’, they are thinking of a different thing – XYZ. As a result, it would seem that descriptions are not sufficient to explain what we are thinking of, since a member of either of these groups will give the same description for what they call ‘water’, even though their thoughts pick out different substances. This is the ‘ignorance’ part of the problem – we often don’t have enough descriptive knowledge of the things we think about to uniquely identify those things. The ‘error’ part is that our beliefs about the things we think about often turn out to be false. For example, many people believe tomatoes are vegetables, not fruit; and as a result, the description they will give of ‘tomato’ will include the claim that tomatoes are vegetables. If these people are indeed thinking of tomatoes, so the argument goes, it cannot be as a result of their being in possession of a description that picks out tomatoes, since no tomato truly falls under the description ‘vegetable’.

The second difficulty for the formal accounts, specifically directed at the descriptive account, is that descriptions do not identify the essential nature of the things they pick out, whereas many words and concepts do (Searle 1958, Kripke 1980). The description someone might offer of Hesperus could be ‘the brightest celestial object in the evening sky’. But it is perfectly coherent to suppose that Hesperus could have existed without having been visible in the evening. It could have drifted into a different orbital pattern, or have been occluded by a belt of asteroids, and therefore never have been visible in the evening. This description does not, therefore, capture an essential feature of Hesperus. The term ‘Hesperus’ in our thoughts, on the other hand, does pick out an essential feature of Hesperus—being Hesperus. That this is an important difference can be seen when we realize that concepts and descriptions seem to behave differently in thoughts about counterfactual possibilities—or, alternative ways the world could have turned out. For example, the thought ‘Hesperus could have failed to have been the brightest celestial object in the evening sky’, is clearly true—this could have been the case had it drifted into a different orbital pattern. But the thought ‘Hesperus could have failed to have been Hesperus’, is not true: there is no way the world could have turned out such that Hesperus could have failed to have been itself. The name ‘Hesperus’ therefore identifies the essence of Hesperus—what it couldn’t fail to be; but the description does not. So now we have a further reason for thinking that concepts are not cognitively equivalent to descriptions—since they behave differently in thoughts about counterfactual possibility.
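
The contrast can be displayed with the possibility operator ◇ (‘possibly’). Letting h stand for the name ‘Hesperus’ and d for the description ‘the brightest celestial object in the evening sky’ (a schematic gloss rather than a quotation from Kripke), we have:

◇(h ≠ d)     true: Hesperus could have failed to fit the description
◇(h ≠ h)     false: Hesperus could not have failed to be itself

In Kripke’s terms, the name is a ‘rigid designator’: it picks out the same object in every counterfactual circumstance, while the description does not.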

As an alternative to descriptions, images, or forms of any sort, Putnam (1975) and Kripke (1980) propose a ‘causal’ model of intentionality. On this alternative model, our concepts do not have intrinsic formal features that determine what they refer to. Rather, a concept picks out the thing that originally caused it to occur in the mind of a thinker, or the thing it is causally related to in the mind-independent world. On this view, if I have a concept that picks out horses, this concept must have initially been caused to occur in me by a physical encounter with horses. If I have a concept that picks out water, the concept must have been caused to occur in me by a causal interaction with water. And if I have a concept that picks out Hesperus, this concept must have a causal origin in my apprehension of Hesperus, perhaps by seeing it in the sky.

We can see how the causal theory can be used to address the two major objections to the formal theory. Firstly, on the causal account, the ‘water’ thoughts of those on Earth can be distinguished from the ‘water’ thoughts of those on Twin-Earth: the substance Earthlings are causally interacting with when they have ‘water’ thoughts is H2O, while the substance that Twin-Earthlings are causally interacting with is XYZ—explaining why the thoughts of each thinker refer to different things, even though the descriptions they might offer of those things are identical. Similarly, I can causally interact with water, or tomatoes, even if I have false beliefs about these things, so the causal model allows that the descriptions I might offer of the things I think about can be false. The causal model therefore seems to handle the problem of ignorance and error. Secondly, if we reject that my hesperus concept is cognitively equivalent to a description, the worry that the description fails to identify the essence of the object simply doesn’t arise. The causal model therefore also seems to handle the problem concerning reference to essential properties (sometimes called the ‘modal problem’).

However, the causal model has trouble explaining some of the things the formal model was designed to explain (see the last paragraph of Section 1a above). Firstly, the causal model has trouble explaining (ii), how we can reflect on the objects of our thoughts and say something about them. If concepts have no formal component that somehow describes their objects, this becomes mysterious. The causal model also fails to explain (iii), how we can have multiple thoughts about the same thing without realizing it. While formal models can explain this by holding that different concepts can be cognitively equivalent to different descriptions of the same thing, the causal model cannot. Since the thoughts of an Ancient Greek about hesperus and the thoughts of an Ancient Greek about phosphorus have a causal origin in the same object, namely Venus, the causal relation that stands between these concepts and their object is identical in each case; as a result, on the causal model there ought to be no difference between the concepts.

The formal and causal models therefore each provide good explanations for one set of phenomena, but run into trouble in explaining another.

Perhaps the best account of the intentional relation will be one that draws on aspects of both theories – something that so-called ‘two-dimensional’ accounts of intentionality aim to do (Chalmers 1996, 2006, Lewis 1997, Jackson 1998). On this approach, although it is necessary to know what environment a thinker is causally connected to in order to know what her thoughts refer to, this need not rule out that her concepts also have a formal component. The trick is to find a formal component that does not run into the problems raised by the causal theorist. To deal with the problem of error, for example, it has been proposed that the formal component of a concept might be a description of the appearance of the object the concept refers to (Searle 1983). Although I can be wrong that the things my tomato concept picks out are vegetables, it would seem that I cannot be mistaken that they are apparently red shiny edible objects – since I cannot be wrong about how the world appears to me. Such content would therefore avoid the problem of error – these descriptions couldn’t turn out to be false. To deal with the problem of ignorance, where my descriptive knowledge fails to uniquely determine which thing I am thinking of, it has been proposed to write the causal origin of my experience into the formal component. So, my concept water might be cognitively equivalent not just to ‘the apparently clear drinkable liquid in the lakes and rivers’, which fails to distinguish the water on Earth from the water on Twin-Earth, but to ‘the stuff causing my current experiences of an apparently clear drinkable liquid in the lakes and rivers’ (Searle 1983). This description, it would seem, does indeed distinguish water from Twin-Earth water, since only water is the causal source of my experiences (because I am on Earth, not Twin-Earth). And to get descriptions to behave the same way as concepts in thoughts about counterfactual possibility, it has been proposed to include the specification ‘actual’ in the descriptive content of a concept (Davies and Humberstone 1980). Although it is true that ‘the brightest celestial object in the evening sky could have failed to have been Hesperus’, it seems not to be true that ‘the actual thing that is the brightest celestial object in the evening sky could have failed to have been Hesperus’. By including ‘actual’ in the description, we can therefore get the description to behave in the same way as the concept in counterfactual thoughts. In sum, the descriptive content of a concept like water would be something like ‘the actual stuff causing my experience of an apparently clear drinkable liquid in the lakes and rivers’. Such content, it is hoped, can account for the phenomena formal models explain without running into the difficulties faced by earlier formal accounts. Whether these modifications really succeed in handling the problems raised by the causal theorist is, however, a topic of ongoing controversy (see Soames 2001, 2005 and Recanati 2013 for recent defenses of the causal approach; see Chalmers 2006 for a defense of the two-dimensional approach, and an advanced overview of the debate).

2. Intentional Objects

Having seen some of the layout of the debate about what determines the object of any intentional state, we can now turn to issues that arise when we consider the objects themselves. Do they all have something in common that makes them appropriate as objects of intentional states? Might there be non-existent intentional objects? Do our thoughts connect directly with these objects or only indirectly, via our senses?

a. Intentional Inexistence

Franz Brentano has been mentioned already in this article, in part because his work set the tone for much of the debate over intentionality in the 20th century. One of his claims was that the objects of intentional states have a special type of existence, which he called ‘intentional inexistence’. Whether he meant by that a special sort of existence ‘in’ the intentional state, or rather that intentional objects do not exist, is debated. Supposing that intentionality is always directed at objects that do not exist, however, is particularly problematic, and we’ll look at the difficulties it raises in the next section. So first I’ll explore the possibility that Brentano supposed that intentional objects have a special sort of existence as objects of intentional states.

This idea had a particularly strong influence on the work of Edmund Husserl, who founded a branch of philosophy of mind known as phenomenology, which he conceived of as the study of experience. Husserl emphasizes that the objects of thought have a particular character insofar as they are objects of thought. First, they have to be related to other concepts and ideas in the mind of the thinker in a coherent way, a feature he refers to as their ‘noematic’ character. If our ideas of the objects we encounter in experience conflict too severely with the constraints imposed by our understanding of how the world works, those ideas will disintegrate (something he calls ‘noematic explosion’). Visual illusions present a good example of this. If we are presented with an object that appears to be a cube sitting on a flat surface, we will approach the object with certain expectations: for example, that if we turn our heads to one side we will see the side of the cube now out of view, that if we grab hold of it our grasp will be resisted, and so on. If the object turns out to be an image painted in such a way that it only appears as a cube from a certain angle, then when we discover this – by trying to pick it up, for example – the idea we are working with of the object will disintegrate. It is in this sense, at least, that Husserl took the objects of thought to have a special sort of existence as objects of thought (Føllesdal 1992, Mooney 2010).

Husserl (1900) proposed that we can study the nature of the constraints that the character of our mind places on the possible objects of thought through a method he calls ‘phenomenological reduction’, which involves uncovering the conditions of our awareness of objects through reflection on the nature of experience. The approach inherits a great deal from Kant’s transcendental idealism, since in both cases we are required to recognize that the nature of our minds may impose a very specific character on objects as we encounter them in experience – a character that we should not be tempted to assume is imposed on our experience by facts about the external world. The idea that the nature of our minds imposes constraints on the way we experience the world is in fact a claim that is increasingly widely accepted, and phenomenology has become an area of particular interest for the emerging field of cognitive science (see for example Varela, Thompson and Rosch 1991).

b. Thinking About Things that Do Not Exist

The second possible interpretation of Brentano’s claim – that intentional objects do not exist – is particularly problematic. Whether or not all objects of thought are non-existent, it certainly seems that many are, including those that are obviously fictitious (The Grinch, Sherlock Holmes) or likely non-existent even if many people believe in them (Faeries, Hell). But deep puzzles arise when we consider what it means to say something about a non-existent object. Can we, for example, coherently state that Santa Claus has flying reindeer? If he does not exist, how can it be true that he has flying reindeer? Can we even coherently state that Santa Claus does not exist? If he does exist, our statement is false. But if he does not exist, then it seems that our claim is not about anything – and hence apparently meaningless. Another way of putting the puzzle involves definite descriptions. It seems reasonable to say the following:

(1)     The fairy king does not exist

But upon further consideration (1) is quite puzzling, because the appearance of the definite article ‘the’ in that statement seems to presuppose that there is such a thing as the fairy king to which we refer.

Russell proposed a famous solution to this puzzle. It involves first analyzing definite descriptions to show how we can use them to express claims about things that do not exist, and second showing that most terms we use to make negative existential claims are actually definite descriptions in disguise. The first move is accomplished by Russell’s analysis of the logical structure of definite descriptions. He takes definite descriptions to have the logical form ‘a unique thing that has the properties F and G’. So, the definite description ‘the fairy king’ in (1) on Russell’s reading is logically equivalent to the description ‘a unique thing that is both a king and a fairy’. Notably, this eliminates the term ‘the’ from the description, and with it the presupposition that there is a fairy king. And rather than being meaningless, the claim that such a thing does not exist is true, if no unique thing exists that is both a king and a fairy:

(2)     There is no unique thing that is a king and a fairy

And, of course, false if there is a unique thing that is a king and a fairy. The second step of Russell’s solution is to hold that most referring terms in ordinary language are actually disguised definite descriptions. The term ‘Santa Claus’ on this view is actually a sort of shorthand for a description, perhaps ‘the man with the flying reindeer’. And this description is in turn to be analyzed as Russell proposes, so that the claim ‘Santa Claus does not exist’ in fact amounts to the denial that a unique individual that has the properties of being a man and having flying reindeer exists. And that seems to be perfectly coherent.
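
Russell’s analysis can be written out in first-order logic. Letting Kx abbreviate ‘x is a king’ and Fx ‘x is a fairy’ (the symbolization is the now-standard one rather than Russell’s own notation), (2) becomes:

¬∃x(Kx ∧ Fx ∧ ∀y((Ky ∧ Fy) → y = x))

The definite article has disappeared into quantifiers and identity, so the sentence can be true even though there is no fairy king for the phrase ‘the fairy king’ to denote.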

Are there any terms, in language or thought, on this account, that are not descriptions? Russell’s view is that the simplest terms in thought, out of which definite descriptions are composed, are not descriptions but singular terms, whose meaning is simply the object they refer to. These are demonstrative terms like ‘that’ and ‘this’, and our concepts of sensible properties like colors, sounds, and smells. The meanings of these terms are fixed by what Russell called ‘acquaintance’ – they are conferred with meaning as a result of a direct interaction between the thinker and the thing referred to, for example when we point at a color and simply think to ourselves ‘that’. These terms are only meaningful if in fact there are objects in the world to which they refer. Notice that on this view the second interpretation of Brentano’s claim – that in general the objects of thought do not exist – becomes impossible to maintain. Since the descriptions that can pick out non-existent objects are composed of terms that are only meaningful if they refer to existing things, the objects of at least the singular terms must exist for the view to make any sense.

c. Direct versus Indirect Intentionality

Even supposing that many objects of thought do exist, a further question arises as to whether the objects that we encounter in experience are products of our minds, or mind-independent objects. The view that the objects of experience are mind-dependent can be motivated by two complementary considerations. First, it seems reasonable to suppose that two different persons’ experiences in the same environment can be different. A color-blind person and a person with perfect color vision might have visually very different experiences in the same environment. Conversely, it seems that one person’s experiences in two very different environments could be the same. When I look at an oasis in the desert, I have a visual experience that might seem to be identical to the experience I have when faced with a mirage, even though these two environments are very different.

These considerations have led many to argue that our experiences – even those of ordinary objects – are mediated by what have been called ‘sense-data’. According to the sense-data theorist, what we immediately experience are not mind-independent objects, but sense-data that are produced at least partly by our minds. This allows us to explain the two puzzles considered above. If what we encounter in experience are sense-data and not mind-independent objects, then two people could have very different experiences in the same mind-independent environment, and correlatively, one person could have two indistinguishable experiences in two very different mind-independent environments. Note that these sense-data may correspond very closely to the way things stand in the mind-independent world around us, so the view need not imply that our interactions with the world should be dysfunctional.

This ‘indirect’ theory of perception, however, raises worries about our knowledge of the world. When we say of the ketchup before us that it is red, are we saying this about the ketchup, or about the sense-data that we experience as a result of looking at the ketchup? If we really only experience the sense-data, this would suggest that most of the beliefs we have about the world around us are false. We believe our intentional states are directed at mind-independent objects, but the indirect theory suggests that they are not. We believe we’ve seen red ketchup, but this theory suggests that in fact we’ve only seen sense-data of red ketchup. And if we only have experience of sense-data produced by our minds, this seems to imply that we have never really had any direct experience with the world. It suggests that we’ve never seen waterfalls, smelled flowers, or heard the voices of our friends, but have only experienced sense-data of these things.

An early reply to these concerns involves jettisoning the indirect theory of perception, and adopting the view that there are no sense-data or any other kind of representations mediating our experiences of the objects around us – a view sometimes called ‘naive realism’, and associated with Moore (1903). But on this approach, explanations of hallucinations, or of variations between different individuals’ experiences of the same objects, are strained. An interesting middle ground is known as ‘disjunctivism’ (Hinton 1967, Snowdon 1981, McDowell 1994, Martin 2002). The disjunctivist holds that the argument for the indirect theory of perception based on hallucinations is fallacious. Although the experiences of the oasis and the mirage might well be indistinguishable for the subject of the experience, this need not imply that the experiences are really the same. Rather, since one experience is the product of an encounter with an oasis, and the other is not, there is a difference between the experiences – it is just one that the subject is unable to identify. As a result, the disjunctivist holds that when we have veridical experiences, we have direct encounters with objects in the world, and when we have hallucinations, what we experience are sense-data produced by our minds. The disjunctivist view, then, at least allows us to see that we might not be forced into the indirect theory of perception by the existence of hallucinations.

3. Intentional States

So far we have looked at the question what determines the object of any given intentional state, and the question what is the nature of the objects of intentional states. What we have not examined is whether there are broad conditions for a state to count as intentional in the first place. Are only rational creatures capable of intentional states? Are intentional states essentially conscious states? Can we provide an account of intentional states in natural terms?

a. Intentionality and Reason

The centrality of reason to the intentional is an important strand in Kant’s famous Critique of Pure Reason (1787), and has informed an influential line of thinking taken up in the work of Sellars (1956), Strawson (1959) and Brandom (1996). Kant argues that in the apprehension of any object, an individual must have a range of concepts at her disposal that she can use to rationally assess the nature of the object apprehended. In order to apprehend a material object, for example, a thinker must understand what causation is. If she does not understand what causation is, she will not understand that if the material object were to be pushed, it would move. Or if it were picked up and thrown against a wall, it would not go straight through the wall or disappear, but would be caused by the solidity of the wall to bounce backward.  Without having the capacity to understand any of these issues, Kant argued, it would not be true to say that an individual apprehends the material object.

The appeal to the necessity of reason for concept-possession often goes hand in hand with the claim that our intentional states are all interdependent. If I cannot have the concept material object without the concept cause, then the two concepts depend on one another – and this may be the case for all our concepts, leading to a view known as ‘concept holism’. This raises a puzzle, however, that many think undermines the view. The concern is that if our concepts are interdependent in this way, then if any of my concepts change, all the others change with them. If, for example, I can only grasp the concept horse if I have the concept animal, then if my animal concept changes in some way, my horse concept will change along with it. If we couple this with the observation that our beliefs about the world are almost constantly being updated as our day-to-day experience progresses, then the worry arises that we could literally never have the same thought twice. Any time my beliefs about the world change, they will change at least one of my concepts; and if all of my concepts are interdependent, then whenever any of my beliefs change, all my concepts will change. As a result, although it might seem to me that I had thoughts about horses both yesterday and today, this would not be true, since the concept that occurred in my thoughts yesterday would not be the same concept as occurs in my thoughts today. Some who find this result intolerable adopt the view known as ‘concept atomism’, which holds that our concepts do not stand in essential relations to one another, but only to the external objects they refer to (Fodor and Lepore 1992). Atomism, however, seems committed to the claim that I could possess the concept horse without knowing what an animal is, and to the holist that seems as intolerable as concept holism seems to the atomist.

b. Intentionality and Intensionality

Another feature of intentional states that is sometimes thought to be essential is what is called ‘intensionality’ (with an ‘s’). This is the phenomenon whereby the objects of thought are presented to a thinker from a certain point of view – what Frege called a ‘mode of presentation’. We already encountered one of the puzzles that motivate this idea above, in discussing Frege’s puzzle, where the proposed answer to the question of how two concepts can be co-referential without a thinker knowing is that a thinker’s concepts pick out an object under a particular mode of presentation.

The potentially essential connection between intentionality and intensionality can be seen when we try to describe someone’s intentional states without bearing in mind their point of view. Consider the beliefs that Lois Lane has about Superman. Lois Lane believes she loves Superman, but does not believe she loves her colleague Clark Kent, not knowing that Superman is Clark Kent. (1) seems like a true description of Lois Lane’s belief about Superman:

(1)     Lois Lane believes that she loves Superman

If (1) is true, however, and Superman is Clark Kent, then we might expect that we would state exactly the same thing if we substitute the name ‘Clark Kent’ for the name ‘Superman’ in (1). That would give us (2):

(2)     Lois Lane believes that she loves Clark Kent

To many, however, it seems that there is something wrong with (2). If Superman walks into the room in his Clark Kent disguise, Lois will not light up as she does when he walks in without the disguise. If Lois is told that Clark Kent is in trouble, she will not infer that the man she loves is in trouble. A natural explanation for these facts is that the belief reported in (1) is not the same as the belief reported in (2). Since our reports about the beliefs of others may be false if we do not take into consideration the mode of presentation under which the objects of those beliefs are thought of by the holder of the belief, it seems that intensionality may be an essential feature of intentional states.
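
The failure can be put schematically. Using ‘Bel(x, p)’ as shorthand for ‘x believes that p’ (a notation, with illustrative predicates, introduced here purely for convenience), ordinary contexts license the substitution of co-referring names, while belief contexts apparently do not:

        Flies(Superman), Superman = Clark Kent   ⊢   Flies(Clark Kent)

        Bel(Lois, Loves(Lois, Superman)), Superman = Clark Kent   ⊬   Bel(Lois, Loves(Lois, Clark Kent))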

Another phenomenon that seems to tie intentionality to intensionality is shown in the fact that we cannot infer from the fact that someone has a belief about x, that x exists. This is unusual, since for most cases of predication (ascription of a property to an object), the truth of the ascription allows us to infer that the object exists. For example, if the claim that the sun is bright is true, it would seem to follow that there must be such a thing as the sun. That is, predication ordinarily permits existential generalization: if a property is truly predicated of an object, then some object with that property exists (Fa → ∃xFx). However, from the fact that I believe the sun is bright, it does not follow that there is such a thing as the sun. After all, I might just as easily believe, as Kant did, that phlogiston is the cause of combustion, but as we know, there is no such thing as phlogiston. Combining these two observations yields a third: neither the assertion nor the denial of a report of an intentional state entails that the proposition the intentional state is about is true or false. For example, we could truly assert that Kant believed that phlogiston causes combustion, but this does not entail that it is true that phlogiston causes combustion.
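
In the same illustrative notation, and extending the article’s own schema Fa → ∃xFx, the two failures can be displayed together:

        Fa   ⊢   ∃x Fx                      (ordinary predication licenses existential generalization)

        Bel(a, Fb)   ⊬   ∃x Fx              (belief ascriptions do not)

        Bel(a, p)   ⊬   p,  and  ¬Bel(a, p)   ⊬   ¬p      (nor do they entail the truth or falsity of p)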

Chisholm (1956) thought that an intentional state is any state whose description has these three features: failure to preserve truth under the inter-substitution of co-referring terms (such as ‘Clark Kent’ for ‘Superman’), failure to entail the existence of the intentional object (existential generalization), and failure to entail the truth of the embedded proposition (such as the proposition the thinker is said to believe).

However, these criteria do not seem to hold up for all intentional states. While it does not follow from the fact that Kant believes phlogiston causes combustion that there is such a thing as phlogiston, or that it is true that phlogiston causes combustion, these things would seem to follow if we held that Kant knew that phlogiston causes combustion. That is, it does not seem possible to have knowledge of things that do not exist, or of propositions that are not true: if someone knows Fa, then an object with the property F must exist, and if someone knows that p, then p must be true. Knowledge ascriptions therefore do not satisfy the second and third conditions proposed by Chisholm, and yet they are surely intentional states. Perceptual states, which also seem clearly to be intentional states, do not obviously satisfy any of the conditions. You cannot perceive something that does not exist, you cannot perceive that p is the case if p is not the case, and it is possible to intersubstitute co-referring terms in descriptions of perceptions. If it is true that Jimi Hendrix saw Bob Dylan at Woodstock, then it is true that Jimi Hendrix saw Robert Zimmerman at Woodstock, because Bob Dylan is Robert Zimmerman. Hendrix might not have believed that he saw Robert Zimmerman, or known that he saw Robert Zimmerman, but nevertheless, if he saw Bob Dylan, he saw Robert Zimmerman.

There is surely an important connection between intentionality and intensionality, then, but how it works in detail is clearly more complex than Chisholm thought.

c. Intentionality and Consciousness

A state of a creature is a conscious state if there is something it is like for the creature to be in that state. There is something it feels like for a person to have their hand pressed onto a hot grill, but there is not anything it feels like for a cheese sandwich to be pressed onto a hot grill. Do these conscious states have an essential connection to intentionality? Might intentionality depend on consciousness, or vice versa?

Some views take conscious states to be a kind of intentional state—thus holding that consciousness depends on intentionality. There are good prima facie grounds for holding this view. It is not obvious how I could be conscious of a horse being before me without my conscious state being directed at, or about, the horse. The idea that conscious states are a species of intentional state can be teased out in various ways. We might say that conscious content is simply intentional content that is available for rational evaluation, so that if I am conscious that it is raining, I have a mental state about the rain that I can reflect upon (Dennett 1991). Or we could say that conscious states always represent the world as being in such-and-such a way, so that if I am conscious that it is raining, I have a mental state that represents the world as being rainy right now (Tye 1995). Or, again, we could say that conscious states are states that have been naturally selected to indicate to a subject that her environment is in such-and-such a way, and are therefore likewise intentional (Dretske 1995).

However, the view that there are ‘raw feels’ in our conscious experience that do not say anything at all about the world also has considerable pull. For example, you might think that when you are conscious of the warmth of the sun on your face you can indeed reflect upon that fact and judge that it is sunny where you are, but that the warm feeling itself does not tell you that it is sunny. On this view there are two things here: the warm feeling, and the subsequent judgment ‘it is sunny’, which, although formed on the basis of the feeling, is nevertheless distinct from it (Ryle 1949, Sellars 1956, Peacocke 1983). On this view, conscious states are not intentional in themselves, since they do not in themselves represent the world as being in any particular way, even if they can be used to make judgments about the world.

On the other hand, we might think the dependence runs the other way: that intentional states depend on consciousness. We might suppose that it is hard to make sense of the claim that we could have mental states about the world without the world feeling any way at all to us. Searle (1983), for example, thinks that our notion of the mind essentially involves the notion of consciousness, so he denies that there could be essentially unconscious mental states. To deal with the case of beliefs or desires that I am not currently consciously entertaining, he argues that these must at least have the potential to become conscious in order to be properly understood as mental states.

This dependence claim has its skeptics too, however. The position known as ‘epiphenomenalism’ holds that there is no essential role for consciousness to play in our lives: that consciousness is caused by, but itself plays no causal role in, other mental events. We may happen to have conscious experiences concurrent with some of the events in our lives (such as intentional events), and they may even stand in constant conjunction with those events, but this in itself is not evidence that a creature could not exist that carries out the same activities with no conscious experiences at all. A real-life example can get this intuition going. In a phenomenon sometimes called ‘blindsight’, subjects display an above-chance capacity to discriminate features of their environment while reporting that they have no corresponding conscious experience of those features. In one experiment, subjects are shown two drawings of a house, identical in every respect except that one house is represented as being on fire. When asked, the subjects insist that they can see no difference between the two houses (the flames fall within the damaged region of the subjects’ visual field). When pressed on which house they would prefer to live in, however, the subjects show an above-chance preference for the house that is not represented as being on fire. Since the subjects seem to have distinct attitudes to the two pictures, and hence distinct intentional states directed at each picture, and since there is no apparent variation in conscious experience, some take such cases to motivate the claim that it is possible to have intentional states without any conscious component.

d. Naturalizing Intentionality

Whatever the essence of intentionality might be, a further question that arises is whether we can ‘naturalize’ our account of it. That is to say, whether intentionality can be exhaustively described in the terms in which the laws of nature are expressed. There is a long tradition of holding that the mind is outside of space and time—that it is an immaterial substance—and on that view, since intentional states are mental states, intentionality could not be naturalized. But particularly in the 20th century, there has been a push to reject the view that the mind is immaterial and to try to account for the mind in terms of natural processes, such as causal relations, natural selection, and any other process that can be explained in terms of the laws of the natural sciences.

The attempt faces various challenges. We have already looked at one: if we take intentional states to depend on consciousness, and we hold that it is not possible to give a naturalized account of consciousness, then it follows that we cannot naturalize intentionality. But there is another particularly tricky puzzle facing the naturalization of intentionality in terms of causal relations. As we saw above (3b), at least some intentional states have the property of intensionality: it does not follow from the fact that I believe p that p is the case, and it does not follow from the fact that I do not believe p that p is not the case. Another way to put this is that our concepts do not always co-vary with the objects they represent. On the one hand, we can encounter the objects our concepts refer to without our concepts triggering, for example, when Lois Lane meets Clark Kent and the thought ‘that’s Superman’ fails to occur to her. And conversely, our concepts can be triggered when the object they refer to is absent, such as when I see a cow in the night and mistakenly think ‘there’s a horse’. Our concepts, in other words, can trigger when they should not, and can fail to trigger when they should. This is a problem for naturalizing intentionality, because the causal theory of intentionality (1b) is at the heart of attempts to naturalize intentionality, and the causal theory has trouble explaining intensionality. The causal theory holds that a concept refers to whatever causes it to trigger. But if Lois Lane bumps into Clark Kent and her superman concept fails to trigger, this would suggest that Lois Lane’s superman concept does not refer to Clark Kent. And that is not a good outcome, since Superman is Clark Kent. Similarly, if I see a cow in the night and my horse concept triggers, the causal account implies that my horse concept refers to cows in the night. And that is no good either.

Dretske (1981) argues that causal relations can in fact exhibit intensionality, so that we can naturalize intentionality. A compass, he argues, indicates the location of the North Pole because the North Pole causes the compass needle to point at it. He takes a compass to be a ‘natural indicator’ of the North Pole, and so to exhibit natural intentionality. But he thinks the compass also exhibits intensionality. In addition to indicating the North Pole, the compass also indicates the location of polar bears, because there are polar bears at the North Pole. However, if the polar bears move south, the compass will not continue to indicate their location. As a result, suggests Dretske, the compass exhibits intensionality: the compass can fail to indicate the location of polar bears, even though the location of polar bears is the North Pole, just as Lois Lane’s superman concept can fail to indicate Clark Kent, even though Clark Kent is Superman. There is a problem with this account, however, because the relationship between the location of polar bears and the North Pole is very different from the relationship between Superman and Clark Kent. The location of the polar bears can fail to be where the North Pole is, but Clark Kent cannot fail to be where Superman is. That is, the kind of failure to trigger that we are concerned to explain is where a concept fails to trigger in response to what is necessarily identical to its reference—not in response to something that merely happens to be co-instantiated with its reference on some occasions.

Another attempt to allow for these cases within a causal theory appeals to the notion of a natural function or telos (Matthen and Levy 1984, Millikan 1984, Dretske 1995, Papineau 1993). If the heart has been selected by evolution to pump blood, then we can say that the natural function of the heart is to pump blood. But functions can malfunction, as we see when the heart stops, thus failing to continue to pump blood. What distinguishes the correct from the incorrect activities of the heart is whether the heart is doing what it was selected for by evolution. The teleological theory of intentionality proposes that this same mechanism distinguishes the correct and incorrect triggers of a concept. When my horse concept tokens in response to my encounter with a cow in the night, it is malfunctioning, because it was selected to alert me to the presence of horses. This account faces several objections, but the clearest is that it rules out the possibility that a creature whose mental states did not come into being through natural selection could have thoughts. Although highly unlikely, it does not seem impossible that a being physically identical to a thinking person could come into existence by chance, through the right freak coincidence of physical events (in one story it involves lightning hitting a swamp and the right chemicals instantaneously bonding to form a molecule-for-molecule match of an adult human (Davidson 1987)). If the teleological theory of intentionality were right, such a being would have no intentional states, since its brain states would have no natural history, even though it would be physically and behaviorally indistinguishable from a thinking person. Many see this as a reductio ad absurdum of the teleological account, since it seems that by hypothesis such a being would be able to perceive, form desires and beliefs about its environment, and so forth.

Still another proposal is that we can distinguish correct from incorrect triggers of a concept in terms of the relationship they stand in to one another: the incorrect triggers of a concept only cause the concept to trigger because the correct triggers do, but the correct triggers do not cause the concept to trigger because the incorrect ones do (Fodor 1987). To return to the cow in the night example, the proposal is that if horses didn’t cause my horse concept to trigger, cows in the night wouldn’t either: the reason cows in the night cause it to trigger is that horses cause it to trigger, and cows in the night look like horses. But the reverse is not the case: if cows in the night didn’t cause my horse concept to trigger, this needn’t mean that horses wouldn’t. Correct and incorrect triggers can therefore be identified by this ‘asymmetric dependence’ relation they have to one another (see the schematic sketch below). When we try to explain why the correct triggers would continue to cause a concept to token even if the incorrect triggers didn’t, however, the proposal becomes less convincing. Returning to the Twin-Earth example, if we travel to Twin-Earth our water concept will be triggered by the watery-looking stuff there, presumably falsely. But since Twin-Earth water is by hypothesis ordinarily indistinguishable from Earth water, it seems wrong to say that if Twin-Earth water did not cause our water concept to trigger, Earth water still would. The reason Earth water causes our water concept to trigger, after all, is presumably that it looks, tastes and smells a certain way. But Twin-Earth water looks, tastes and smells exactly the same way, so it is far from clear why we should expect that if Twin-Earth water did not trigger our water concept, Earth water still would. Fodor (1998) replies that we should discount Twin-Earth worries because Twin-Earth does not exist. But it is not clear that this helps, since we could surely discover a substance on Earth that we might not be able to distinguish from water, in which case the same worry can be raised without discussing Twin-Earth.
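
The asymmetric dependence relation at the heart of Fodor’s proposal is naturally put in counterfactual terms. As a rough sketch (using ‘□→’ for the counterfactual conditional; the formulation is a paraphrase, not Fodor’s exact wording), let H be the connection ‘horses cause horse-tokens’ and C the connection ‘cows in the night cause horse-tokens’. Then C depends asymmetrically on H just in case:

        (¬H □→ ¬C)   and   ¬(¬C □→ ¬H)

That is: had horses not caused horse-tokens, cows in the night would not have either; but had cows in the night not caused horse-tokens, horses still would have.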

Needless to say, there are further arguments made on behalf of these proposals, but as things stand there is no widely accepted solution to the problem that intensionality presents for naturalizing intentionality.

4. References and Further Reading

  • Brandom, R. (1994). Making It Explicit: Reasoning, Representing, and Discursive Commitment. Cambridge, MA: Harvard University Press.
  • Brentano, F. (1874/1911/1973). Psychology from an Empirical Standpoint, London: Routledge and Kegan Paul.
  • Chalmers, D. (1996). The Conscious Mind, Oxford: Oxford University Press.
  • Chalmers, D. (2006). “Foundations of Two-Dimensional Semantics.” In M. Garcia-Carpintero and J. Macia (eds). Two-Dimensional Semantics: Foundations and Applications. Oxford: Oxford University Press.
  • Chisholm, R. M. (1956). “Perceiving: a Philosophical Study,” chapter 11, selection in D. Rosenthal (ed.), The Nature of Mind, Oxford: Oxford University Press, 1990.
  • Davidson, D. (1980). Essays on Events and Actions, Oxford: Clarendon Press.
  • Davidson, D. (1987). “Knowing One’s Own Mind.” In Proceedings and Addresses of the American Philosophical Association, 60: 441–58.
  • Dennett, D.C. (1991). Consciousness Explained. Boston: Little, Brown.
  • Dretske, F. (1981). Knowledge and the Flow of Information, Cambridge, Mass.: MIT Press.
  • Dretske, F. (1995). Naturalizing the Mind. Cambridge, Mass.: MIT Press.
  • Dreyfus, H.L. (ed.) (1982). Husserl, Intentionality and Cognitive Science, Cambridge, Mass.: MIT Press.
  • Evans, G. (1979). “Reference and Contingency.” The Monist, 62, 2 (April, 1979), 161-189.
  • Fodor, J.A. (1975). The Language of Thought, New York: Crowell.
  • Fodor, J.A. (1987). Psychosemantics, Cambridge, Mass.: MIT Press.
  • Fodor, J.A. (1998). Concepts: Where Cognitive Science Went Wrong, New York: Oxford University Press.
  • Fodor, J. A. and Lepore, E. (1992). Holism: A Shopper’s Guide. Oxford: Blackwell.
  • Føllesdal, D. (1982). “Husserl’s Notion of Noema,” in H.L. Dreyfus (ed.), Husserl, Intentionality and Cognitive Science, Cambridge, Mass.: MIT Press.
  • Frege, G. (1892/1952). “On Sense and Reference.” In P. Geach and M. Black (eds.), Philosophical Writings of Gottlob Frege, Oxford: Blackwell, 1952.
  • Goodman, N. (1968). Languages of Art: An Approach to a Theory of Symbols. Indianapolis: The Bobbs-Merrill Company.
  • Haugeland, J. (1981). “Semantic Engines: an Introduction to Mind Design.” In J. Haugeland (ed.), Mind Design, Philosophy, Psychology, Artificial Intelligence, Cambridge, Mass.: MIT Press, 1981.
  • Hinton, J.M., (1967). “Visual Experiences.” Mind, 76: 217–227.
  • Husserl, E. (1900/1970). Logical Investigations, (Engl. Transl. by Findlay, J.N.), London: Routledge and Kegan Paul.
  • Jackson, F. (1998). From Metaphysics to Ethics. Oxford: Oxford University Press.
  • Kaplan, D. (1979). “Dthat.” In P. French, T. Uehling, and H. Wettstein (eds.), Contemporary Perspectives in the Philosophy of Language, Minneapolis: University of Minnesota Press.
  • King, P. (2007). “Rethinking Representation in the Middle Ages.” In Representation and Objects of Thought in Medieval Philosophy, edited by Henrik Lagerlund, Ashgate Press: 81-100.
  • Kim, J. (1993). Mind and Supervenience, Cambridge: Cambridge University Press.
  • Kripke, S. (1972/1980). Naming and Necessity, Oxford: Blackwell.
  • Martin, M.G.F. (2002). “The Transparency of Experience.” Mind and Language, 17: 376–425.
  • Matthen, M. and Levy, E. (1984). “Teleology, Error, and the Human Immune System.” Journal of Philosophy, 81 (7): 351-372.
  • McDowell, J. (1994). Mind and World. Oxford: Oxford University Press.
  • McGinn, C. (1989). Mental Content, Oxford: Oxford University Press.
  • McGinn, C. (1990). Problems of Consciousness, Oxford: Blackwell.
  • Mill, J.S. (1884). A System of Logic, Ratiocinative and Inductive: Being a Connected View of the Principles of Evidence and the Methods of Scientific Investigation, New York: Harper.
  • Millikan, R.G. (1984). Language, Thought and Other Biological Categories, Cambridge, Mass.: MIT Press.
  • Mooney, T. (2010). “Understanding and Simple Seeing in Husserl.” Husserl Studies, 26: 19-48.
  • Moore, G.E. (1903). “The Refutation of Idealism.” Mind 12 (1903) 433-53.
  • Papineau, D. (1993). Philosophical Naturalism. Oxford: Blackwell.
  • Peacocke, C. (1983). Sense and Content: Experience, Thought and their Relations, Oxford: Oxford University Press.
  • Putnam, H. (1974). “The Meaning of ‘Meaning’,” in H. Putnam, Philosophical Papers, vol. II, Language, Mind and Reality, Cambridge: Cambridge University Press, 1975.
  • Recanati, F. (2013). Mental Files. Oxford University Press.
  • Russell, B. (1905/1956). “On Denoting,” in R. Marsh (ed.), Bertrand Russell, Logic and Knowledge, Essays 1901-1950, New York: Capricorn Books, 1956.
  • Russell, B. (1911). The Problems of Philosophy, (New York: Holt).
  • Ryle, G. (1949). The Concept of Mind. Oxford University Press.
  • Searle, J. (1958). “Do Proper Names have Sense?” Mind 67: 166-173.
  • Searle, J. (1983). Intentionality, Cambridge: Cambridge University Press.
  • Searle, J. (1994). “Intentionality (1),” in S. Guttenplan (ed.), A Companion to the Philosophy of Mind, Oxford: Blackwell.
  • Sellars, W. (1956/1997). “Empiricism and the Philosophy of Mind.” In Empiricism and the Philosophy of Mind: with an Introduction by Richard Rorty and a Study Guide by Robert Brandom, R. Brandom (ed.), Cambridge, MA: Harvard University Press.
  • Snowdon, P.F., (1981). “Perception, Vision and Causation.” Proceedings of the Aristotelian Society, New Series, 81: 175–92.
  • Soames, S. (2005). Reference and Description: The Case against Two-Dimensionalism. Princeton: Princeton University Press.
  • Strawson, P. (1966). The Bounds of Sense: An Essay on Kant’s Critique of Pure Reason. London: Methuen.
  • Tye, M. (1995). Ten Problems of Consciousness, Cambridge, Mass.: MIT Press.
  • Varela, F., Thompson, E., and Rosch E., (1991). The Embodied Mind: Cognitive Science and Human Experience, Cambridge, Mass.: MIT Press.
  • Wittgenstein, L. (1953). Philosophical Investigations. Oxford: Blackwell.

 

Author Information

Cathal O’Madagain
Email: cathalcom@gmail.com
Ecole Normale Superieure, Paris
France

Ethical Expressivism

Broadly speaking, the term “expressivism” refers to a family of views in the philosophy of language according to which the meanings of claims in a particular area of discourse are to be understood in terms of whatever non-cognitive mental states those claims are supposed to express. More specifically, an expressivist theory of claims in some area of discourse, D, will typically affirm both of the following theses. The first thesis—psychological non-cognitivism—states that claims in D express mental states that are characteristically non-cognitive. Non-cognitive states are often distinguished by their world-to-mind direction of fit, which contrasts with the mind-to-world direction of fit exhibited by cognitive states like beliefs. Some common examples of non-cognitive states are desires, emotions, pro- and con-attitudes, commitments, and so forth. According to the second thesis—semantic ideationalism—the meanings or semantic contents of claims in D are in some sense given by the mental states that those claims express. This is in contrast with more traditional propositional or truth-conditional approaches to meaning, according to which the meanings of claims are to be understood in terms of either their truth-conditions or the propositions that they express.

An expressivist theory of truth claims—that is, claims of the form “p is true”—might hold that (i) “p is true” expresses a certain measure of confidence in, or agreement with, p, and that (ii) whatever the relevant mental state is (for example, agreement with p), that state just is the meaning of “p is true”. In other words, when we claim that p is true, we neither describe p as true nor report the fact that p is true; rather, we express some non-cognitive attitude toward p (see Strawson 1949). Similar expressivist treatments have been given to knowledge claims (Austin 1970; Chrisman 2012), probability claims (Barker 2006; Price 2011; Yalcin 2012), claims about causation (Coventry 2006; Price 2011), and even claims about what is funny (Gert 2002; Dreier 2009).

“Ethical expressivism”, then, is the name for any view according to which (i) ethical claims—that is, claims like “x is wrong”, “y is a good person”, and “z is a virtue”—express non-cognitive mental states, and (ii) these states make up the meanings of ethical claims. (I shall henceforth use the term “expressivism” to refer only to ethical expressivism, unless otherwise noted.) This article begins with a brief account of the history of expressivism, and an explanation of its main motivations. This is followed by a description of the famous Frege-Geach Problem, and of the role that it played in shaping contemporary versions of the view. While these contemporary expressivisms may avoid the problem as it was originally posed, recent work in metaethics suggests that Geach’s worries were really just symptoms of a much deeper problem, which can actually take many forms. After characterizing this deeper problem—the Continuity Problem—and some of its more noteworthy manifestations, the article explores a few recent trends in the literature on expressivism, including the advent of so-called “hybrid” expressivist views. See also "Non-Cognitivism in Ethics."

Table of Contents

  1. Expressivism and Non-Cognitivism: History and Motivations
  2. The Frege-Geach Problem and Hare’s Way Out
  3. The Expressivist Turn
  4. The Continuity Problem
    1. A Puzzle about Negation
    2. Making Sense of Attitude Ascriptions
    3. Saving the Differences
  5. Recent Trends
    1. Expressivists’ Attitude Problem
    2. Hybrid Theories
    3. Recent Work in Empirical Moral Psychology
  6. References and Further Reading

1. Expressivism and Non-Cognitivism: History and Motivations

The first and primary purpose of this section is to lay out a brief history of ethical expressivism, paying particular attention to its main motivations. In addition to this, the section will also answer a question that many have had about expressivism, namely: what is the difference between expressivism and “non-cognitivism”?

The difference is partly an historical one, such that a history of expressivism must begin with its non-cognitivist ancestry. Discussions of early non-cognitivism typically involve three figures in particular—A. J. Ayer, C. L. Stevenson, and R. M. Hare—and in that respect, this one will be no different. But rather than focusing upon the substance of their views, in this section, we will be more interested in the main considerations that motivated them to take up non-cognitivism in the first place. As we shall see, early non-cognitivist views were motivated mostly by two concerns: first, a desire to avoid unwanted ontological commitments, especially to a realm of “spooky,” irreducibly normative properties; and second, a desire to capture an apparently very close connection between sincere ethical claims and motivation.

In the case of Ayer, his motivation for defending a version of non-cognitivism was relatively clear, since he explains in the Introduction of the second edition of Language, Truth, and Logic (1946), “[I]n putting forward the theory I was concerned with maintaining the general consistency of my position [logical positivism].” As is well known, logical positivists were rather austere in their ontological accommodations, and happy to let the natural sciences decide (for the most part) what gets accommodated. In fact, a common way to interpret their verificationism is as a kind of method for avoiding unwanted ontological commitments—“unwanted” because they do not conform to what Ayer himself described as his and other positivists’ “radical empiricism.” Claims in some area of discourse are meaningful, in the ordinary sense of that term—which, for Ayer, is just to say that they express propositions—only if they are either analytic or empirically verifiable. Claims that are neither analytic nor empirically verifiable—like most religious claims, for instance—are meaningless; they might express something, but not propositions.

Ayer’s positivism could perhaps make room for moral properties as long as those properties were understood as literally nothing but the natural properties into which philosophers sometimes analyze them—for example, maximizing pleasure, since this is in principle verifiable—but it left no room at all for the irreducibly normative properties that some at the time took to be the very subject-matter of ethics (see Moore 1903). So in order to “maintain the general consistency of his position,” and to avoid any commitment to empirically unverifiable, irreducibly normative properties, Ayer’s positivism meant that he had to construe ordinary ethical claims as expressing something other than propositions. Moreover, for reasons unimportant to my purposes here, he argued that these claims express non-cognitive, motivational states of mind—in particular, emotions. It is for this reason that Ayer’s brand of non-cognitivism is often referred to as “emotivism”.

Stevenson likely shared some of Ayer’s ontological suspicions, but this pretty clearly is not what led him to non-cognitivism. Rather than being concerned to maintain the consistency of any pre-conceived philosophical principles, Stevenson begins by simply observing our ordinary practices of making ethical claims, and then he asks what kind of analysis of “good” is able to make the best sense out of these practices. For instance, in practice, he thinks ethical claims are made more to influence others than to inform them. In fact, in general, Stevenson seems especially impressed with what he called the “magnetism” of ethical claims—that is, their apparently close connection to people’s motivational states. But he thinks that other attempts to analyze “good” in terms of these motivational states have failed on two counts: (a) they make genuine ethical disagreement impossible, and (b) they compromise the autonomy of ethics, assigning ethical facts to the province of psychology, or sociology, or one of the natural sciences.

According to Stevenson, these other theories err in conceiving the connection between ethical claims and motivational states in terms of the former describing, or reporting, the latter—so that, for instance, the meaning of “Torture is wrong” consists in something like the proposition that I (the speaker) disapprove of torture. This is what led to problems (a) and (b) from above: two people who are merely describing or reporting their own attitudes toward torture cannot be genuinely disagreeing about its wrongness; and if the wrongness of torture were really just a matter of people’s attitudes toward it, then ethical inquiries could apparently be settled entirely by such means as introspection, psychoanalysis, or even just popular vote. Stevenson’s non-cognitivism, then, can be understood as an attempt to capture the relation between ethical claims and motivational states in a way that avoids these problems.

The solution, he thinks, is to allow that ethical claims have a different sort of meaning from ordinary descriptive claims. If ordinary descriptive claims have propositional meaning—that is, meaning that is a matter of the propositions they express—then ethical claims have what Stevenson called emotive meaning. “The emotive meaning of a word is a tendency of a word, arising through the history of its usage, to produce (result from) affective responses in people.  It is the immediate aura of feeling which hovers about a word” (Stevenson 1937, p.23; see also Ogden and Richards 1923, 125ff). A claim like “Torture is the subject of today’s debate” may get its meaning from a proposition, but the claim “Torture is wrong” has emotive meaning, in that its meaning is somehow to be understood in terms of the motivational states that it is typically used either to express or to arouse.

If Ayer and Stevenson apparently disagreed over the meaningfulness of ethical claims, with Ayer at times insisting that such claims are meaningless, and Stevenson allowing that they have a special kind of non-propositional meaning, they were nonetheless united in affirming a negative semantic thesis, sometimes called semantic non-factualism, according to which claims in some area of discourse—in this case, ethical claims—do not express propositions, and, consequently, do not have truth-conditions. Regardless of whether or not ethical claims are meaningful in some special sense, they are not meaningful in the same way that ordinary descriptive claims are meaningful, that is, in the sense of expressing propositions. Ayer and Stevenson were also apparently united in affirming what we earlier called psychological non-cognitivism. So as the term shall be used here, ‘ethical non-cognitivism’ names any view that combines semantic non-factualism and psychological non-cognitivism, with respect to ethical claims.

According to Hare, ethical claims actually have two kinds of meaning: descriptive and prescriptive. To call a thing “good” is both (a) to say or imply that it has some context-specific set of non-moral properties (this is the claim’s descriptive meaning), and (b) to commend the thing in virtue of these properties (this is the claim’s prescriptive meaning). But importantly, the prescriptive meaning of ethical claims is primary: the set of properties that I ascribe to a thing when calling it “good” varies from context to context, but in all contexts, I use “good” for the purpose of commendation. For Hare, then, ethical claims are used not to express emotions, or to excite the emotions of others, but rather to guide actions. They do this by taking the imperative mood. That is, they are first and foremost prescriptions. For this reason, Hare’s view is often called “prescriptivism”.

It may be less clear than it was in the case of Ayer and Stevenson whether Hare’s prescriptivism ought to count as a version of non-cognitivism. After all, it is not uncommon to suppose that sentences in the imperative mood still have propositional content. Since he rarely goes in for talk of “expression”, it is unclear whether Hare is a psychological non-cognitivist. However, it would nonetheless be fair to say that, since prescriptions do not have truth-conditions, Hare is committed to saying that the relationship between prescriptive ethical claims and propositions is fundamentally different from that between ordinary descriptive claims and propositions; and in this sense, it does seem as if he is committed to a form of semantic non-factualism. It also seems right to think that if we do not express any sort of non-cognitive, approving attitude toward a thing when we call it “good,” then we do not really commend it. So even if he is not explicit in his adherence to it, Hare does seem to accept some form of psychological non-cognitivism as well.

Also unclear are Hare’s motivations for being an ethical non-cognitivist. By the time Hare published The Language of Morals (1952), non-cognitivism was already the dominant view in moral philosophy. So there was much less of a need for Hare to motivate the view than there was for Ayer and Stevenson a couple of decades earlier. Instead, Hare’s concern was mostly to give a more thorough articulation of the view than the other non-cognitivists had, and one sophisticated enough to avoid some of the problems that had already arisen for earlier versions of the view.

One thing that does appear to have motivated Hare’s non-cognitivism, however, is its ability to explain intuitions about moral supervenience. Most philosophers agree that there is some kind of relationship between a thing’s moral status and its non-moral features, such that two things cannot have different moral statuses without also having different non-moral features. This is roughly what it means to say that a thing’s moral features supervene upon its non-moral features. For example, if it is morally wrong for Stan to lie to his teacher, but not morally wrong for Stan to lie to his mother, then there must be some non-moral difference between the two actions that underlies and explains their moral difference, for example, something to do with Stan’s reasons for lying in each case. While non-philosophers may not be familiar with the term “supervenience”, the fact that we so often hold people accountable for judging like cases suggests that we do intuitively take the moral to supervene upon the non-moral.

Those philosophers, like Moore, who believe in irreducibly normative properties must explain how it is that, despite apparently not being reducible to non-moral properties, these properties are nonetheless able to supervene upon non-moral properties, which has proven to be an especially difficult task (see Blackburn 1988b). But non-cognitivists like Hare do not shoulder this difficult metaphysical burden. Instead, they explain intuitions about moral supervenience in terms of rational consistency. If Joan commends something in virtue of its non-moral properties, but then fails to commend something else with an identical set of properties, then she is inconsistent in her commendations, and thereby betrays a certain sort of irrationality. It is this simple expectation of rational consistency, and not some complicated thesis about the ontological relations that obtain between moral and non-moral properties, that explains our intuitions about moral supervenience.

Not long after Hare’s prescriptivism hit the scene, ethical non-cognitivism would be the target of an attack from Peter Geach. Given that the attack was premised upon a point made earlier by German philosopher Gottlob Frege, it has come to be known as the Frege-Geach Problem for non-cognitivism. In the next section, we will see what the Frege-Geach Problem is. Before doing so, however, let us briefly return to the question raised at the beginning of this section: what is the difference between expressivism and non-cognitivism?

In the introduction, we saw that ethical expressivism is essentially the combination of two theses concerning ethical claims: psychological non-cognitivism and semantic ideationalism. As we will see in Sections 2 and 3, the Frege-Geach Problem pressures the non-cognitivist to say more about the meanings of ethical claims than just the non-factualist thesis that they are not comprised of truth-evaluable propositions. It is partly in response to this pressure that contemporary non-cognitivists have been moved to accept semantic ideationalism. So the difference between expressivism and non-cognitivism is historical, but it is not merely historical.  Rather, the difference is substantive as well: both expressivists and non-cognitivists accept some form of psychological non-cognitivism; but whereas the earlier non-cognitivists accepted a negative thesis about the contents of ethical claims—essentially, a thesis about how ethical claims do not get their meanings—contemporary expressivists accept a positive thesis about how ethical claims do get their meanings: ethical claims mean what they do in virtue of the non-cognitive mental states they express. It should be noted, however, that there are still many philosophers who use the terms “non-cognitivism” and “expressivism” interchangeably.

2. The Frege-Geach Problem and Hare’s Way Out

Non-cognitivist theories have met with a number of objections throughout the years, but none as famous as the so-called Frege-Geach Problem. As a point of entry into the problem, observe that there are ordinary linguistic contexts in which it seems correct to say that a proposition is being asserted, and contexts in which it seems incorrect to say that a proposition is being asserted.  Consider the following two sentences:

(1)        It is snowing.

(2)        If it is snowing, then the kids will want to play outside.

In ordinary contexts, to make claim (1) is to assert that it is snowing. That is, when a speaker utters (1), she puts forward a certain proposition—in this case, the proposition that it is snowing—as true. Accordingly, if we happen to know that it is not snowing, it could be appropriate to say that the speaker is wrong.  But when a speaker utters (2), she does not thereby assert that it is snowing. Someone can coherently utter (2) without having any idea whether it is snowing, or even knowing that it is not snowing. In the event that it is not snowing, we should not then say that the speaker of (2) is wrong. However, whether “It is snowing” is being asserted or not, it surely means the same thing in the antecedent of (2) as it does in (1). Equally, while we should not say that the speaker of (2) is wrong if it happens not to be snowing, it would nonetheless be correct, in that event, to say that both (1) and the antecedent of (2) are false.

This is what Geach calls “the Frege point,” a reference to German philosopher Gottlob Frege: “A thought may have just the same content whether you assent to its truth or not; a proposition may occur in discourse now asserted, now unasserted, and yet be recognizably the same proposition” (Geach 1965, p.449). The best way to account for the facts that (a) claim (1) and the antecedent of (2) have the same semantic contents, and that (b) they are both apparently capable of truth and falsity, is to suppose that claim (1) and the antecedent of (2) both express the proposition that it is snowing. So apparently, a claim’s expressing a proposition is something wholly independent of what a speaker happens to be doing with the claim, e.g., whether asserting it or not.

Now, we should note two things about the theories of early non-cognitivists like Ayer, Stevenson, and Hare. First, they are meant only to apply to claims in the relevant area of discourse—in this case, ethical claims—and are not supposed to generalize to other sorts of claims. In other words, theirs are apparently specialized, or “local,” semantic theories. So, for instance, most ethical non-cognitivists would agree that claim (1) expresses the proposition that it is snowing, and that this accounts for the meaning of (1). Second, perhaps understandably, ethical non-cognitivists focus their theories almost entirely upon ethical claims when they are asserted. The basic question is always something like this: what really is going on when a speaker makes an assertion of the form ‘x is wrong’? Does the speaker thereby describe x as wrong? Or might it be a kind of fallacy to assume that the speaker is engaged in an act of description, based only upon the surface grammar of the sentence? Might she instead be doing something expressive or evocative? Geach observes, “Theory after theory has been put forward to the effect that predicating some term ‘P’—which is always taken to mean: predicating ‘P’ assertorically—is not describing an object as being P but some other ‘performance’; and the contrary view is labeled ‘the Descriptive Fallacy’” (Geach 1965, p.461). Little attention is paid to ethical claims in contexts where they are not being asserted.

The Frege-Geach Problem can be understood as a consequence of these two features of non-cognitivist theories. As we saw earlier with claims (1) and (2), when we embed a claim into an unasserted context, like the antecedent of a conditional, we effectively strip the claim of its assertoric force. Claim (1) is assertoric, but the antecedent of (2) is not, despite having the same semantic content. But as Geach points out, exactly the same phenomenon occurs when we take a claim at the heart of some non-cognitivist theory and embed it into an unasserted context. This is why the Frege-Geach Problem is sometimes called the Embedding Problem. For example, consider the following two claims, similar in form to claims (1) and (2):

(3)        Lying is wrong.

(4)        If lying is wrong, then getting your little brother to lie is wrong.

As with claims (1) and (2) above, the relationship between a speaker and claim (3) is importantly different from the relationship between a speaker and the antecedent of claim (4). At least in ordinary contexts, a speaker of (3) asserts that lying is wrong, whereas a speaker of (4) does no such thing. But, assuming “the Frege point” applies here as well, the semantic contents of (3) and the antecedent of (4) do not depend upon whether they are being asserted or not. In both cases, their contents ought to be the same; and therein lies the rub for ethical non-cognitivists.

Given that their theories are meant to apply only to ethical claims, and not to claims in other areas of discourse, non-cognitivists are apparently committed to telling a radically different story about the semantic content of (3), as compared to the propositional story they would presumably join everyone else in telling about the contents of claims like (1) and (2). But whatever story they tell about the content of (3), it is unclear how it could apply coherently to the antecedent of (4) as well. Take Ayer, for instance. According to Ayer, claim (3) is semantically no different from

(3’)      Lying!!

“where the shape and thickness of the exclamation marks show, by a suitable convention, that a special sort of moral disapproval is the feeling which is being expressed” (Ayer 1946/1952, p. 107). Ayer believed that speakers of claims like (3) are not engaged in acts of description, but rather acts of expressing their non-cognitive attitudes toward various things. This is how Ayer’s theory treats the contents of ethical claims when they are asserted. Now, absent some independently compelling reason for thinking that “the Frege point” should not apply here, the same analysis ought to be given to the antecedent of (4). But the same analysis cannot be given to the antecedent of (4). For, just as a speaker can sincerely and coherently utter (2) without believing that it is snowing, a speaker can sincerely and coherently utter (4) without disapproving of lying. So whatever Ayer has to say about the content of the antecedent of (4), it cannot be that it consists in the expression of “a special sort of moral disapproval,” since a speaker of (4) does not express disapproval of lying. Apparently, then, he is committed to saying, counter-intuitively, that the contents of (3) and the antecedent of (4) are different.

As Geach poses it, the problem for the ethical non-cognitivist at this point is actually two-fold (see especially Geach 1965: 462-465). First, the non-cognitivist must explain how ethical claims are able to function as premises in logical inferences in the first place, if they do not express propositions. Traditionally, inference in logic is thought to be a matter of the truth-conditional relations that hold between propositions, and logical connectives like “and”, “or”, and “if-then” are thought to be truth-preserving functions from propositions to propositions. But as we have already seen, ethical non-cognitivists deny that ethical claims are even in the business of expressing propositions. So how, Geach wonders, are we apparently able to infer

(5)        Therefore, getting your little brother to lie is wrong

from (3) and (4), if the content of (3) is nothing more than an attitude of disapproval toward lying?  Or consider:

(6)        Lying is wrong or it isn’t.

Claim (6) can be inferred from (3) by a familiar logical principle, and in non-ethical contexts, we account for this by explaining how disjunction relates two or more propositions. But how can someone who denies that (3) expresses a proposition explain the relationship between (3) and (6)? The second part of the problem, related to the first, is that the non-cognitivist must explain why the inference from (3) and (4) to (5), for instance, is a valid one. As any introductory logic student knows well, the validity of modus ponens depends upon the minor premise and the antecedent of the major premise having the same content. Otherwise, the argument equivocates, and the inference is invalid. But as we just saw, on the theories of non-cognitivists like Ayer, claim (3) and the antecedent of (4) apparently do not have the same content. So Ayer seems committed to saying that what appears to be a straightforward instance of modus ponens is in fact an invalid argument. This is the so-called Frege-Geach Problem for non-cognitivism as Geach originally put it.

In response to an argument very much like Geach’s (see Searle 1962), Hare appears to give non-cognitivists a “way out” of the Frege-Geach Problem (Hare 1970). As Hare sees it, the matter ultimately comes down to whether or not the non-cognitivist can adequately account for the compositionality of language, that is, the way the meanings of complex sentences are composed of the meanings of their simpler parts. As has already been noted, linguists and philosophers of language have traditionally done this by telling a story about propositions and the various relations that may hold between them—the meaning of (2), for instance, is composed of (a) the proposition that it is snowing, (b) the proposition that the kids will want to play outside, and (c) the conditional function “if-then”. The challenge for the non-cognitivist is simply to find another way to account for compositionality—though, it turns out, this is no simple matter.

Hare’s own proposal was to think of the meanings of ethical claims in terms of the sorts of acts for which they are suited and not in terms of propositions or mental states. The claim “Lying is wrong,” for instance, is especially suited for a particular sort of act, namely, the act of condemning lying. Thinking of the meanings of ethical claims in this way allows Hare and other non-cognitivists to effectively concede “the Frege point,” since suitability for an act is something wholly independent of whether a claim is being asserted or not. It allows them, for instance, to say that the content of (3) is the same as the content of the antecedent of (4), which, we saw, was a problem for theories like Ayer’s. From here, accounting for the meanings of complex ethical claims, like (4) and (6), is a matter of conceiving logical connectives not as functions from propositions to propositions, but rather as functions from speech acts to speech acts. If non-cognitivists could do something like this, that is, draw up a kind of “logic of speech acts”, then they would apparently have the resources for meeting both of Geach’s challenges. They could explain how ethical claims can function as premises in logical inferences, and they could explain why some of those inferences, and not others, are valid. Unfortunately, Hare himself stopped short of working out such a logic, but his 1970 paper would nonetheless pave the way for future expressivist theories and their own responses to the Frege-Geach Problem.
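
The contrast between the two approaches to compositionality can be sketched as follows (the double brackets and function names are illustrative conventions introduced here, not Hare’s own notation). The traditional account treats connectives as functions on propositions; Hare’s proposal would instead require connectives that operate on speech-act types:

        ⟦if p then q⟧  =  IF-THEN(⟦p⟧, ⟦q⟧)        where ⟦p⟧ and ⟦q⟧ are propositions

        ⟦if A then B⟧  =  IF-THEN*(⟦A⟧, ⟦B⟧)       where ⟦A⟧ and ⟦B⟧ are act-types, such as condemning lying

The unfinished task is to define functions like IF-THEN* and to say which sequences of such acts count as valid inferences.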

3. The Expressivist Turn

Earlier, it was noted that the difference between non-cognitivism and expressivism is both historical and substantive. To repeat, ethical non-cognitivists were united in affirming the negative semantic thesis that ethical claims do not get their meanings from truth-evaluable propositions, as in semantic non-factualism. But as we have already seen with Hare, the Frege-Geach Problem pressures non-cognitivists to say something more than this, in order to account for the meanings of both simple and complex ethical claims, and to explain how some ethical claims can be inferred from others.

Contemporary ethical expressivists respond to this pressure by doing just that: while still affirming the semantic non-factualism of their non-cognitivist ancestors, expressivists nowadays add to this the thesis that was earlier called semantic ideationalism. That is, they think that the meanings of ethical claims are constituted not by propositions, but by the very non-cognitive mental states that they have long been thought to express. In other words, if non-cognitivists “removed” propositions from the contents of ethical claims, then expressivists “replace” those propositions with mental states, or “ideas”—hence, ideationalism. It is this move, made primarily in response to the Frege-Geach Problem, and by following Hare’s lead, that constitutes the historical turn from ethical non-cognitivism to ethical expressivism. Both non-cognitivists and expressivists believe that ethical claims express non-cognitive attitudes, but expressivists are distinguished in thinking of the expression relation itself as a semantic one.

Ethical expressivism is often contrasted with another theory of the meanings of ethical claims according to which those meanings are closely related to speakers’ non-cognitive states of mind, namely, ethical subjectivism. Ethical subjectivism can be understood as the view that the meanings of ethical claims are propositions, but propositions about speakers’ attitudes. So whatever the relationship between claim (1) above and the proposition that it is snowing, the same relationship holds between claim (3) and the proposition that I (the speaker) disapprove of lying. So ethical subjectivists can also, with expressivists, say that ethical claims mean what they do in virtue of the non-cognitive states that they express. But whereas the expressivist accounts for this in terms of the way the claim itself directly expresses the relevant state, the subjectivist accounts for it in terms of the speaker indirectly expressing the relevant state by expressing a proposition that refers to it.

The contrast between expressivism and subjectivism is important not only for the purpose of understanding what expressivism is, but also for seeing a significant advantage that it is supposed to have over subjectivism. Suppose Jones and Smith are engaged in a debate about the wrongness of lying, with Jones claiming that it is wrong, and Smith claiming that it is not wrong. Presumably, for this to count as a genuine disagreement, it must be the case that their claims have incompatible contents. But according to subjectivism, the contents of their claims, respectively, are the propositions that I (Jones) disapprove of lying and that I (Smith) do not disapprove of lying. Clearly, though, these two propositions are perfectly compatible with each other. Where, then, is the disagreement? This is often thought to be a particularly devastating problem for ethical subjectivism, namely, that it cannot adequately account for genuine moral disagreement; but it is apparently not a problem for expressivists. According to expressivism, the disagreement is simply a matter of Jones and Smith directly expressing incompatible states of mind. This is one of the advantages of supposing that the semantic contents of ethical claims just are mental states, and not propositions about mental states.

Now, recall the two motivations that first led people to accept ethical non-cognitivism. The first was a desire to avoid any ontological commitment to “spooky,” irreducibly normative properties. Moral realists, roughly speaking, are those who believe that properties like goodness and wrongness have every bit the ontological status of other, less controversial properties, like roundness and solidity; that is, moral properties are no less “real” than non-moral properties. But especially for those philosophers committed to a thoroughgoing metaphysical naturalism, it is hard to see how things like goodness and wrongness could have such a status. The difficulty is compounded when it is noted, as Mackie famously does, that moral properties as realists typically conceive them are somehow supposed to have a kind of built-in capacity to motivate those who apprehend them (to say nothing of how they are supposed to be apprehended), a capacity apparently not had by any other property (see Mackie 1977, pp. 38-42). Ethical expressivists avoid this problem by denying that people who make ethical claims are even engaged in the task of ascribing moral properties to things in the first place. Ontologically speaking, expressivism demands little more of the world than people’s attitudes and the speakers who express them, and so it nicely satisfies the first of the two non-cognitivist desiderata.

The second desideratum was a desire to accommodate an apparently very close connection between ethical claims and motivation. In simple terms, motivational internalism is the view that a necessary condition for moral judgment is that the speaker be motivated to act accordingly. In other words, if Jones judges that lying is wrong, but has no motivation whatsoever to refrain from lying, or to condemn those who lie, or whatever, then internalists will typically say that Jones must not really judge lying to be wrong. Even if motivational internalism is false, though, it is surely right that we expect people’s ethical claims to be accompanied by motivations to act in certain ways; and when people who make ethical claims seem not to be motivated to act in these ways, we often assume either that they are being insincere or that something else has gone wrong. Sincere ethical claims just seem to “come with” corresponding motivations. Here, too, expressivism seems well suited to account for this feature of ethical claims, since it takes ethical claims to directly express non-cognitive states of mind (for example, desires, emotions, attitudes, or commitments), and these states are either capable of motivating by themselves, or at least closely tied to motivation. So while ethical expressivists distinguish themselves from earlier non-cognitivists by accepting the thesis of semantic ideationalism, they are no less capable of accommodating the very same considerations that motivated non-cognitivism in the first place.
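Stated a bit more explicitly, the internalist thesis just described is often given roughly the following form (this is one common gloss, not a quotation from any particular author):

(MI)      Necessarily, if a speaker sincerely judges that an action is wrong, then she has at least some motivation to refrain from performing it.

Expressivism promises a neat explanation of why something like (MI) should hold: if the judgment just is the expression of a motivationally charged state, the connection to motivation comes built in rather than needing to be added on.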

Finally, return to the Frege-Geach Problem. As we saw in the previous section, Geach originally posed it as a kind of logical problem for non-cognitivists: by denying that claims in the relevant area of discourse express propositions, non-cognitivists take on the burden of explaining how such claims can be involved in logical inference, and why some such inferences are valid and others invalid. Hare took a first step toward meeting this challenge by proposing that we understand the contents of ethical claims in terms of speech acts, and then work out a kind of “logic” of speech acts. Contemporary expressivists, since they understand the contents of ethical claims not in terms of speech acts but in terms of mental states, are committed to doing something similar with whatever non-cognitive states they think are expressed by these claims. In other words, as it is sometimes put, expressivists owe us a kind of “logic of attitudes.”

Here, again, is our test case:

(3)        Lying is wrong.

(4)        If lying is wrong, then getting your little brother to lie is wrong.

(5)        Therefore, getting your little brother to lie is wrong.

If the meanings of (3), (4), and (5) are to be understood solely in terms of mental states, and not in terms of propositions, how is it that we can infer (5) from (3) and (4)? And why is the inference valid?

In some of his earlier work on this, Blackburn (1984) answers these questions by suggesting that complex ethical claims like (4) express higher-order non-cognitive states, in this case, something like a commitment to disapproving of getting one’s little brother to lie upon disapproving of lying. If someone sincerely disapproves of lying, and is also committed to disapproving of getting her little brother to lie as long as she disapproves of lying—the two states expressed by (3) and (4), respectively—then she thereby commits herself to disapproving of getting her little brother to lie. This is one sense in which (5) might “follow from” (3) and (4), even if it is not exactly the entailment relation with which we are all familiar from introductory logic.

Furthermore, a familiar way to account for the validity of inferences like (3)-(5) is by saying that it is impossible for the premises to be true and the conclusion false. But if the expressivist takes something like the approach under consideration here, he will presumably have to say something different, since it is certainly possible for someone to hold both of the attitudes expressed by (3) and (4) without also holding the attitude expressed by (5). So, for instance, the expressivist might say something like this: while a person certainly can hold the attitudes expressed by (3) and (4) without also holding the attitude expressed by (5), such a person would nonetheless exhibit a kind of inconsistency in her attitudes—she would have what Blackburn calls a “fractured sensibility” (1984: 195). It is this inconsistency that might explain why the move from (3) and (4) to (5) is “valid,” provided that we allow for this alternative sense of validity. Recall that this is essentially the same sort of inconsistency of attitudes that Hare thought underlies our intuitions about moral supervenience.
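It may help to display the proposal schematically. The notation here is purely illustrative (it is not Blackburn’s own): let D(x) stand for disapproval of x, and C[D(x) → D(y)] for a commitment to disapproving of y upon disapproving of x.

(3′)       D(lying)

(4′)       C[D(lying) → D(getting your little brother to lie)]

(5′)       D(getting your little brother to lie)

Someone who is in states (3′) and (4′) but not in state (5′) violates her own commitment; it is this failure of attitudinal consistency, rather than any failure of truth-preservation, that the expressivist offers as the analogue of invalidity.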

This is just one way in which expressivists might attempt to solve the Frege-Geach Problem.  Others have attempted different sorts of “logics of attitudes,” with mixed results. In early twenty-first century discourse, the debate about whether such a thing as a “logic of attitudes” is even possible—and if so, what it should look like—is ongoing.

4. The Continuity Problem

Even if expressivists can solve, or at least avoid, the Frege-Geach Problem as Geach originally posed it, there is a deeper problem that they face, a kind of “problem behind the problem”, and that will be the subject of this section. To get a sense of the problem, consider that expressivists have taken a position that effectively pulls them in two opposing directions. On the one hand, since the earliest days of non-cognitivism, philosophers in the expressivist tradition have wanted to draw some sort of sharp contrast between claims in the relevant area of discourse and claims outside of that area of discourse, that is, between ethical and non-ethical claims. But on the other hand, and this is the deeper issue that one might think lies behind the Frege-Geach Problem, ethical claims seem to behave in all sorts of logical and semantic contexts just like their non-ethical counterparts. Ethical claims are apparently no different from non-ethical claims in being (a) embeddable into unasserted contexts, like disjunctions and the antecedents of conditionals, (b) involved in logical inferences, (c) posed as questions, (d) translated across different languages, (e) negated, (f) supported with reasons, and (g) used to articulate the objects of various states of mind, for example, we can say that Jones believes that lying is wrong, Anderson regrets that lying is wrong, and Black wonders whether lying is wrong, to name just a few. It is in accounting for the many apparent continuities between ethical and non-ethical claims that expressivists run into serious problems. So call the general problem here the Continuity Problem for expressivism.

One very significant step that expressivists have taken in order to solve the Continuity Problem is to expand their semantic ideationalism to apply to claims of all sorts, and not just to claims in the relevant area of discourse. So, in the same way that ethical claims get their meanings from non-cognitive mental states, non-ethical claims get their meanings from whatever states of mind they express. In other words, expressivists attempt to solve the Continuity Problem by swapping their “local” semantic ideationalism, that is, ideationalism specifically with respect to claims in the discourse of concern, for a more “global” ideationalist semantics intended to apply to claims in all areas of discourse. This is remarkable, as it represents a wholesale departure from the more traditional propositionalist semantics according to which sentences mean what they do in virtue of the propositions they express. Recall the earlier claims:

(1)        It is snowing.

(3)        Lying is wrong.

According to most contemporary expressivists, the meanings of both (1) and (3) are to be understood in terms of the mental states they express.  Claim (3) expresses something like disapproval of lying, and claim (1) expresses the belief that it is snowing, as opposed to the proposition that it is snowing. So even if ethical and non-ethical claims express different kinds of states, their meanings are nonetheless accounted for in the same way, that is, in terms of whatever mental states the relevant claims are supposed to express.

If nothing else, this promises to be an important first step toward solving the Continuity Problem. But taking this step, from local to global semantic ideationalism, may prove to be more trouble than it is worth, as it appears to raise all sorts of other problems, a few of which we shall consider here under the general banner of the Continuity Problem.

a. A Puzzle about Negation

Expressivism now appears to hinge on whether an ideationalist approach to semantics can account for all of the same logical and linguistic phenomena that the more traditional propositional or truth-conditional approaches to semantics can account for. With that in mind, consider a simple case of negation:

(1)        It is snowing.

(7)        It is not snowing.

On an ideationalist approach to meaning, (1) gets its meaning from the belief that it is snowing, and (7) gets its meaning from either the belief that it is not snowing, or perhaps a state of disbelief that it is snowing, assuming, for now, that a state of disbelief is something different from a mere lack of belief. A claim and its negation ought to have incompatible contents, and this is apparently how an ideationalist would account for the incompatibility of (1) and (7). But now consider a case of an ethical claim and its negation:

(3)        Lying is wrong.

(8)        Lying is not wrong.

We saw these claims earlier, in Section 3, when discussing how expressivists are supposed to be able to account for genuine moral disagreement better than ethical subjectivists can. Basically, expressivists account for such disagreement by supposing that a speaker of (3) and a speaker of (8) express incompatible mental states, as is the case with (1) and (7). But if the incompatible states in the case of (1) and (7) are states of belief that p and belief that not-p (or belief and disbelief), what are the incompatible states in this case?

The non-cognitive mental state expressed by (3) is presumably something like disapproval of lying. So what is the non-cognitive state that is expressed by (8)? On the face of it, this seems like it should be an easy question to answer, but upon reflection, it turns out to be really quite puzzling. Whatever is expressed by (8), it should be something that is independently plausible as the content of such a claim, and it should be something that is somehow incompatible with the state expressed by (3). But what is it?

To see why this is puzzling, consider the following three sentences (adapted from Unwin 1999 and 2001):

(9)        Jones does not think that lying is wrong.

(10)      Jones thinks that not lying is wrong.

(11)      Jones thinks that lying is not wrong.

These three sentences say three importantly different things about Jones. Furthermore, it seems as if the state attributed to Jones in (11) should be the very same state as the one expressed by (8) above. But again, what is that state? Let us proceed by process of elimination. It cannot be that (11) attributes to Jones a state of approval, that is, approving of lying. Presumably, for Jones to approve of lying would be for Jones to think that lying is right, or good. But that is not what (11) says; it says only that he thinks lying is not wrong. Nor can (11) attribute to Jones a lack of disapproval of lying, since that is what is attributed in (9), and as we have already agreed, (9) and (11) tell us different things about Jones. Moreover, (11) also cannot attribute to Jones the state of disapproval of not lying, since that is the state being attributed in (10). But at this point, it is hard to see what mental state is left to be attributed to Jones in (11), and to be the content of (8).
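The structure of the puzzle can be displayed schematically (the notation is illustrative, adapted from the way Unwin’s point is often presented): let B(...) stand for belief, W(...) for the wrongness predicate, and D(...) for disapproval. The cognitivist has a distinct reading for each of (9)-(11):

(9′)       not-B(W(lying))

(10′)     B(W(not lying))

(11′)     B(not-W(lying))

The expressivist, by contrast, seems to have only two places to put the negation:

(9″)      not-D(lying)

(10″)    D(not lying)

(11″)    ?

Nothing in the expressivist’s repertoire seems left over to play the role marked “?”, that is, to be the state attributed to Jones in (11) and expressed by (8).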

The expressivist does not want to say that (3) and (8) express incompatible beliefs, or states of belief and disbelief, as with (1) and (7), since beliefs are cognitive states, and we know that expressivists are psychological non-cognitivists. If (3) and (8) express beliefs, and we share with Hume the idea that beliefs by themselves are incapable of motivating, then we will apparently not have the resources for explaining the close connection between people sincerely making one of these claims and their being motivated to act accordingly. Nor does the expressivist want to say that (3) and (8) express inconsistent propositions, since that would be to abandon her semantic non-factualism. Propositions are often thought to determine truth conditions, and truth conditions are often thought to be ways the world might be. So to allow that (3) and (8) express propositions would presumably be to allow that there is a way the world might be that would make it true that lying is wrong. Furthermore, accounting for this would involve the expressivist in precisely the sort of moral metaphysical inquiries she seeks to avoid. For these reasons, it is crucial for the expressivist to find a non-cognitive mental state to be the content of (8). It must be something incompatible with the state expressed by (3), and it must be a plausible candidate for the state attributed to Jones in (11). But as we have seen, it is very difficult to articulate just what state it is.

Expressivists must show us that, even after accepting global semantic ideationalism, we are still able to account for all of the same phenomena as those accounted for by traditional propositional approaches to meaning. But here it seems they struggle even with something as simple as negation. Further, until they provide a satisfactory explanation of the contents of negated ethical claims, it will remain unclear whether they really do have a better account of moral disagreement than ethical subjectivists, as has long been claimed.

b. Making Sense of Attitude Ascriptions

Earlier, it was noted that ethical claims are no different from non-ethical claims in being able to articulate the objects of various states of mind. Let us now look more closely at why expressivists may have a problem accounting for this particular point of continuity between ethical and non-ethical discourse. Consider:

(12)      Frank fears that it is snowing.

(13)      Wanda wonders whether it is snowing.

(14)      Haddie hates that it is snowing.

Claims (12)-(14) ascribe three different attitudes to Frank, Wanda, and Haddie. Clearly, however, these three attitudes have something in common, something that can be represented by the claim from earlier:

(1)        It is snowing.

Traditionally, the way that philosophers of mind and language have accounted for this is by saying that (1) expresses the proposition that it is snowing, and that what all three of the attitudes ascribed to Frank, Wanda, and Haddie have in common is that they are all directed at one and the same proposition, that is, they all have the same proposition as their object.

By abandoning traditional propositional semantics, though, expressivists take on the burden of finding some other way of explaining how the contents of expressions like “fears that”, “wonders whether”, and “hates that” are supposed to relate to the content of whatever follows them. If the content of (1) is supposed to be something like the belief that it is snowing, as ideationalists suppose, and (1) is also supposed to be able to articulate the object of Frank’s fear, then the expressivist is apparently committed to thinking that Frank’s fear is actually directed at the belief that it is snowing. But, of course, Frank is not afraid of the belief that it is snowing—he is not afraid to believe that it is snowing—rather, he is afraid that it is snowing.

Things are no less problematic in the ethical case. For consider:

(15)      Frank fears that lying is wrong.

(16)      Wanda wonders whether lying is wrong.

(17)      Haddie hates that lying is wrong.

Here again, it seems right to say that the attitudes ascribed in (15)-(17) all share something in common, something that can be represented by the claim from earlier:

(3)        Lying is wrong.

But if it is denied that (3) expresses a proposition, as ethical expressivists and non-cognitivists always have, it becomes unclear how (3) could be used to articulate the object of those attitudes.  Focus upon (15) for a moment. Now, what are the contents of ‘fears that’ and ‘lying is wrong’, such that the latter is the object of the former? We presumably have one answer already, from the expressivist: the content of ‘lying is wrong’ in (15), like the content of (3), is an attitude of disapproval toward lying. However, on the plausible assumption that the content of “fears that” is an attitude of fear toward the content of whatever follows, we apparently get the expressivist saying that (15) ascribes to Frank a fear of disapproval of lying, or a fear of disapproving of lying. But surely that is not what (15) ascribes to Frank. He may fear these other things as well, but (15) says only that he fears that lying is wrong.

The expressivist may try to avoid this puzzle by insisting that “lying is wrong” as it appears in (15) has a content that is different from the content of (3), but this still leaves us wondering what the meanings of claims like (15)-(17) are supposed to be, according to the expressivist’s ideationalist semantics. As Schroeder explains, expressivists “owe an account of the meaning of each and every attitude verb, for example, fears that, wonders whether, and so on; just as much as they owe an account of ‘not’, ‘and’, and ‘if … then’. Very little progress has yet been made on how non-cognitivists [or expressivists] can treat attitude verbs, and the prospects for further progress look dim” (Schroeder 2008d, p.716).

c. Saving the Differences

One might think that a simple way to defeat any non-factualist account of ethical claims is simply to point out that we can coherently embed ethical claims into truth claims. It makes perfect sense, for instance, for someone to say, “It is true that lying is wrong.” Presumably, however, this could only make sense if whatever follows “It is true that” is the sort of thing that can be true. Of course, propositions are among the sorts of things that can be true; in fact, this is often thought to be their distinguishing characteristic. But non-factualists deny that ethical claims express propositions. So how do they account for the fact that the truth-predicate seems to apply just as well to ethical claims as it does to non-ethical claims?

If this were a devastating problem for non-cognitivists, then the non-cognitivist tradition in ethics would not have lasted for very long, since philosophers were well aware of the matter soon after Ayer first published Language, Truth, and Logic in 1936. The thought then—essentially just an application of Ramsey’s (1927) famous redundancy theory of truth—was that, in at least some cases, the truth-predicate does not actually ascribe some metaphysically robust property being true to whatever it is being predicated of. Rather, to add the truth-predicate to a claim is to do nothing more than to simply assert the claim by itself. In claiming that “It is true that lying is wrong,” on this view, a speaker expresses the very same state that is expressed by claiming only that “Lying is wrong,” and nothing more; hence, the “redundancy” of the truth predicate.
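The redundancy idea is often summarized with an equivalence schema (a common gloss on Ramsey’s view, not his own wording):

(T)        It is true that p if and only if p.

Given something like (T), prefixing “It is true that” to an ethical claim adds nothing to the claim itself, and so calls for no antecedent account of what would make the claim true.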

In early twenty-first century discourse, theories like Ramsey’s are referred to as deflationary or minimalist theories of truth, since they effectively “deflate” or “minimize” the ontological significance of the truth-predicate. Some ethical expressivists, in part as a way of solving the Continuity Problem, have taken to supplementing their expressivism with deflationism. The basic idea goes something like this: if we accept a deflationary theory of truth across the board, we can apparently say that ethical claims are truth-apt, in fact, every bit as truth-apt as any other sort of claim. This allows the expressivist to avoid simple versions of the objection noted at the beginning of this section.  Interestingly, the deflationism need not stop with the truth-predicate. We might also deflate the notion of a proposition by insisting that a proposition is just whatever is expressed by a truth-apt claim. As long as we allow that ethical claims are truth-apt, in some deflationary sense, we may now be able to say, for instance, that

(3)        Lying is wrong

expresses the proposition that lying is wrong, after all. If this is allowed, then the expressivist may now have the resources for accounting for the compositionality of ethical discourse in basically the same way in which traditional propositional semanticists would do so. The meanings of complex ethical claims are to be understood in terms of the propositions expressed by their parts. Once the notion of a proposition is deflated, we might just as well deflate the notion of belief by saying something to the effect that all it is for one to believe that p is for one to accept a claim that expresses the proposition that p. In these ways, perhaps an expressivist can “earn the right” to talk of truth, propositions, and beliefs, and perhaps also knowledge, in the ethical domain, just as they do in non-ethical domains.

This is the essence of Blackburn’s brand of expressivism, known commonly nowadays as ‘quasi-realism’. As we saw earlier, moral realists are those who believe that moral properties have every bit the same ontological status as other, less controversial properties, like roundness and solidity. This allows realists to account for things like truth, propositions, beliefs, and knowledge in the ethical domain in precisely the same way that we ordinarily do in other domains, such as those that include facts about roundness and solidity. By deflating the relevant notions, however, Blackburn and other moral non-realists are nonetheless supposed to be able to say all the things that realists say about moral truth, and the like; hence, “quasi”-realism.

There are at least two problems for ethical expressivists who take this approach to solving the Continuity Problem. The first is simply that deflationism is itself a very controversial view. In his own defense of a deflationary theory of truth, Paul Horwich addresses no fewer than thirty-nine “alleged difficulties” faced by such a theory (Horwich 1998). Granted, he apparently believes that all of these difficulties can be addressed with some degree of satisfaction, but few will deny that deflationary theories of truth represent a departure from the common assumption that truth is a real property of things, and that this property consists in something like a thing’s corresponding with reality. Deflationism may help expressivists avoid the Continuity Problem, but only at the cost of burdening them with defending deflationism against its many problems.

A second and more interesting problem, though, is that taking this deflationary route may, in the end, ruin what was supposed to be so unique about expressivism all along. In other words, there is a sense in which deflationism may be too good a response to the Continuity Problem. After all, at the core of ethical expressivism is the belief that there is some significant difference between ethical and non-ethical discourse. Recall again our two basic instances of each:

(1)        It is snowing.

(3)        Lying is wrong.

As we just saw, once deflationism is allowed to run its course, we end up saying remarkably similar things about (1) and (3). Both are truth-apt; both express propositions; both can be the objects of belief; both can be known; and so forth. But now you may be wondering: what, then, is supposed to be the significant difference that sets (3) apart from (1)? Or, another way of putting it: what would be the point of contention between an expressivist and her opponents if both parties agreed to deflate such notions as truth, proposition, and belief? This has sometimes been called the problem of “saving the differences” between ethical and non-ethical discourse.

One response to this problem might be to say that the relevant differences between ethical and non-ethical discourse actually occur at a level below the surface of the two linguistic domains. Recall that we deflated the notion of belief by saying that to believe that p is just to accept a claim that expresses the proposition that p. Using these terms, the expressivist might say that the main difference between (1) and (3) is a matter of what is involved in “accepting” the two claims. Accepting an ethical claim like (3) is something importantly different from accepting a non-ethical claim like (1), and presumably the difference has something to do with the types of mental states involved in doing so.  Whether or not this sort of response will work is the subject of an ongoing debate in early twenty-first century philosophical literature.

5. Recent Trends

While the Continuity Problem remains a lively point, or collection of points, of debate between expressivists and their critics, it is certainly not the only topic with which those involved in the literature are currently occupied. Here we review a few other recent trends in expressivist thought, perhaps the most notable among them being the advent of so-called “hybrid” expressivist theories.

a. Expressivists’ Attitude Problem

There are some who would say that the Continuity Problem just is the Frege-Geach Problem, that is, that the Frege-Geach Problem ought to be understood very broadly, so as to include all of the many issues associated with the apparent logical and semantic continuities between ethical and non-ethical discourse. Even so, ethical expressivism faces other problems as well. Let us now look briefly at an issue that is receiving more and more attention these days—the so-called Moral Attitude Problem for ethical expressivism.

Recall again that expressivists often claim to have a better way of accounting for the nature of moral disagreement than the account on offer from ethical subjectivists. The idea, according to the expressivist, is supposed to be that a moral disagreement is ultimately just a disagreement in non-cognitive attitudes. Rather than expressing propositions about their opposing attitudes—which, we saw earlier, would be perfectly compatible with each other—the two disagreeing parties directly express those opposing non-cognitive attitudes. But then, in our discussion of the puzzle about negation, we saw that the expressivist may actually owe us more than this. Specifically, she owes us an explanation of what, exactly, those opposing attitudes are supposed to be. If Jones claims that lying is wrong, and Smith claims that it is not wrong, then Jones and Smith are engaged in a moral disagreement about lying. The expressivist, presumably, will say that Jones expresses something like disapproval of lying. But then what is the state that is directly expressed by Smith’s claim, such that it disagrees, or is incompatible, with Jones’ disapproval?

According to the Moral Attitude Problem, the issue actually runs deeper than this, for there are more constraints on the expressivist’s answer than just that the state expressed by Smith be something incompatible with Jones’ disapproval of lying. In fact, Jones’ disapproval of lying may turn out to be no less mysterious than whatever sort of state is supposed to be expressed by Smith. After all, we disapprove of all sorts of things. Suppose that Jones also disapproves of Quentin Tarantino movies, but Smith does not. Presumably, this would not count as a moral disagreement, despite the fact that Jones and Smith are expressing mental states similar to those expressed in their disagreement about lying. So what is it, according to ethical expressivism, that makes the one disagreement, and not the other, a moral disagreement? This is especially puzzling given that expressivists often clarify their view by saying that moral disagreements are more like aesthetic disagreements, such as a disagreement over Tarantino films, than they are like disagreements over facts, such as whether or not it is snowing.

So the Moral Attitude Problem, basically, is the problem of specifying the exact type, or types, of attitude expressed by ethical claims, such that someone expressing the relevant state counts as making an ethical claim—as opposed to an aesthetic claim, or something else entirely. Judith Thomson raises something like the Moral Attitude Problem when she writes,

The [ethical expressivist] needs to avail himself of a special kind of approval and disapproval: these have to be moral approval and moral disapproval.  For presumably he does not wish to say that believing Alice ought to do a thing is having toward her doing it the same attitude of approval that I have toward the sound of her splendid new violin. (Thomson 1996, p.110)

And several years later, in a paper entitled “Some Not-Much-Discussed Problems for Non-Cognitivism in Ethics,” Michael Smith raises the same problem:

[Ethical expressivists] insist that it is analytic that when people sincerely make normative claims they thereby express desires or aversions.  But which desires and aversions … , and what special feature do they possess that makes them especially suitable for expression in a normative claim? (Smith 2001, p.107)

But it is only very recently that expressivists and their opponents have begun to give the Moral Attitude Problem the attention that it deserves (see Merli 2008; Kauppinen 2010; Köhler 2013; Miller 2013, pp.39-47, pp.81-87; and Björnsson and McPherson 2014).

What can the expressivist say in response? For starters, expressivists can, and should, point out that the Moral Attitude Problem is not unique to their view. Indeed, those who think that ethical claims express cognitive states, like beliefs—namely, ethical cognitivists—face a very similar challenge: Jones believes both that lying is wrong and that Quentin Tarantino movies are bad, but only one of these counts as a moral belief; what is it, exactly, that distinguishes the moral from the non-moral belief? Cognitivists will say that the one belief has a moral proposition as its content, whereas the other belief does not. But that just pushes the question back a step: what, now, is it that distinguishes the moral from the non-moral proposition? Whether it be a matter of spelling out the difference between moral and non-moral beliefs, or that between moral and non-moral propositions, cognitivists are no less burdened to give an account of the nature of moral thinking than are ethical expressivists.

In fact, Köhler argues that expressivists can actually take what are essentially the same routes in response to the Moral Attitude Problem as those taken by cognitivists. Cognitivists, he thinks, have just two options: they can either (a) characterize the nature of moral thinking by reference to some realm of sui generis moral facts which, when they are the objects of beliefs, make those beliefs moral beliefs, or else (b) do the same, but without positing a realm of sui generis moral facts, and instead identifying moral facts with some set of non-moral facts. Similarly, it seems expressivists have two options: they can either (a) say that “the moral attitude” is some sui generis state of mind, or else (b) insist that “the moral attitude” can be analyzed in terms of non-cognitive mental states with which we are already familiar, like desires and aversions, approval and disapproval, and so forth.

The second of these options for expressivists is certainly the more popular of the two. But according to Köhler, if expressivists are to be successful in taking this approach, they ought to conceive of the identity between “the moral attitude” and other, more familiar non-cognitive states in much the same way that naturalistic moral realists conceive of the identity between moral and non-moral facts—that is, either by insisting that the identity is synthetic a posteriori, as the so-called “Cornell realists” do with moral and non-moral facts, or by insisting that the identity is conceptual, but non-obvious, an approach to conceptual analysis proposed by David Lewis, and recently taken up by a few philosophers from Canberra. Otherwise, if an expressivist is comfortable allowing for a sui generis non-cognitive mental state to hold the place of “the moral attitude,” she should get to work explaining what this state is like. Indeed, Köhler argues that this can be done without violating expressivism’s long-standing commitment to metaphysical naturalism (see Köhler 2013, pp.495-507).

b. Hybrid Theories

Perhaps the most exciting of recent trends in the expressivism literature is the advent of so-called “hybrid” expressivist theories. The idea behind hybrid theories, very basically, is that we might be able to secure all of the advantages of both expressivism and cognitivism by allowing that ethical claims express both non-cognitive and cognitive mental states. Why call them hybrid expressivist views, then, and not hybrid cognitivist views? Recall that the two central theses of ethical expressivism are psychological non-cognitivism—the thesis that ethical claims express mental states that are characteristically non-cognitive—and semantic ideationalism—the thesis that the meanings of ethical claims are to be understood in terms of the mental states that they express. Since neither of these theses states that ethical claims express only non-cognitive states, the hybrid theorist can affirm both of them whole-heartedly. For that reason, hybrid theories are generally considered to be forms of expressivism.

The idea that a single claim might express two distinct mental states is not a new one. Philosophers of language have long thought, for instance, that slurs and pejoratives are capable of doing this. Consider the term “yankee” as used by people living in the American South. In most cases, among Southerners, to call someone a “yankee” is to express a certain sort of negative attitude toward the person. But importantly, the term “yankee” cannot apply to just anyone; rather, it applies only to people who are from the North. Accordingly, when native Southerner Roy says, “Did you hear? Molly’s dating a yankee!” he expresses both (a) a belief that Molly’s partner is from the North, and (b) a negative attitude toward Molly’s partner. It seems we need to suppose that Roy has and expresses both of these states—one cognitive, the other non-cognitive—in order to make adequate sense of the meaning of his claim. In much the same way, hybrid theorists in metaethics suggest that ethical claims can express both beliefs and attitudes. Indeed, these philosophers often model their theories on an analogy to the nature of slurs and pejoratives (see Hay 2013).

Even within the expressivist tradition, the language of hybridity may be new, but the basic idea is not. Recall from earlier that Hare believed ethical claims have two sorts of meaning: descriptive meaning and prescriptive meaning. To claim that something is “good,” he thinks, is both (a) to say or imply that it has some context-specific set of non-moral properties (the claim’s descriptive meaning), and (b) to commend the thing in virtue of these properties (the claim’s prescriptive meaning). This is not far off from a hybrid view according to which “good”-claims express both (a) a belief that something has some property or properties, and (b) a positive non-cognitive attitude toward the thing. Hare was apparently ahead of his time in this respect. The hybrid movement as it is now known is less than a decade old.

One of the earliest notable hybrid views is Ridge’s “ecumenical expressivism” (see Ridge 2006 and 2007). In its initial form, ecumenical expressivism is the view that ethical claims express two closely related mental states—one a belief, and the other a non-cognitive state like approval or disapproval. Furthermore, as an instance of semantic ideationalism, ecumenical expressivism adds that the literal meanings, or semantic contents, of ethical claims are to be understood solely in terms of these mental states. So, for example, the claim

(3)        Lying is wrong

expresses something like these two states: (a) disapproval of things that have a certain property F, and (b) a belief that lying has property F. Notably, the view allows for a kind of subjectivity to moral judgment, since the nature of property F will differ from person to person. A utilitarian, for instance, might disapprove of behavior that fails to maximize utility; a Kantian might instead disapprove of behavior that disrespects people’s autonomy; and so on and so forth. Furthermore, Ridge’s view is supposed to be able to solve the Frege-Geach Problem by conceiving of logical inference and validity in terms of the relationships that obtain among beliefs. Consider again:

(4)        If lying is wrong, then getting your little brother to lie is wrong.

According to ecumenical expressivism, complex ethical claims like (4) also express two states: (a) disapproval of things that have a certain property F, and (b) the complex belief that if lying has property F, then getting one’s little brother to lie has property F as well. Coupled with an account of logical validity understood in terms of consistency of beliefs, this looks like a promising way to satisfy Geach’s two challenges. (Ridge has since updated his view so that it is no longer a semantic theory, but rather a meta-semantic theory. Thus, rather than attempting to assign literal meanings to ethical claims, Ridge means only to explain that in virtue of which ethical claims have the meanings that they do. See Ridge 2014.)
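Schematically, and simplifying the earlier, semantic version of Ridge’s view (the notation is illustrative), the contents assigned to (3), (4), and (5) might be displayed as follows, where D(...) is disapproval and B(...) is belief:

(3*)       D(things that are F) + B(lying is F)

(4*)       D(things that are F) + B(if lying is F, then getting your little brother to lie is F)

(5*)       D(things that are F) + B(getting your little brother to lie is F)

Since the beliefs in (3*) and (4*) jointly entail the belief in (5*) by ordinary modus ponens, anyone who accepts the premises while rejecting the conclusion holds inconsistent beliefs; the validity of the inference is thus inherited from the logic of its belief components.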

The implicature-style views defended by Copp and Finlay also fall within the hybrid camp (Copp 2001, 2009; Finlay 2004, 2005). Coined by philosopher H. Paul Grice, the term “implicature” refers to a linguistic phenomenon in which a speaker means or implies one thing while saying something else. A popular example is that of the professor who writes, “Alex has good handwriting,” in a letter of recommendation. What the professor says is that Alex has good handwriting, but what the professor means or implies is that Alex is not an especially good student. So the claim “Alex has good handwriting” has both a literal content, that Alex has good handwriting, and an implicated content, that Alex is not an especially good student.

In the same way, Copp and Finlay suggest that ethical claims have both literal and implicated contents. Once again:

(3)        Lying is wrong

According to these implicature-style views, someone who sincerely utters (3) thereby communicates two things. First, she either expresses a belief, or asserts a proposition, to the effect that lying is wrong—this is the claim’s literal content. Second, she implies that she has some sort of non-cognitive attitude toward lying—this is the claim’s implicated content. It is in this way that implicature-style views are supposed to capture the virtues of both cognitivism and expressivism. Where Copp and Finlay disagree is over the matter of what it is in virtue of which the non-cognitive attitude is implicated. According to Copp, it is a matter of linguistic conventions that govern ethical discourse; whereas Finlay thinks it is a matter of the dynamics of ethical conversation. So Copp’s view is an instance of conventional implicature, while Finlay’s is an instance of conversational implicature.

There may be yet another way to “go hybrid” with one’s expressivism. Rather than hybridizing the mental state(s) expressed by ethical claims, one might instead hybridize the very notion of expression itself. This is the route taken by defenders of a view known as ‘ethical neo-expressivism’ (Bar-On and Chrisman 2009; Bar-On, Chrisman, and Sias 2014). Ethical neo-expressivism rests upon two very important distinctions. The first is a distinction between two different kinds of expression. When we say that agents express their mental states and that sentences express propositions, we refer not just to two different instances of expression, but, more importantly, to two different kinds of expression, which are often conflated by expressivists. To see how the two kinds of expression come apart, consider:

(18)      It is so great to see you!

(19)      I am so glad to see you!

Intuitively, these two sentences have different semantic contents. Setting aside complicated issues related to indexicality, sentence (18) expresses the proposition that it is so great to see you (the addressee), and sentence (19) expresses the proposition that I (the speaker) am so glad to see you (the addressee). However, these two different sentences might nonetheless function as vehicles for expressing the same mental state, that is, I might express my gladness or joy at seeing a friend by uttering either of them. Indeed, I might also do so by hugging my friend, or even just by smiling. Importantly, the neo-expressivist urges, it is not the speaker who expresses this or that proposition, but the sentences. People cannot express propositions, but sentences can, in virtue of being conventional representations of them. However, it is not the sentences that express gladness or joy, but the speaker. Sentences cannot express mental states; they are just strings of words. But people can certainly express their mental states by performing various acts, some of which involve the utterance of sentences. Call the relation between sentences and propositions semantic-expression, or s-expression; and call the relation between agents and their mental states action-expression, or a-expression.

According to neo-expressivists, most ethical expressivists, including most hybrid theorists, conflate these two senses of expression because they fail to adequately recognize a second distinction. Notice that terms like “claim”, “judgment”, and “statement” are ambiguous: they might refer either to an act or to the product of that act. So the term “ethical claim” might refer either to the act of making an ethical claim, or to the product of this act—which, presumably, is a sentence tokened either in thought or in speech. This distinction between ethical claims understood as acts and ethical claims understood as products maps nicely onto the earlier distinction between a- and s-expression. Understood as acts, ethical claims are different from non-ethical claims in that, when making an ethical claim, a speaker a-expresses some non-cognitive attitude. In this way, neo-expressivists can apparently affirm psychological non-cognitivism, and may also have the resources for “saving the differences” between ethical and non-ethical discourse. On the other hand, understood as products—that is, sentences containing ethical terms—ethical claims are just like non-ethical claims in s-expressing propositions, and not necessarily in the deflationary sense of proposition noted above. By allowing that ethical claims express propositions, the neo-expressivist may have all she needs in order to avoid the Continuity Problem.

Now, according to some, semantic ideationalism is essential to expressivism. Gibbard, for instance, writes, “The term ‘expressivism’ I mean to cover any account of meanings that follow this indirect path: to explain the meaning of a term, explain what states of mind the term can be used to express” (2003, p.7). However, ethical neo-expressivism, as we have just seen, rejects semantic ideationalism in favor of the more traditional propositional approach to meaning. In light of this, one might legitimately wonder whether neo-expressivism ought to count as an expressivist view. But as Bar-On, Chrisman, and Sias (2014) argue, neo-expressivism is perfectly capable of accommodating both of the main motivations of non-cognitivism and expressivism described in Sections 1 and 3—that is, avoiding a commitment to “spooky,” irreducibly normative properties, and accounting for the close connection between sincere ethical claims and motivation.  Besides, as we saw earlier, it looks like the expressivist’s commitment to semantic ideationalism is what got her into trouble with the Continuity Problem in the first place. So even if neo-expressivism represents something of a departure from mainstream expressivist thought, it may nonetheless be a departure worth considering.

c. Recent Work in Empirical Moral Psychology

Expressivists have long recognized that it is possible to make an ethical claim without being in whatever is supposed to be the corresponding non-cognitive mental state. It is possible, for instance, to utter

(3)        Lying is wrong

without, at the same time, disapproving of lying. Maybe the speaker is just reciting a line from a play; or maybe the speaker suffers from a psychological disorder that renders him incapable of ever being in the relevant non-cognitive state, and he is just repeating something that he has heard others say. These are surely possibilities, and expressivists have at times had different things to say about them, and other cases like them. Either way, though, expressivists generally assume that ethical claims are nonetheless tied to non-cognitive states in a way that justifies us in thinking that a speaker of an ethical claim, if she is being sincere, ought to be motivated to act accordingly. This is one of the two main motivations that attract people to theories in the expressivist tradition.

The assumption that sincere ethical claims in ordinary cases are accompanied by non-cognitive states is presumably one that has empirical implications. If it is true, for instance, one might expect to find activity in regions of the brain associated with such states when people sincerely make ethical claims. Indeed, this is precisely what researchers in empirical moral psychology have found throughout various studies conducted over the past few decades. From brain scans to behavioral experiments, tests of skin conductance to moral judgment surveys given in disgusting environments, study after study seems to confirm a view that is sometimes called “psychological sentimentalism”—that is, the view that people are prompted to make the ethical claims that they make primarily by their emotional responses to things.

Now, to be sure, the link posited by psychological sentimentalism is a causal one—our emotions cause us to make certain ethical claims—and that is importantly different from the conceptual link that expressivists generally assume exists between non-cognitive states and ethical claims. But expressivists may nonetheless benefit from exploring how recent work in empirical moral psychology can be used to support parts of their view—for example, how it is that the conceptual link is supposed to have come about. If nothing else, expressivists may find significant empirical support for the view, shared by everyone in the tradition since Ayer, that ethical claims are expressions of characteristically non-cognitive states of mind.

6. References and Further Reading

  • Austin, J. L. (1970). “Other Minds.” In J. O. Urmson and G. J. Warnock (eds.), Philosophical Papers. Second Edition. Oxford: Clarendon Press.
  • Ayer, A. J. (1946/1952). Language, Truth, and Logic. New York: Dover.
  • Barker, S. (2006). “Truth and the Expressing in Expressivism.” In Horgan, T. and Timmons, M. (eds.). Metaethics after Moore. Oxford: Clarendon Press.
  • Bar-On, D. and M. Chrisman (2009). “Ethical Neo-Expressivism.” In R. Shafer-Landau (ed.). Oxford Studies in Metaethics, Vol. 4. Oxford: Oxford University Press.
  • Bar-On, D., M. Chrisman, and J. Sias (2014). “(How) Is Ethical Neo-Expressivism a Hybrid View.” In M. Ridge and G. Fletcher (eds.), Having It Both Ways: Hybrid Theories and Modern Metaethics. Oxford: Oxford University Press.
  • Björnsson, G. and T. McPherson (2014). “Moral Attitudes for Non-Cognitivists: Solving the Specification Problem.” Mind.
  • Blackburn, S. (1984). Spreading the Word. Oxford: Clarendon Press.
  • Blackburn, S. (1988a). “Attitudes and Contents.” Ethics 98: 501-17.
  • Blackburn, S. (1988b). “Supervenience Revisited.” In G. Sayre-McCord (ed.), Essays on Moral Realism. Ithaca: Cornell University Press.
  • Blackburn, S. (1998). Ruling Passions. Oxford: Clarendon Press.
  • Boisvert, D. (2008). “Expressive-Assertivism.” Pacific Philosophical Quarterly 89(2): 169-203.
  • Boyd, R. (1988). “How to Be a Moral Realist.” In G. Sayre-McCord (ed.), Essays on Moral Realism. Ithaca: Cornell University Press.
  • Brink, D. (1989). Moral Realism and the Foundations of Ethics. Cambridge: Cambridge University Press.
  • Chrisman, M. (2008). “Expressivism, Inferentialism, and Saving the Debate.” Philosophy and Phenomenological Research 77: 334-358.
  • Chrisman, M. (2012). “Epistemic Expressivism.” Philosophy Compass 7(2): 118-126.
  • Copp, D. (2001). “Realist-Expressivism: A Neglected Option for Moral Realism.” Social Philosophy and Policy 18(2): 1-43.
  • Copp, D. (2009). “Realist-Expressivism and Conventional Implicature.” In Shafer-Landau, R. (ed.). Oxford Studies in Metaethics, Vol. 4. Oxford: Oxford University Press.
  • Coventry, A. (2006). Hume’s Theory of Causation: A Quasi-Realist Interpretation. London: Continuum.
  • Divers, J. and A. Miller (1994). “Why Expressivists About Value Should Not Love Minimalism About Truth.” Analysis 54: 12-19.
  • Dreier, J. (2004). “Meta-Ethics and the Problem of Creeping Minimalism.” Philosophical Perspectives 18: 23-44.
  • Dreier, J. (2009). “Relativism (and Expressivism) and the Problem of Disagreement.” Philosophical Perspectives 23: 79-110.
  • Finlay, S. (2004). “The Conversational Practicality of Value Judgment.” The Journal of Ethics 8: 205-223.
  • Finlay, S. (2005). “Value and Implicature.” Philosophers’ Imprint 5: 1-20.
  • Geach, P. (1965). “Assertion.” Philosophical Review 74: 449-465.
  • Gert, J. (2002). “Expressivism and Language Learning.” Ethics 112: 292-314.
  • Gibbard, A. (1990). Wise Choices, Apt Feelings. Cambridge, MA: Harvard University Press.
  • Gibbard, A. (2003). Thinking How to Live. Cambridge, MA: Harvard University Press.
  • Greene, J. D. (2008). “The Secret Joke of Kant’s Soul.” In Walter Sinnott-Armstrong (ed.), Moral Psychology, Vol. 3: The Neuroscience of Morality: Emotion, Brain Disorders, and Development. Cambridge, MA: MIT Press, pp. 35-79.
  • Greene, J. D. and J. Haidt (2002). “How (and Where) Does Moral Judgment Work?” Trends in Cognitive Sciences 6: 517-523.
  • Haidt, J. (2001). “The Emotional Dog and Its Rational Tail: A Social Intuitionist Approach to Moral Judgment.” Psychological Review 108(4): 814-834.
  • Hare, R. M. (1952). The Language of Morals. Oxford: Oxford University Press.
  • Hare, R. M. (1970). “Meaning and Speech Acts.” The Philosophical Review 79: 3-24.
  • Hay, Ryan. (2013). “Hybrid Expressivism and the Analogy between Pejoratives and Moral Language.” European Journal of Philosophy 21(3): 450-474.
  • Horwich, P. (1998). Truth. Second Edition. Oxford: Blackwell.
  • Jackson, F. (1998). From Metaphysics to Ethics. Oxford: Clarendon Press.
  • Jackson, F. and P. Pettit (1995). “Moral Functionalism and Moral Motivation.” Philosophical Quarterly 45: 20-40.
  • Kauppinen, A. (2010). “What Makes a Sentiment Moral?” In R. Shafer-Landau (ed.), Oxford Studies in Metaethics, Vol. 5. Oxford: Oxford University Press.
  • Köhler, S. (2013). “Do Expressivists Have an Attitude Problem?” Ethics 123(3): 479-507.
  • Lewis, D. (1970). “How to Define Theoretical Terms.” Journal of Philosophy 67: 427-446.
  • Lewis, D. (1972). “Psychophysical and Theoretical Identifications.” Australasian Journal of Philosophy 50: 249-258.
  • Mackie, J. L. (1977). Ethics: Inventing Right and Wrong. London: Penguin.
  • Merli, D. (2008). “Expressivism and the Limits of Moral Disagreement.” Journal of Ethics 12: 25-55.
  • Miller, A. (2013). Contemporary Metaethics: An Introduction. Second Edition. Cambridge: Polity.
  • Moore, G. E. (1903). Principia Ethica. New York: Cambridge University Press.
  • Nichols, S. (2004). Sentimental Rules: On the Natural Foundations of Moral Judgment. New York: Oxford University Press.
  • Ogden, C. K. and I. A. Richards (1923). The Meaning of Meaning. New York: Harcourt Brace & Jovanovich.
  • Price, H. (2011). “Expressivism for Two Voices.” In J. Knowles and H. Rydenfelt (eds.). Pragmatism, Science, and Naturalism. Peter Lang.
  • Prinz, J. (2006). “The Emotional Basis of Moral Judgments.” Philosophical Explorations 9(1): 29-43.
  • Ramsey, F. P. (1927). “Facts and Propositions.” Proceedings of the Aristotelian Society 7 (Supplementary): 153-170.
  • Ridge, M. (2006). “Ecumenical Expressivism: Finessing Frege.” Ethics 116: 302-336.
  • Ridge, M. (2007). “Ecumenical Expressivism: The Best of Both Worlds?” In Shafer-Landau, R. (ed.). Oxford Studies in Metaethics, Vol. 2. Oxford: Oxford University Press.
  • Ridge, M. (2014). Impassioned Belief. Oxford: Oxford University Press.
  • Ridge, M. and G. Fletcher, eds. (2014). Having It Both Ways: Hybrid Theories and Modern Metaethics. Oxford: Oxford University Press.
  • Schroeder, M. (2008a). Being For: Evaluating the Semantic Program of Expressivism. Oxford: Oxford University Press.
  • Schroeder, M. (2008b). “Expression for Expressivists.” Philosophy and Phenomenological Research 76(1): 86-116.
  • Schroeder, M. (2008c). “How Expressivists Can and Should Solve Their Problem with Negation.” Noûs 42(4): 573-599.
  • Schroeder, M. (2008d). “What is the Frege-Geach Problem?” Philosophy Compass 3(4): 703-720.
  • Schroeder, M. (2009). “Hybrid Expressivism: Virtues and Vices.” Ethics 119(2): 257-309.
  • Searle, J. (1962). “Meaning and Speech Acts.” Philosophical Review 71: 423-432.
  • Searle, J. (1969). Speech Acts: An Essay in the Philosophy of Language. Cambridge: Cambridge University Press.
  • Smith, M. (1994a). The Moral Problem. Oxford: Blackwell.
  • Smith, M. (1994b). “Why Expressivists About Value Should Love Minimalism About Truth.” Analysis 54: 1-12.
  • Smith, M. (2001). “Some Not-Much-Discussed Problems for Non-Cognitivism in Ethics.” Ratio 14: 93-115.
  • Stevenson, C. L. (1937). “The Emotive Meaning of Ethical Terms.” Mind 46: 14-31.
  • Stevenson, C. L. (1944). Ethics and Language. New Haven: Yale University Press.
  • Strawson, P. F. (1949). “Truth.” Analysis 9: 83-97.
  • Thomson, J. (1996). “Moral Objectivity.” In G. Harman and J. Thomson, Moral Relativism and Moral Objectivity (Great Debates in Philosophy). Oxford: Blackwell.
  • Unwin, N. (1999). “Quasi-Realism, Negation and the Frege-Geach Problem.” Philosophical Quarterly 49: 337-352.
  • Unwin, N. (2001). “Norms and Negation: A Problem for Gibbard’s Logic.” Philosophical Quarterly 51: 60-75.
  • Yalcin, S. (2012). “Bayesian Expressivism.” Proceedings of the Aristotelian Society CXII(2): 123-160.

Author Information

James Sias
Email: siasj@dickinson.edu
Dickinson College
U. S. A.

Gottfried Leibniz: Philosophy of Mind

Gottfried Wilhelm Leibniz (1646-1716) was a true polymath: he made substantial contributions to a host of different fields such as mathematics, law, physics, theology, and most subfields of philosophy.  Within the philosophy of mind, his chief innovations include his rejection of the Cartesian doctrines that all mental states are conscious and that non-human animals lack souls as well as sensation.  Leibniz’s belief that non-rational animals have souls and feelings prompted him to reflect much more thoroughly than many of his predecessors on the mental capacities that distinguish human beings from lower animals.  Relatedly, the acknowledgment of unconscious mental representations and motivations enabled Leibniz to provide a far more sophisticated account of human psychology.  It also led Leibniz to hold that perception—rather than consciousness, as Cartesians assume—is the distinguishing mark of mentality.

The capacities that make human minds superior to animal souls, according to Leibniz, include not only their capacity for more elevated types of perceptions or mental representations, but also their capacity for more elevated types of appetitions or mental tendencies.  Self-consciousness and abstract thought are examples of perceptions that are exclusive to rational souls, while reasoning and the tendency to do what one judges to be best overall are examples of appetitions of which only rational souls are capable.  The mental capacity for acting freely is another feature that sets human beings apart from animals and it in fact presupposes the capacity for elevated kinds of perceptions as well as appetitions.

Another crucial contribution to the philosophy of mind is Leibniz’s frequently cited mill argument.  This argument is supposed to show, through a thought experiment that involves walking into a mill, that material things such as machines or brains cannot possibly have mental states.  Only immaterial things, that is, soul-like entities, are able to think or perceive.  If this argument succeeds, it shows not only that our minds must be immaterial or that we must have souls, but also that we will never be able to construct a computer that can truly think or perceive.

Finally, Leibniz’s doctrine of pre-established harmony also marks an important innovation in the history of the philosophy of mind.  Like occasionalists, Leibniz denies any genuine interaction between body and soul.  He agrees with them that the fact that my foot moves when I decide to move it, as well as the fact that I feel pain when my body gets injured, cannot be explained by a genuine causal influence of my soul on my body, or of my body on my soul.  Yet, unlike occasionalists, Leibniz also rejects the idea that God continually intervenes in order to produce the correspondence between my soul and my body.  That, Leibniz thinks, would be unworthy of God.  Instead, God has created my soul and my body in such a way that they naturally correspond to each other, without any interaction or divine intervention.  My foot moves when I decide to move it because this motion has been programmed into my body from the very beginning.  Likewise, I feel pain when my body is injured because this pain was programmed into my soul.  The harmony or correspondence between mental states and states of the body is therefore pre-established.

Table of Contents

  1. Leibnizian Minds and Mental States
    1. Perceptions
      1. Consciousness, Apperception, and Reflection
      2. Abstract Thought, Concepts, and Universal Truths
    2. Appetitions
  2. Freedom
  3. The Mill Argument
  4. The Relation between Mind and Body
  5. References and Further Reading
    1. Primary Sources in English Translation
    2. Secondary Sources

1. Leibnizian Minds and Mental States

Leibniz is a panpsychist: he believes that everything, including plants and inanimate objects, has a mind or something analogous to a mind.  More specifically, he holds that in all things there are simple, immaterial, mind-like substances that perceive the world around them.  Leibniz calls these mind-like substances ‘monads.’  While all monads have perceptions, however, only some of them are aware of what they perceive, that is, only some of them possess sensation or consciousness.  Even fewer monads are capable of self-consciousness and rational perceptions.  Leibniz typically refers to monads that are capable of sensation or consciousness as ‘souls,’ and to those that are also capable of self-consciousness and rational perceptions as ‘minds.’  The monads in plants, for instance, lack all sensation and consciousness and are hence neither souls nor minds; Leibniz sometimes calls this least perfect type of monad a ‘bare monad’ and compares the mental states of such monads to our states when we are in a stupor or a dreamless sleep.  Animals, on the other hand, can sense and be conscious, and thus possess souls (see Animal Minds).  God and the souls of human beings and angels, finally, are examples of minds because they are self-conscious and rational.  As a result, even though there are mind-like things everywhere for Leibniz, minds in the stricter sense are not ubiquitous.

All monads, even those that lack consciousness altogether, have two basic types of mental states: perceptions, that is, representations of the world around them, and appetitions, or tendencies to transition from one representation to another.  Hence, even though monads are similar to the minds or souls described by Descartes in some ways—after all, they are immaterial substances—consciousness is not an essential property of monads, while it is an essential property of Cartesian souls.  For Leibniz, then, the distinguishing mark of mentality is perception, rather than consciousness (see Simmons 2001).  In fact, even Leibnizian minds in the stricter sense, that is, monads capable of self-consciousness and reasoning, are quite different from the minds in Descartes’s system.  While Cartesian minds are conscious of all their mental states, Leibnizian minds are conscious only of a small portion of their states.  To us it may seem obvious that there is a host of unconscious states in our minds, but in the seventeenth century this was a radical and novel notion.  This profound departure from Cartesian psychology allows Leibniz to paint a much more nuanced picture of the human mind.

One crucial aspect of Leibniz’s panpsychism is that in addition to the rational monad that is the soul of a human being, there are non-rational, bare monads everywhere in the human being’s body.  Leibniz sometimes refers to the soul of a human being or animal as the central or dominant monad of the organism.  The bare monads that are in an animal’s body, accordingly, are subordinate to its dominant monad or soul.  Even plants, for Leibniz, have central or dominant monads, but because they lack sensation, these dominant monads cannot strictly speaking be called souls.  They are merely bare monads, like the monads that are subordinate to them.

The claim that there are mind-like things everywhere in nature—in our bodies, in plants, and even in inanimate objects—strikes many readers of Leibniz as ludicrous.  Yet, Leibniz thinks he has conclusive metaphysical arguments for this claim.  Very roughly, he holds that a complex, divisible thing such as a body can only be real if it is made up of parts that are real.  If the parts in turn have parts, those have to be real as well.  The problem is, Leibniz claims, that matter is infinitely divisible: we can never reach parts that do not themselves have parts.  Even if there were material atoms that we cannot actually divide, they must still be spatially extended, like all matter, and therefore have spatial parts.  If something is spatially extended, after all, we can at least in thought distinguish its left half from its right half, no matter how small it is.  As a result, Leibniz thinks, purely material things are not real.  The reality of complex wholes depends on the reality of their parts, but with purely material things, we never get to parts that are real since we never reach an end in this quest for reality.  Leibniz concludes that there must be something in nature that is not material and not divisible, and from which all things derive their reality.  These immaterial, indivisible things just are monads.  Because of the role they play, Leibniz sometimes describes them as “atoms of substance, that is, real unities absolutely destitute of parts, […] the first absolute principles of the composition of things, and, as it were, the final elements in the analysis of substantial things” (Ariew and Garber, p. 142.  For a more thorough description of monads, see Leibniz: Metaphysics, as well as the Monadology and the New System of Nature, both included in Ariew and Garber.)

a. Perceptions

As already seen, all monads have perceptions, that is, they represent the world around them.  Yet, not all perceptions—not even all the perceptions of minds—are conscious.  In fact, Leibniz holds that at any given time a mind has infinitely many perceptions, but is conscious only of a very small number of them.  Even souls and bare monads have an infinity of perceptions.  This is because Leibniz believes, for reasons that need not concern us here (but see Leibniz: Metaphysics), that each monad constantly perceives the entire universe.  For instance, even though I am not aware of it at all, my mind is currently representing every single grain of sand on Mars.  Even the monads in my little toe, as well as the monads in the apple I am about to eat, represent those grains of sand.

Leibniz often describes perceptions of things of which the subject is unaware and which are far removed from the subject’s body as ‘confused.’  He is fond of using the sound of the ocean as a metaphor for this kind of confusion: when I go to the beach, I do not hear the sound of each individual wave distinctly; instead, I hear a roaring sound from which I am unable to discern the sounds of the individual waves (see Principles of Nature and Grace, section 13, in Ariew and Garber, 1989).  None of these individual sounds stands out.  Leibniz claims that confused perceptions in monads are analogous to this confusion of sounds, except of course for the fact that monads do not have to be aware even of the confused whole.  To the extent that a perception does stand out from the rest, however, Leibniz calls it ‘distinct.’  This distinctness comes in degrees, and Leibniz claims that the central monads of organisms always perceive their own bodies more distinctly than they perceive other bodies.

Bare monads are not capable of very distinct perceptions; their perceptual states are always muddled and confused to a high degree.  Animal souls, on the other hand, can have much more distinct perceptions than bare monads.  This is in part because they possess sense organs, such as eyes, which allow them to bundle and condense information about their surroundings (see Principles of Nature and Grace, section 4).  The resulting perceptions are so distinct that the animals can remember them later, and Leibniz calls this kind of perception ‘sensation.’  The ability to remember prior perceptions is extremely useful because it enables animals to learn from experience.  For instance, a dog that remembers being beaten with a stick can learn to avoid sticks in the future (see Principles of Nature and Grace, section 5, in Ariew and Garber, 1989).  Sensations are also tied to pleasure and pain: when an animal distinctly perceives some imperfection in its body, such as a bruise, this perception just is a feeling of pain.  Similarly, when an animal perceives some perfection of its body, such as nourishment, this perception is pleasure.  Unlike Descartes, then, Leibniz believes that animals are capable of feeling pleasure and pain.

Consequently, souls differ from bare monads in part through the distinctness of their perceptions: unlike bare monads, souls can have perceptions that are distinct enough to give rise to memory and sensation, and they can feel pleasure and pain.  Rational souls, or minds, share these capacities.  Yet they are additionally capable of perceptions of an even higher level.  Unlike the souls of lower animals, they can reflect on their own mental states, think abstractly, and acquire knowledge of necessary truths.  For instance, they are capable of understanding mathematical concepts and proofs.  Moreover, they can think of themselves as substances and subjects: they have the ability to use and understand the word ‘I’ (see Monadology, section 30).  These kinds of perceptions, for Leibniz, are distinctively rational perceptions, and they are exclusive to minds or rational souls.

It is clear, then, that there are different types of perceptions: some are unconscious, some are conscious, and some constitute reflection or abstract thought.  What exactly distinguishes these types of perceptions, however, is a complicated question that warrants a more detailed investigation.

i. Consciousness, Apperception, and Reflection

Why are some perceptions conscious, while others are not?  In one text, Leibniz explains the difference as follows: “it is good to distinguish between perception, which is the internal state of the monad representing external things, and apperception, which is consciousness, or the reflective knowledge of this internal state, something not given to all souls, nor at all times to a given soul” (Principles of Nature and Grace, section 4).  This passage is interesting for several reasons: Leibniz not only equates consciousness with what he calls ‘apperception’ and states that only some monads possess it; he also seems to claim that conscious perceptions differ from other perceptions in virtue of having different types of things as their objects: while unconscious perceptions represent external things, apperception or consciousness has perceptions, that is, internal things, as its object.  Consciousness is therefore closely connected to reflection, as the term ‘reflective knowledge’ also makes clear.

The passage furthermore suggests that Leibniz understands consciousness in terms of higher-order mental states because it says that in order to be conscious of a perception, I must possess “reflective knowledge” of that perception.  One way of interpreting this statement is to understand these higher-order mental states as higher-order perceptions: in order to be conscious of a first-order perception, I must additionally possess a second-order perception of that first-order perception.  For example, in order to be conscious of the glass of water in front of me, I must not only perceive the glass of water, but I must also perceive my perception of the glass of water.  After all, in the passage under discussion, Leibniz defines ‘consciousness’ or ‘apperception’ as the reflective knowledge of a perception.  Such higher-order theories of consciousness are still endorsed by some philosophers of mind today (see Consciousness).  For an alternative interpretation of Leibniz’s theory of consciousness, however, see Jorgensen 2009, 2011a, and 2011b.

There is excellent textual evidence that according to Leibniz, consciousness or apperception is not limited to minds, but is instead shared by animal souls.  One passage in which Leibniz explicitly ascribes apperception to animals is from the New Essays: “beasts have no understanding … although they have the faculty for apperceiving the more conspicuous and outstanding impressions—as when a wild boar apperceives someone who is shouting at it” (p. 173).  Moreover, Leibniz sometimes claims that sensation involves apperception (e.g. New Essays p. 161; p. 188), and since animals are clearly capable of sensation, they must thus possess some form of apperception.  Hence, it seems that Leibniz ascribes apperception to animals, which in turn he elsewhere identifies with consciousness.

Yet, the textual evidence for animal consciousness is unfortunately anything but neat because in the New Essays—that is, in the very same text—Leibniz also suggests that there is an important difference between animals and human beings somewhere in this neighborhood.  In several passages, he says that any creature with consciousness has a moral or personal identity, which in turn is something he grants only to minds.  He states, for instance, that “consciousness or the sense of I proves moral or personal identity” (New Essays, p. 236).  Hence, it seems clear that for Leibniz there is something in the vicinity of consciousness that animals lack and that minds possess, and which is crucial for morality.

A promising solution to this interpretive puzzle is the following: what animals lack is not consciousness generally, but only a particular type of consciousness.  More specifically, while they are capable of consciously perceiving external things, they lack awareness, or at least a particular type of awareness, of the self.  In the Monadology, for instance, Leibniz argues that knowledge of necessary truths distinguishes us from animals and that through this knowledge “we rise to reflexive acts, which enable us to think of that which is called ‘I’ and enable us to consider that this or that is in us” (sections 29-30).  Similarly, he writes in the Principles of Nature and Grace that “minds … are capable of performing reflective acts, and capable of considering what is called ‘I’, substance, soul, mind—in brief, immaterial things and immaterial truths” (section 5).  Self-knowledge, or self-consciousness, then, appears to be exclusive to rational souls.  Leibniz moreover connects this consciousness of the self to personhood and moral responsibility in several texts, such as for instance in the Theodicy: “In saying that the soul of man is immortal one implies the subsistence of what makes the identity of the person, something which retains its moral qualities, conserving the consciousness, or the reflective inward feeling, of what it is: thus it is rendered susceptible to chastisement or reward” (section 89).

Based on these passages, it seems that one crucial cognitive difference between human beings and animals is that even though animals possess the kind of apperception that is involved in sensation and in an acute awareness of external objects, they lack a certain type of apperception or consciousness, namely reflective self-knowledge or self-consciousness.  Especially because of the moral implications of this kind of consciousness that Leibniz posits, this difference is clearly an extremely important one.  According to these texts, then, it is not consciousness or apperception tout court that distinguishes minds from animal souls, but rather a particular kind of apperception.  What animals are incapable of, according to Leibniz, is self-knowledge or self-awareness, that is, an awareness not only of their perceptions, but also of the self that is having those perceptions.

Because Leibniz associates consciousness so closely with reflection, one might wonder whether the fact that animals are capable of conscious perceptions implies that they are also capable of reflection.  This is another difficult interpretive question because there appears to be evidence both for a positive and for a negative answer.  Reflection, according to Leibniz, is “nothing but attention to what is within us” (New Essays, p. 51).  Moreover, as already seen, he argues that reflective acts enable us “to think of that which is called ‘I’ and … to consider that this or that is in us” (Monadology, section 30).  Leibniz does not appear to ascribe reflection to animals explicitly, and in fact, there are several texts in which he says in no uncertain terms that they lack reflection altogether.  He states for instance that “the soul of a beast has no more reflection than an atom” (Loemker, p. 588).  Likewise, he defines ‘intellection’ as “a distinct perception combined with a faculty of reflection, which the beasts do not have” (New Essays, p. 173) and explains that “just as there are two sorts of perception, one simple, the other accompanied by reflections that give rise to knowledge and reasoning, so there are two kinds of souls, namely ordinary souls, whose perception is without reflection, and rational souls, which think about what they do” (Strickland, p. 84).

On the other hand, as seen, Leibniz does ascribe apperception or consciousness to animals, and consciousness in turn appears to involve higher-order mental states.  This suggests that Leibnizian animals must perceive or know their own perceptions when they are conscious of something, and that in turn seems to imply that they can reflect after all.  A closely related reason for ascribing reflection to animals is that Leibniz sometimes explicitly associates reflection with apperception or consciousness.  In a passage already quoted above, for instance, Leibniz defines ‘consciousness’ as the reflective knowledge of a first-order perception.  Hence, if animals possess consciousness it seems that they must also have some type of reflection.

We are consequently faced with an interpretive puzzle: even though there is strong indirect evidence that Leibniz attributes reflection to animals, there is also direct evidence against it.  There are at least two ways of solving this puzzle.  In order to make sense of passages in which Leibniz restricts reflection to rational souls, one can either deny that perceiving one’s internal states is sufficient for reflection, or one can distinguish between different types of reflection, in such a way that the most demanding type of reflection is limited to minds.  One good way to deny that perception of one’s internal states is sufficient for reflection is to point out that Leibniz defines reflection as “attention to what is within us” (New Essays, p. 51), rather than as ‘perception of what is within us.’  Attention to internal states, arguably, is more demanding than mere perception of these states, and animals may well be incapable of the former.  Attention might be a particularly distinct perception, for instance.  Alternatively, one can argue that reflection requires a self-concept, or self-knowledge, which also goes beyond the mere perception of internal states and may be inaccessible to animals.  Perceiving my internal states, on that interpretation, amounts to reflection only if I also possess knowledge of the self that is having those states.  Instead of denying that perceiving one’s own states is sufficient for reflection, one can also distinguish different types of reflection and claim that while the mere perception of one’s internal states is a type of reflection, there is a more demanding type of reflection that requires attention, a self-concept, or something similar.  Yet, the difference between those two responses appears to be merely terminological.  Based on the textual evidence discussed above, it is clear that either reflection generally, or at least a particular type of reflection, must be exclusive to minds.

ii. Abstract Thought, Concepts, and Universal Truths

So far, we have seen that one cognitive capacity that elevates minds above animal souls is self-consciousness, which is a particular type of reflection.  Before turning to appetitions, we should briefly investigate three additional, mutually related, cognitive abilities that only minds possess, namely the abilities to abstract, to form or possess concepts, and to know general truths.  In what may well be Leibniz’s most intriguing discussion of abstraction, he says that some non-human animals “apparently recognize whiteness, and observe it in chalk as in snow; but it does not amount to abstraction, which requires attention to the general apart from the particular, and consequently involves knowledge of universal truths which beasts do not possess” (New Essays, p. 142).  In this passage, we learn not only that beasts are incapable of abstraction, but also that abstraction involves “attention to the general apart from the particular” as well as “knowledge of universal truths.”  Hence, abstraction for Leibniz seems to consist in separating out one part of a complex idea and focusing on it exclusively.  Instead of thinking of different white things, one must think of whiteness in general, abstracting away from the particular instances of whiteness.  In order to think about whiteness in the abstract, then, it is not enough to perceive different white things as similar to one another.

Yet, it might still seem mysterious how exactly animals are able to observe whiteness in different objects if they are unable to abstract.  One fact that makes this less mysterious, however, is that, on Leibniz’s view, while animals are unable to pay attention to whiteness in general, the idea of whiteness may nevertheless play a role in their recognition of whiteness.  As Leibniz explains in the New Essays, even though human minds become aware of complex ideas and particular truths first, and rather easily, and must expend considerable effort to subsequently become aware of simple ideas and general principles, the order of nature is the reverse:

The truths that we start by being aware of are indeed particular ones, just as we start with the coarsest and most composite ideas.  But that doesn’t alter the fact that in the order of nature the simplest comes first, and that the reasons for particular truths rest wholly on the more general ones of which they are mere instances. … The mind relies on these principles constantly; but it does not find it so easy to sort them out and to command a distinct view of each of them separately, for that requires great attention to what it is doing. (p. 83f.)

Here, Leibniz says that minds can rely on general principles, or abstract ideas, without being aware of them, and without having distinct perceptions of them separately.  This might help us to explain how animals can observe whiteness in different white objects without being able to abstract: the simple idea of whiteness might play a role in their cognition, even though they are not aware of it, and are unable to pay attention to this idea.

The passage just quoted is interesting for another reason: It shows that abstracting and achieving knowledge of general truths have a lot in common and presuppose the capacity to reflect.  It takes a special effort of mind to become aware of abstract ideas and general truths, that is, to separate these out from complex ideas and particular truths.  It is this special effort, it seems, of which animals are incapable; while they can at times achieve relatively distinct perceptions of complex or particular things, they lack the ability to pay attention, or at least sufficient attention, to their internal states.  At least part of the reason for their inability to abstract and to know general truths, then, appears to be their inability, or at least very limited ability, to reflect.

Abstraction also seems closely related to the possession or formation of concepts: arguably, what a mind gains when abstracting the idea of whiteness from the complex ideas of particular white things is what we would call a concept of whiteness.  Hence, since animals cannot abstract, they do not possess such concepts.  They may nevertheless, as suggested above, have confused ideas such as a confused idea of whiteness that allows them to recognize whiteness in different white things, without enabling them to pay attention to whiteness in the abstract.

An interesting question that arises in this context is the question whether having an idea of the future or thinking about a future state requires abstraction.  One reason to think so is that, plausibly, in order to think about the future, for instance about future pleasures or pains, one needs to abstract from the present pleasures or pains that one can directly experience, or from past pleasures and pains that one remembers.  After all, just as one can only attain the concept of whiteness by abstracting from other properties of the particular white things one has experienced, so, arguably, one can only acquire the idea of future pleasures through abstraction from particular present pleasures.  It may be for this reason that Leibniz sometimes notes that animals have “neither foresight nor anxiety for the future” (Huggard, p. 414).  Apparently, he does not consider animals capable of having an idea of the future or of future states.

Leibniz thinks that in addition to sensible concepts such as whiteness, we also have concepts that are not derived from the senses, that is, we possess intellectual concepts.  The latter, it seems, set us apart even farther from animals because we attain them through reflective self-awareness, of which animals, as seen above, are not capable.  Leibniz says, for instance, that “being is innate in us—the knowledge of being is comprised in the knowledge that we have of ourselves.  Something like this holds of other general notions” (New Essays, p. 102).  Similarly, he states a few pages later that “reflection enables us to find the idea of substance within ourselves, who are substances” (New Essays, p. 105).  Many similar statements can be found elsewhere.  The intellectual concepts that we can discover in our souls, according to Leibniz, include not only being and substance, but also unity, similarity, sameness, pleasure, cause, perception, action, duration, doubting, willing, and reasoning, to name only a few.  In order to derive these concepts from our reflective self-awareness, we must apparently engage in abstraction: I am distinctly aware of myself as an agent, a substance, and a perceiver, for instance, and from this awareness I can abstract the ideas of action, substance, and perception in general.  This means that animals are inferior to us among other things in the following two ways: they cannot have distinct self-awareness, and they cannot abstract.  They would need both of these capacities in order to form intellectual concepts, and they would need the latter—that is, abstraction—in order to form sensible concepts.

Intellectual concepts are not the only things that minds can find in themselves: in addition, they are also able to discover eternal or general truths there, such as the axioms or principles of logic, metaphysics, ethics, and natural theology.  Like the intellectual concepts just mentioned, these general truths or principles cannot be derived from the senses and can thus be classified as innate ideas.  Leibniz says, for instance,

Above all, we find [in this I and in the understanding] the force of the conclusions of reasoning, which are part of what is called the natural light. … It is also by this natural light that the axioms of mathematics are recognized. … [I]t is generally true that we know [necessary truths] only by this natural light, and not at all by the experiences of the senses. (Ariew and Garber, p. 189)

Axioms and general principles, according to this passage, must come from the mind itself and cannot be acquired through sense experience.  Yet, just as in the case of intellectual concepts, it is not easy for us to discover such general truths or principles in ourselves; instead, it takes effort or special attention.  It again appears to require the kind of attention to what is within us of which animals are not capable.  Because they lack this type of reflection, animals are “governed purely by examples from the senses” and “consequently can never arrive at necessary and general truths” (Strickland, p. 84).

b. Appetitions

Monads possess not only perceptions, or representations of the world they inhabit, but also appetitions.  These appetitions are the tendencies or inclinations of these monads to act, that is, to transition from one mental state to another.  The most familiar examples of appetitions are conscious desires, such as my desire to have a drink of water.  Having this desire means that I have some tendency to drink from the glass of water in front of me.  If the desire is strong enough, and if there are no contrary tendencies or desires in my mind that are stronger—for instance, the desire to win the bet that I can refrain from drinking water for one hour—I will attempt to drink the water.  This desire for water is one example of a Leibnizian appetition.  Yet, just as in the case of perceptions, only a very small portion of appetitions is conscious.  We are unaware of most of the tendencies that lead to changes in our perceptions.  For instance, I am aware neither of perceiving my hair growing, nor of my tendencies to have those perceptions.  Moreover, as in the case of perceptions, there are an infinite number of appetitions in any monad at any given time.  This is because, as seen, each monad represents the entire universe.  As a result, each monad constantly transitions from one infinitely complex perceptual state to another, reflecting all changes that take place in the universe.  The tendency that leads to a monad’s transition from one of these infinitely complex perceptual states to another is therefore also infinitely complex, or composed of infinitely many smaller appetitions.

The three types of monads—bare monads, souls, and minds—differ not only with respect to their perceptual or cognitive capacities, but also with respect to their appetitive capacities.  In fact, there are good reasons to think that three different types of appetitions correspond to the three types of perceptions mentioned above, that is, to perception, sensation, and rational perception.  After all, Leibniz distinguishes between appetitions of which we can be aware and those of which we cannot be aware, which he sometimes also calls ‘insensible appetitions’ or ‘insensible inclinations.’  He appears to further divide the type of which we can be aware into rational and non-rational appetitions.  This threefold division is made explicit in a passage from the New Essays:

There are insensible inclinations of which we are not aware.  There are sensible ones: we are acquainted with their existence and their objects, but have no sense of how they are constituted. … Finally there are distinct inclinations which reason gives us: we have a sense both of their strength and of their constitution. (p. 194)

According to this passage, then, Leibniz acknowledges the following three types of appetitions: (a) insensible or unconscious appetitions, (b) sensible or conscious appetitions, and (c) distinct or rational appetitions.

Even though Leibniz does not say so explicitly, he furthermore believes that bare monads have only unconscious appetitions, that animal souls additionally have conscious appetitions, and that only minds have distinct or rational appetitions.  Unconscious appetitions are tendencies such as the one that leads to my perception of my hair growing, or the one that prompts me unexpectedly to perceive the sound of my alarm in the morning.  All appetitions in bare monads are of this type; they are not aware of any of their tendencies.  An example of a sensible appetition, on the other hand, is an appetition for pleasure.  My desire for a piece of chocolate, for instance, is such an appetition: I am aware that I have this desire and I know what the object of the desire is, but I do not fully understand why I have it.  Animals are capable of this kind of appetition; in fact, many of their actions are motivated by their appetitions for pleasure.  Finally, an example of a rational appetition is the appetition for something that my intellect has judged to be the best course of action.  Leibniz appears to identify the capacity for this kind of appetition with the will, which, as we will see below, plays a crucial role in Leibniz’s theory of freedom.  What is distinctive of this kind of appetition is that whenever we possess it, we are not only aware of it and of its object, but also understand why we have it.  For instance, if I judge that I ought to call my mother and consequently desire to call her, Leibniz thinks, I am aware of the thought process that led me to make this judgment, and hence of the origins of my desire.

Another type of rational appetition is the type of appetition involved in reasoning.  As seen, Leibniz thinks that animals, because they can remember prior perceptions, are able to learn from experience, like the dog that learns to run away from sticks.  This sort of behavior, which involves a kind of inductive inference (see Deductive and Inductive Arguments), can be called a “shadow of reasoning,” Leibniz tells us (New Essays, p. 50).  Yet, animals are incapable of true—that is, presumably, deductive—reasoning, which, Leibniz tells us, “depends on necessary or eternal truths, such as those of logic, numbers, and geometry, which bring about an indubitable connection of ideas and infallible consequences” (Principles of Nature and Grace, section 5, in Ariew and Garber, 1989).  Only minds can reason in this stricter sense.

Some interpreters think that reasoning consists simply in very distinct perception.  Yet that cannot be the whole story.  First of all, reasoning must involve a special type of perception that differs from the perceptions of lower animals in kind, rather than merely in degree, namely abstract thought and the perception of eternal truths.  This kind of perception is not just more distinct; it has entirely different objects than the perceptions of non-rational souls, as we saw above.  Moreover, it seems more accurate to describe reasoning as a special kind of appetition or tendency than as a special kind of perception.  This is because reasoning is not just one perception, but rather a series of perceptions.  Leibniz for instance calls it “a chain of truths” (New Essays, p. 199) and defines it as “the linking together of truths” (Huggard, p. 73).  Thus, reasoning is not the same as perceiving a certain type of object, nor as perceiving an object in a particular fashion.  Rather, it consists mainly in special types of transitions between perceptions and therefore, according to Leibniz’s account of how monads transition from perception to perception, in appetitions for these transitions.  What a mind needs in order to be rational, therefore, are appetitions that one could call the principles of reasoning.  These appetitions or principles allow minds to transition, for instance, from the premises of an argument to its conclusion.  In order to conclude ‘Socrates is mortal’ from ‘All men are mortal’ and ‘Socrates is a man,’ for example, I not only need to perceive the premises distinctly, but I also need an appetition for transitioning from premises of a particular form to conclusions of a particular form.
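
Rendered schematically, in modern logical notation that Leibniz himself does not use, the transition in question has the form:

∀x (Man(x) → Mortal(x)),  Man(socrates)  ∴  Mortal(socrates)

On this reading, possessing the relevant rational appetition is having a standing disposition to pass from distinct perceptions of premises of the first two forms to a perception of the corresponding conclusion.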

Leibniz states in several texts that our reasonings are based on two fundamental principles: the Principle of Contradiction and the Principle of Sufficient Reason.  Human beings also have access to several additional innate truths and principles, for instance those of logic, mathematics, ethics, and theology.  In virtue of these principles we have a priori knowledge of necessary connections between things, while animals can only have empirical knowledge of contingent, or merely apparent, connections.  The perceptions of animals, then, are not governed by the principles on which our reasonings are based; the closest an animal can come to reasoning is, as mentioned, engaging in empirical inference or induction, which is based not on principles of reasoning, but merely on the recognition and memory of regularities in previous experience.  This confirms that reasoning is a type of appetition: using, or being able to use, principles of reasoning cannot just be a matter of perceiving the world more distinctly.  In fact, these principles are not something that we acquire or derive from perceptions.  Instead, at least the most basic ones are innate dispositions for making certain kinds of transitions.
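
The two fundamental principles can be stated schematically as follows (a modern formulation, not Leibniz’s own wording):

Principle of Contradiction: ¬(p ∧ ¬p); no proposition is true together with its negation.
Principle of Sufficient Reason: for every truth p, there is a sufficient reason why p holds rather than why it does not.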

In connection with reasoning, it is important to note that even though Leibniz sometimes uses the term ‘thought’ for perceptions generally, he makes it clear in some texts that it strictly speaking belongs exclusively to minds because it is “perception joined with reason” (Strickland p. 66; see also New Essays, p. 210).  This means that the ability to think in this sense, just like reasoning, is also something that is exclusive to minds, that is, something that distinguishes minds from animal souls.  Non-rational souls neither reason nor think, strictly speaking; they do however have perceptions.

The distinctive cognitive and appetitive capacities of the three types of monads can be summarized as follows:

  • Bare monads: unconscious perceptions only; insensible (unconscious) appetitions only.
  • Souls: additionally, sensation, that is, conscious perceptions distinct enough to be remembered; additionally, sensible (conscious) appetitions.
  • Minds (rational souls): additionally, self-consciousness, abstract thought, and knowledge of necessary truths; additionally, distinct or rational appetitions, including the will and the appetitions involved in reasoning.

2. Freedom

One final capacity that sets human beings apart from non-rational animals is the capacity for acting freely.  This is mainly because Leibniz closely connects free agency with rationality: acting freely requires acting in accordance with one’s rational assessment of which course of action is best.  Hence, acting freely involves rational perceptions as well as rational appetitions.  It requires both knowledge of, or rational judgments about, the good, as well as the tendency to act in accordance with these judgments.  For Leibniz, the capacity for rational judgments is called ‘intellect,’ and the tendency to pursue what the intellect judges to be best is called ‘will.’  Non-human animals, because they do not possess intellects and wills, or the requisite type of perceptions and appetitions, lack freedom.  This also means, however, that most human actions are not free, because we only sometimes reason about the best course of action and act voluntarily, on the basis of our rational judgments.  Leibniz in fact stresses that in three quarters of their actions, human beings act just like animals, that is, without making use of their rationality (see Principles of Nature and Grace, section 5, in Ariew and Garber, 1989).

In addition to rationality, Leibniz claims, free actions must be self-determined and contingent (see e.g. Theodicy, section 288).  An action is self-determined—or spontaneous, as Leibniz often calls it—when its source is in the agent, rather than in another agent or some other external entity.  While all actions of monads are spontaneous in a general sense since, as we will see in section four, Leibniz denies all interaction among created substances, he may have a more demanding notion of spontaneity in mind when he calls it a requirement for freedom.  After all, when an agent acts on the basis of her rational judgment, she is not even subject to the kind of apparent influence of her body or of other creatures that is present, for instance, when someone pinches her and she feels pain.

In order to be contingent, on the other hand, the action cannot be the result of compulsion or necessitation.  This, again, is generally true for all actions of monads because Leibniz holds that all changes in the states of a creature are contingent.  Yet, there may again be an especially demanding sense in which free actions are contingent for Leibniz.  He often says that when a rational agent does something because she believes it to be best, the goodness she perceives, or her motives for acting, merely incline her towards action without necessitating action (see e.g. Huggard, p. 419; Fifth Letter to Clarke, sections 8-9; Ariew and Garber, p. 195; New Essays, p. 175).  Hence, Leibniz may be attributing a particular kind of contingency to free actions.

Even though Leibniz holds that free actions must be contingent, that is, that they cannot be necessary, he grants that they can be determined.  In fact, Leibniz vehemently rejects the notion that a world with free agents must contain genuine indeterminacy.  Hence, Leibniz is what we today call a compatibilist about freedom and determinism (see Free Will).  He believes that all actions, whether they are free or not, are determined by the nature and the prior states of the agent.  What is special about free actions, then, is not that they are undetermined, but rather that they are determined, among other things, by rational perceptions of the good.  We always do what we are most strongly inclined to do, for Leibniz, and if we are most strongly inclined by our judgment about the best course of action, we pursue that course of action freely.  The ability to act contrary even to one’s best reasons or motives, Leibniz contends, is not required for freedom, nor would it be worth having.   As Leibniz puts it in the New Essays, “the freedom to will contrary to all the impressions which may come from the understanding … would destroy true liberty, and reason with it, and would bring us down below the beasts” (p. 180).  In fact, being determined by our rational understanding of the good, as we are in our free actions, makes us godlike, because according to Leibniz, God is similarly determined by what he judges to be best.  Nothing could be more perfect and more desirable than acting in this way.

3. The Mill Argument

In several of his writings, Leibniz argues that purely material things such as brains or machines cannot possibly think or perceive.  Hence, Leibniz contends that materialists like Thomas Hobbes are wrong to think that they can explain mentality in terms of the brain.  This argument is without question among Leibniz’s most influential contributions to the philosophy of mind.  It is relevant not only to the question whether human minds might be purely material, but also to the question whether artificial intelligence is possible.  Because Leibniz’s argument against perception in material objects often employs a thought experiment involving a mill, interpreters refer to it as ‘the mill argument.’  There is considerable disagreement among recent scholars about the correct interpretation of this argument (see References and Further Reading).  The present section sketches one plausible way of interpreting Leibniz’s mill argument.

The most famous version of Leibniz’s mill argument occurs in section 17 of the Monadology:

Moreover, we must confess that perception, and what depends on it, is inexplicable in terms of mechanical reasons, that is, through shapes and motions.  If we imagine that there is a machine whose structure makes it think, sense, and have perceptions, we could conceive it enlarged, keeping the same proportions, so that we could enter into it, as one enters into a mill.  Assuming that, when inspecting its interior, we will only find parts that push one another, and we will never find anything to explain a perception.  And so, we should seek perception in the simple substance and not in the composite or in the machine.

To understand this argument, it is important to recall that Leibniz, like many of his contemporaries, views all material things as infinitely divisible.  As already seen, he holds that there are no smallest or most fundamental material elements, and every material thing, no matter how small, has parts and is hence complex.  Even if there were physical atoms—against which Leibniz thinks he has conclusive metaphysical arguments—they would still have to be extended, like all matter, and we would hence be able to distinguish between an atom’s left half and its right half.  The only truly simple things that exist are monads, that is, unextended, immaterial, mind-like things.  Based on this understanding of material objects, Leibniz argues in the mill passage that only immaterial entities are capable of perception because it is impossible to explain perception mechanically, or in terms of material parts pushing one another.

Unfortunately, Leibniz does not say explicitly why exactly he thinks there cannot be a mechanical explanation of perception.  Yet it becomes clear in other passages that, for Leibniz, perceiving has to take place in a simple thing.  This assumption, in turn, straightforwardly implies that matter—which as seen is complex—is incapable of perception.  This, most likely, is what lies behind Leibniz’s mill argument.  But why does Leibniz claim that perception can only take place in simple things?  If he did not have good reasons for this claim, after all, it would not constitute a convincing starting point for his mill argument.

Leibniz’s reasoning appears to be the following.  Material things, such as mirrors or paintings, can represent complexity.  When I stand in front of a mirror, for instance, the mirror represents my body.  This is an example of the representation of one complex material thing in another complex material thing.  Yet, Leibniz argues, we do not call such a representation ‘perception’: the mirror does not “perceive” my body.  The reason this representation falls short of perception, Leibniz contends, is that it lacks the unity that is characteristic of perceptions: the top part of the mirror represents the top part of my body, and so on.  The representation of my body in the mirror is merely a collection of smaller representations, without any genuine unity.  When another person perceives my body, on the other hand, her representation of my body is a unified whole.  No physical thing can do better than the mirror in this respect: the only way material things can represent anything is through the arrangement or properties of their parts.  As a result, any such representation will be spread out over multiple parts of the representing material object and hence lack genuine unity.  It is arguably for this reason that Leibniz defines ‘perception’ as “the passing state which involves and represents a multitude in the unity or in the simple substance” (Monadology, section 14).

Leibniz’s mill argument, then, relies on a particular understanding of perception and of material objects.  Because all material objects are complex and because perceptions require unity, material objects cannot possibly perceive.  Any representation a machine, or a material object, could produce would lack the unity required for perception.  The mill example is supposed to illustrate this: even an extremely small machine, if it is purely material, works only in virtue of the arrangement of its parts.  Hence, it is always possible, at least in principle, to enlarge the machine.  When we imagine the machine thus enlarged, that is, when we imagine being able to distinguish the machine’s parts as we can distinguish the parts of a mill, we will realize that the machine cannot possibly have genuine perceptions.
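
The underlying inference can be summarized as follows; this is a reconstruction suggested by the passages above, not a formulation Leibniz gives in so many words:

  1. Perception is a state that represents a multitude in a genuine unity (Monadology, section 14).
  2. Only simple, partless things possess genuine unity.
  3. Every material thing, being extended, has parts.
  4. Therefore, no material thing is capable of perception.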

Yet the basic idea behind Leibniz’s mill argument can be appealing even to those of us who do not share Leibniz’s assumptions about perception and material objects.  In fact, it appears to be a more general version of what is today called “the hard problem of consciousness,” that is, the problem of explaining how something physical could explain, or give rise to, consciousness.  While Leibniz’s mill argument is about perception generally, rather than conscious perception in particular, the underlying structure of the argument appears to be similar: mental states have characteristics—such as their unity or their phenomenal properties—that, it seems, cannot even in principle be explained physically.  There is an explanatory gap between the physical and the mental.

4. The Relation between Mind and Body

The mind-body problem is a central issue in the philosophy of mind.  It is, roughly, the problem of explaining how mind and body can causally interact.  That they interact seems exceedingly obvious: my mental states, such as for instance my desire for a cold drink, do seem capable of producing changes in my body, such as the bodily motions required for walking to the fridge and retrieving a bottle of water.  Likewise, certain physical states seem capable of producing changes in my mind: when I stub my toe on my way to the fridge, for instance, this event in my body appears to cause me pain, which is a mental state.  For Descartes and his followers, it is notoriously difficult to explain how mind and body causally interact.  After all, Cartesians are substance dualists: they believe that mind and body are substances of a radically different type (see Descartes: Mind-Body Distinction).  How could a mental state such as a desire cause a physical state such as a bodily motion, or vice versa, if mind and body have absolutely nothing in common?  This is the version of the mind-body problem that Cartesians face.

For Leibniz, the mind-body problem does not arise in exactly the way it arises for Descartes and his followers, because Leibniz is not a substance dualist.  We have already seen that, according to Leibniz, an animal or human being has a central monad, which constitutes its soul, as well as subordinate monads that are everywhere in its body.  In fact, Leibniz appears to hold that the body just is the collection of these subordinate monads and their perceptions (see e.g. Principles of Nature and Grace section 3), or that bodies result from monads (Ariew and Garber, p. 179).  After all, as already seen, he holds that purely material, extended things would not only be incapable of perception, but would also not be real because of their infinite divisibility.  The only truly real things, for Leibniz, are monads, that is, immaterial and indivisible substances.  This means that Leibniz, unlike Descartes, does not believe that there are two fundamentally different kinds of substances, namely physical and mental substances.  Instead, for Leibniz, all substances are of the same general type.  As a result, the mind-body problem may seem more tractable for Leibniz: if bodies have a semi-mental nature, there are fewer obvious obstacles to claiming that bodies and minds can interact with one another.

Yet, for complicated reasons that are beyond the scope of this article (but see Leibniz: Causation), Leibniz held that human minds and their bodies—as well as any created substances, in fact—cannot causally interact.  In this, he agrees with occasionalists such as Nicolas Malebranche.  Leibniz departs from occasionalists, however, in his positive account of the relation between mental and corresponding bodily events.  Occasionalists hold that God needs to intervene in nature constantly to establish this correspondence.  When I decide to move my foot, for instance, God intervenes and moves my foot accordingly, occasioned by my decision.  Leibniz, however, thinks that such interventions would constitute perpetual miracles and be unworthy of a God who always acts in the most perfect manner.  God arranged things so perfectly, Leibniz contends, that there is no need for these divine interventions.  Even though he endorses the traditional theological doctrine that God continually conserves all creatures in existence and concurs with their actions (see Leibniz: Causation), Leibniz stresses that all natural events in the created world are caused and made intelligible by the natures of created things.  In other words, Leibniz rejects the occasionalist doctrine that God is the only active, efficient cause, and that the laws of nature that govern natural events are merely God’s intentions to move his creatures around in a particular way.  Instead for Leibniz these laws, or God’s decrees about the ways in which created things should behave, are written into the natures of these creatures.  God not only decided how creatures should act, but also gave them natures and natural powers from which these actions follow.  To understand the regularities and events in nature, we do not need to look beyond the natures of creatures.  This, Leibniz claims, is much more worthy of a perfect God than the occasionalist world, in which natural events are not internally intelligible.

How, then, does Leibniz explain the correspondence between mental and bodily states if he denies that there is genuine causal interaction among finite things and also denies that God brings about the correspondence by constantly intervening?  Consider again the example in which I decide to get a drink from the fridge and my body executes that decision.  It may seem that unless there is a fairly direct link between my decision and the action—either a link supplied by God’s intervention, or by the power of my mind to cause bodily motion—it would be an enormous coincidence that my body carries out my decision.  Yet, Leibniz thinks there is a third option, which he calls ‘pre-established harmony.’  On this view, God created my body and my mind in such a way that they naturally, but without any direct causal links, correspond to one another.  God knew, before he created my body, that I would decide to get a cold drink, and hence made my body in such a way that it will, in virtue of its own nature, walk to the fridge and get a bottle of water right after my mind makes that decision.

In one text, Leibniz provides a helpful analogy for his doctrine of pre-established harmony.  Imagine two pendulum clocks that are in perfect agreement for a long period of time.  There are three ways to ensure this kind of correspondence between them: (a) establishing a causal link, such as a connection between the pendulums of these clocks, (b) asking a person constantly to synchronize the two clocks, and (c) designing and constructing these clocks so perfectly that they will remain perfectly synchronized without any causal links or adjustments (see Ariew and Garber, pp. 147-148).  These three options correspond to causal interaction, occasionalism, and pre-established harmony, respectively.  Option (c), Leibniz contends, is superior to the other two options, and it is in this way that God ensures that the states of my mind correspond to the states of my body, or in fact, that the perceptions of any created substance harmonize with the perceptions of any other.  The world is arranged and designed so perfectly that events in one substance correspond to events in another substance even though they do not causally interact, and even though God does not intervene to adjust one to the other.  Because of his infinite wisdom and foreknowledge, God was able to pre-establish this mutual correspondence or harmony when he created the world, analogously to the way a skilled clockmaker can construct two clocks that perfectly correspond to one another for a period of time.

5. References and Further Reading

a. Primary Sources in English Translation

  • Ariew, Roger and Daniel Garber, eds. Philosophical Essays. Indianapolis: Hackett, 1989.
    • Contains translations of many of Leibniz’s most important shorter writings such as the Monadology, the Principles of Nature and Grace, the Discourse on Metaphysics, and excerpts from Leibniz’s correspondence, to name just a few.
  • Ariew, Roger, ed.  Correspondence [between Leibniz and Clarke]. Indianapolis: Hackett, 2000.
    • A translation of Leibniz’s correspondence with Samuel Clarke, which touches on many important topics in metaphysics and philosophy of mind.
  • Francks, Richard and Roger S. Woolhouse, eds. Leibniz's 'New System' and Associated Contemporary Texts. Oxford: Oxford University Press, 1997.
    • Contains English translations of additional short texts.
  • Francks, Richard and Roger S. Woolhouse, eds. Philosophical Texts. Oxford: Oxford University Press, 1998.
    • Contains English translations of additional short texts.
  • Huggard, E. M., ed. Theodicy: Essays on the Goodness of God, the Freedom of Man and the Origin of Evil. La Salle: Open Court, 1985.
    • Translation of the only philosophical monograph Leibniz published in his lifetime, which contains many important discussions of free will.
  • Lodge, Paul, ed. The Leibniz–De Volder Correspondence: With Selections from the Correspondence between Leibniz and Johann Bernoulli. New Haven: Yale University Press, 2013.
    • An edition, with English translations, of Leibniz’s correspondence with De Volder, which is a very important source of information about Leibniz’s mature metaphysics.
  • Loemker, Leroy E., ed. Philosophical Papers and Letters. Dordrecht: D. Reidel, 1970.
    • Contains English translations of additional short texts.
  • Look, Brandon and Donald Rutherford, eds. The Leibniz–Des Bosses Correspondence. New Haven: Yale University Press, 2007.
    • An edition, with English translations, of Leibniz’s correspondence with Des Bosses, which is another important source of information about Leibniz’s mature metaphysics.
  • Parkinson, George Henry Radcliffe and Mary Morris, eds. Philosophical Writings. London: Everyman, 1973.
    • Contains English translations of additional short texts.
  • Remnant, Peter and Jonathan Francis Bennett, eds. New Essays on Human Understanding. Cambridge: Cambridge University Press, 1996.
    • Translation of Leibniz’s section-by-section response to Locke’s Essay Concerning Human Understanding, written in the form of a dialogue between the two fictional characters Philalethes and Theophilus, who represent Locke’s and Leibniz’s views, respectively.
  • Rescher, Nicholas, ed. G.W. Leibniz's Monadology: An Edition for Students. Pittsburgh: University of Pittsburgh Press, 1991.
    • An edition, with English translation, of the Monadology, with commentary and a useful collection of parallel passages from other Leibniz texts.
  • Strickland, Lloyd H., ed. The Shorter Leibniz Texts: A Collection of New Translations. London: Continuum, 2006.
    • Contains English translations of additional short texts.

b. Secondary Sources

  • Adams, Robert Merrihew. Leibniz: Determinist, Theist, Idealist. New York: Oxford University Press, 1994.
    • One of the most influential and rigorous works on Leibniz’s metaphysics.
  • Borst, Clive. "Leibniz and the Compatibilist Account of Free Will." Studia Leibnitiana 24.1 (1992): 49-58.
    • About Leibniz’s views on free will.
  • Brandom, Robert. "Leibniz and Degrees of Perception." Journal of the History of Philosophy 19 (1981): 447-79.
    • About Leibniz’s views on perception and perceptual distinctness.
  • Davidson, Jack. "Imitators of God: Leibniz on Human Freedom." Journal of the History of Philosophy 36.3 (1998): 387-412.
    • Another helpful article about Leibniz’s views on free will and on the ways in which human freedom resembles divine freedom.
  • Davidson, Jack. "Leibniz on Free Will." The Continuum Companion to Leibniz. Ed. Brandon Look. London: Continuum, 2011. 208-222.
    • Accessible general introduction to Leibniz’s views on freedom of the will.
  • Duncan, Stewart. "Leibniz's Mill Argument Against Materialism." Philosophical Quarterly 62.247 (2011): 250-72.
    • Helpful discussion of Leibniz’s mill argument.
  • Garber, Daniel. Leibniz: Body, Substance, Monad. New York: Oxford University Press, 2009.
    • A thorough study of the development of Leibniz’s metaphysical views.
  • Gennaro, Rocco J. "Leibniz on Consciousness and Self-Consciousness." New Essays on the Rationalists. Eds. Rocco J. Gennaro and C. Huenemann. Oxford: Oxford University Press, 1999. 353-371.
    • Discusses Leibniz’s views on consciousness and highlights the advantages of reading Leibniz as endorsing a higher-order thought theory of consciousness.
  • Jolley, Nicholas. Leibniz. London; New York: Routledge, 2005.
    • Good general introduction to Leibniz’s philosophy; includes chapters on the mind and freedom.
  • Jorgensen, Larry M. "Leibniz on Memory and Consciousness." British Journal for the History of Philosophy 19.5 (2011a): 887-916.
    • Elaborates on Jorgensen (2009) and discusses the role of memory in Leibniz’s theory of consciousness.
  • Jorgensen, Larry M. "Mind the Gap: Reflection and Consciousness in Leibniz." Studia Leibnitiana 43.2 (2011b): 179-95.
    • About Leibniz’s account of reflection and reasoning.
  • Jorgensen, Larry M. "The Principle of Continuity and Leibniz's Theory of Consciousness." Journal of the History of Philosophy 47.2 (2009): 223-48.
    • Argues against ascribing a higher-order theory of consciousness to Leibniz.
  • Kulstad, Mark. Leibniz on Apperception, Consciousness, and Reflection. Munich: Philosophia, 1991.
    • Influential, meticulous study of Leibniz’s views on consciousness.
  • Kulstad, Mark. "Leibniz, Animals, and Apperception." Studia Leibnitiana 13 (1981): 25-60.
    • A shorter discussion of some of the issues in Kulstad (1991).
  • Lodge, Paul, and Marc E. Bobro. "Stepping Back Inside Leibniz's Mill." The Monist 81.4 (1998): 553-72.
    • Discusses Leibniz’s mill argument.
  • Lodge, Paul. "Leibniz's Mill Argument Against Mechanical Materialism Revisited." Ergo (2014).
    • Further discussion of Leibniz’s mill argument.
  • McRae, Robert. Leibniz: Perception, Apperception, and Thought. Toronto: University of Toronto Press, 1976.
    • An important and still helpful, even if somewhat dated, study of Leibniz’s philosophy of mind.
  • Murray, Michael J. "Spontaneity and Freedom in Leibniz." Leibniz: Nature and Freedom. Eds. Donald Rutherford and Jan A. Cover. Oxford: Oxford University Press, 2005. 194-216.
    • Discusses Leibniz’s views on free will and self-determination, or spontaneity.
  • Phemister, Pauline. "Leibniz, Freedom of Will and Rationality." Studia Leibnitiana 26.1 (1991): 25-39.
    • Explores the connections between rationality and freedom in Leibniz.
  • Rozemond, Marleen. "Leibniz on the Union of Body and Soul." Archiv für Geschichte der Philosophie 79.2 (1997): 150-78.
    • About the mind-body problem and pre-established harmony in Leibniz.
  • Rozemond, Marleen. "Mills Can't Think: Leibniz's Approach to the Mind-Body Problem." Res Philosophica 91.1 (2014): 1-28.
    • Another helpful discussion of the mill argument.
  • Savile, Anthony. Routledge Philosophy Guidebook to Leibniz and the Monadology. New York: Routledge, 2000.
    • Very accessible introduction to Leibniz’s Monadology.
  • Simmons, Alison. "Changing the Cartesian Mind: Leibniz on Sensation, Representation and Consciousness." The Philosophical Review 110.1 (2001): 31-75.
    • Insightful discussion of the ways in which Leibniz’s philosophy of mind differs from the Cartesian view; also argues that Leibnizian consciousness consists in higher-order perceptions.
  • Sotnak, Eric. "The Range of Leibnizian Compatibilism." New Essays on the Rationalists. Eds. Rocco J. Gennaro and C. Huenemann. Oxford: Oxford University Press, 1999. 200-223.
    • About Leibniz’s theory of freedom.
  • Swoyer, Chris. "Leibnizian Expression." Journal of the History of Philosophy 33 (1995): 65-99.
    • About Leibnizian perception.
  • Wilson, Margaret Dauler. "Confused Vs. Distinct Perception in Leibniz: Consciousness, Representation, and God's Mind." Ideas and Mechanism: Essays on Early Modern Philosophy. Princeton: Princeton University Press, 1999. 336-352.
    • About Leibnizian perception as well as perceptual distinctness.

 

Author Information

Julia Jorati
Email: jorati.1@osu.edu
The Ohio State University
U. S. A.

The Problem of the Criterion

The Problem of the Criterion is considered by many to be a fundamental problem of epistemology.  In fact, Chisholm (1973, 1) claims that the Problem of the Criterion is “one of the most important and one of the most difficult of all the problems of philosophy.” A popular form of the Problem of the Criterion can be raised by asking two seemingly innocent questions: What do we know? How are we to decide in any particular case whether we have knowledge?  One quickly realizes how troubling the Problem of the Criterion is because it seems that before we can answer the first question we must already have an answer to the second question, but it also seems that before we can answer the second question we must already have an answer to the first question.  That is, it seems that before we can determine what we know we must first have a method or criterion for distinguishing cases of knowledge from cases that are not knowledge.  Yet, it seems that before we can determine the appropriate criterion of knowledge we must first know which particular instances are in fact knowledge.  So, we seem to be stuck going around a circle without any way of getting our epistemological theorizing started.  Although there are various ways of responding to the Problem of the Criterion, the problem is difficult precisely because it seems that each response comes at a cost.  This article examines the nature of the Problem and the costs associated with the most promising responses to the Problem.

Table of Contents

  1. The Problem
  2. Chisholm on the Problem of the Criterion
  3. Other Responses to the Problem of the Criterion
    1. Explanationist Responses
      1. Explanatory Particularism
      2. Coherentism
      3. Applied Evidentialism
    2. Dissolution
  4. The Problem of the Criterion’s Relation to Other Philosophical Problems
  5. References and Further Reading

1. The Problem

The Problem of the Criterion is the ancient problem of the “wheel” or the “diallelus”.  It comes to us from Book 2 of Sextus Empiricus’ Outlines of Pyrrhonism.  Sextus presents the Problem of the Criterion as a major issue in the debate between the Academic Skeptics and the Stoics.  After Sextus’ presentation, though, philosophers largely seemed to lose interest in the Problem of the Criterion until the modern period.  The problem resurfaced in the late 1500s with Michel de Montaigne’s “Apology for Raymond Sebond”, where it again had a significant influence.  Following the modern period, however, the Problem of the Criterion largely disappeared until the early 19th century, when G.W.F. Hegel (1807) presented the problem and, arguably, put forward one of the first coherentist responses to it (Rockmore (2006) and Aikin (2010)).  In the late 19th and early 20th centuries Cardinal D.J. Mercier (1884) and his student P. Coffey (1917) again reminded the world of the problem.  In the late 20th century the Problem of the Criterion played an important role in the work of two philosophers: Roderick Chisholm and Nicholas Rescher.  In fact, it is primarily due to the work of Roderick Chisholm that the Problem of the Criterion is discussed by contemporary epistemologists at all.  (See Amico (1993) and Popkin (2003) for further discussion of the historical development of the Problem of the Criterion.)

In light of Chisholm’s enormous influence on contemporary discussions of the Problem of the Criterion, his presentation of the problem is a fitting place to begin.  Chisholm (1973, 12) often introduces the Problem of the Criterion with the following pairs of questions:

(A)  What do we know? What is the extent of our knowledge?

(B) How are we to decide whether we know? What are the criteria of knowledge?

However, Chisholm also speaks approvingly of Montaigne’s presentation of the Problem of the Criterion, which is in terms of true/false appearances rather than knowledge.  Further, there is some ambiguity in Chisholm’s own discussions of the Problem of the Criterion as to whether the problem presented by the Problem of the Criterion is the meta-epistemological problem of determining when we have knowledge or the epistemological problem of determining what is true.  So, there is a difficulty in determining exactly what problem the Problem of the Criterion is supposed to pose.

The fact that Chisholm’s discussion oscillates between these two versions of the Problem of the Criterion and the fact that he seems to be aware of the two versions of the problem help make it clear that perhaps there is no such thing as the Problem of the Criterion. Perhaps the Problem of the Criterion is rather a set of related problems.  This is something that many philosophers since Chisholm, and Chisholm himself (see his 1977), have noted.  For instance, Robert Amico (1993) argues that Chisholm mistakenly takes himself to be discussing the same problem as Sextus Empiricus when he considers the Problem of the Criterion.  Richard Fumerton (2008) points out that there are at least two versions of the Problem of the Criterion.  The first is a methodological problem of trying to identify sources of knowledge or justified belief (this, he claims, is the version of the problem that Chisholm focuses on).  The second is the problem of trying to identify the necessary and sufficient conditions for correctly applying concepts such as ‘knowledge’ or ‘justification’.  Michael DePaul (1988, 70) expresses a version of the Problem of the Criterion limited to moral discourse in terms of two questions: “Which of our actions are morally right?” and “What are the criteria of right action?”

Since there are many versions of the Problem of the Criterion, one might worry that it will be nearly impossible to formulate the Problem of the Criterion precisely.  Fortunately, this is not the case.  Although there are many particular instances of the Problem of the Criterion, they all seem to be questions of epistemic priority.  In other words, the various versions of the Problem of the Criterion are focused on trying to answer the question “how is it possible to theorize in epistemology without taking anything epistemic for granted?” (Conee 2004, 17).  More generally: how is it possible to theorize at all without making arbitrary assumptions? Hence, perhaps the best way to formulate the Problem of the Criterion in its most general form is with the following pair of questions (Cling (1994) and McCain and Rowley (2014)):

(1) Which propositions are true?

(2) How can we tell which propositions are true?

Plausibly, all the various formulations of particular versions of the Problem of the Criterion can be understood as instances of the problem one faces when trying to answer these general questions.

Before moving on it is important to be clear about the nature of (1) and (2).  These are not questions about the nature of truth itself.  Rather, these are epistemological questions concerning which propositions we should think are true and what the correct criteria are for determining whether a proposition should be accepted as true or false.  It is possible that one could have answers to these questions without possessing any particular theory of truth, or even taking a stand at all as to the correct theory of truth.  Additionally, it is possible to have a well-developed theory of the nature of truth without having an answer to either (1) or (2).  So, the issue at the heart of the Problem of the Criterion is how to start our epistemological theorizing in the correct way, not how to discover a theory of the nature of truth.

Most would admit that it is important to start our epistemological theorizing in an appropriate way by not taking anything epistemic for granted, if possible.  However, this desire to start theorizing in the right way coupled with the questions of the Problem of the Criterion does not yield a problem—it is merely a desire we have and questions we need to answer.  The problem yielded by the Problem of the Criterion arises because one might plausibly think that we cannot answer (1) until we have an answer to (2), but we cannot answer (2) until we have an answer to (1).  So, at least initially, consideration of the Problem of the Criterion makes it seem that we cannot get our theorizing started at all.  This seems to land us in a pretty extreme form of skepticism—we cannot even begin the project of trying to determine which propositions to accept as true.

Of course, there are anti-skeptical ways to respond to the Problem of the Criterion.  According to Chisholm, these anti-skeptical responses are question-begging.  In light of this one might think that extreme skepticism is inevitable.  However, this might not be correct.  The extreme skepticism threatened by the Problem of the Criterion itself seems guilty of begging the question.  This is why Chisholm (1973, 37) claims “we can deal with the problem only by begging the question.”

2. Chisholm on the Problem of the Criterion

According to Chisholm, there are only three responses to the Problem of the Criterion: particularism, methodism, and skepticism.  The particularist assumes an answer to (1) and then uses that to answer (2), whereas the methodist assumes an answer to (2) and then uses that to answer (1).  The skeptic claims that you cannot answer (1) without first having an answer to (2) and you cannot answer (2) without first having an answer to (1), and so you cannot answer either.  Chisholm claims that, unfortunately, regardless of which of these responses to the Problem of the Criterion we adopt, we are forced to beg the question.  It will be worth examining each of the responses that Chisholm considers and how each begs the question against the others.

The particularist assumes an answer to (1) that does not epistemically depend on an answer to (2) and uses her answer to (1) to answer (2). More precisely, the particularist response to the Problem of the Criterion is:

Particularism             Assume an answer to (1) (accept some set of propositions as true) that does not depend on an answer to (2) and use the answer to (1) to answer (2).

What is the epistemic status of the particularist’s answer to (1)? Chisholm (1973, 37) seems to take it that its status is weak, being nothing more than an assumption:

But in all of this I have presupposed the approach I have called “particularism.” The “methodist” and the “skeptic” will tell us that we have started in the wrong place. If now we try to reason with them, then, I am afraid, we will be back on the wheel.

One might think that the question-begging only occurs if the particularist tries to reason with her methodist or skeptical interlocutors.  So, one might think the problem for particularism is simply a lack of reasons in support of particularism that advocates of methodism or skepticism would accept.

However, things are worse than this.  The real problem with particularism is not simply the dialectical problem of providing grounds that methodists and skeptics will accept; rather, it is an epistemic problem.  The problem is that the particularist’s starting point is an unfounded assumption.  Particularism starts with a set of particular propositions and works from there.  If the particularist goes beyond that set to provide reasons for accepting its members, she abandons her particularist response: either she assumes a new set of particular propositions (a new particularist response), or she assumes something other than a set of particular propositions and thereby ceases to be a particularist.  So, the problem for the particularist response is much deeper than a dialectical problem that arises only when dealing with opposing views.  The particularist cannot offer reasons for particularism beyond the unfounded assumption of a set of particular propositions.  By simply assuming an answer to (1), the particularist begs the question against both the methodist and the skeptic.

Particularism is not unique in begging the question though.  It seems that methodism begs the question too.  The methodist response to the Problem of the Criterion is:

Methodism                 Assume an answer to (2) (accept some criterion to be a correct criterion of truth – one that successfully discriminates true propositions from false ones) that does not depend on an answer to (1) and use the answer to (2) to answer (1).

Since methodism begins by assuming that some criterion is a correct criterion of truth without providing any epistemic reason to prefer this response to the alternatives, it begs the question against particularism and skepticism.

The skeptical response to the Problem of the Criterion assumes that both particularism and methodism are mistaken.   That is, the skeptical response to the Problem of the Criterion assumes that there is no answer to (1) that does not depend on an answer to (2) and there is no answer to (2) that does not depend on an answer to (1). As Chisholm (1973, 14) explains the response:

And so we can formulate the position of the skeptic on these matters. He will say: ‘You cannot answer question 1 until you answer question 2. And you cannot answer question 2 until you answer question 1. Therefore, you cannot answer either question. You cannot know what, if anything, you know, and there is no possible way for you to decide in any particular case.’

(The names of the questions have been changed from Chisholm’s “A” and “B” to “1” and “2”, respectively, in this quote in order to maintain continuity with the present discussion.)

A bit more succinctly:

Skepticism                  Assume that (i) there is no independent answer to (1) or (2), and (ii) if (1) and (2) cannot be answered independently, they cannot be answered at all.
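
The logical structure of this response can be regimented in a rough way (the following schematic formalization is illustrative, not Chisholm’s own).  Let $A_1$ and $A_2$ abbreviate “question (1) can be answered” and “question (2) can be answered,” and let $I$ abbreviate “there is an independent answer to (1) or (2).”  The skeptic’s assumptions (i) and (ii) and conclusion then run:

\[
\text{(i) } \neg I \qquad \text{(ii) } \neg I \rightarrow (\neg A_1 \wedge \neg A_2) \qquad \therefore \; \neg A_1 \wedge \neg A_2
\]

The inference is a simple modus ponens, so any dispute over this response concerns the premises, which, as discussed below, are themselves unsupported assumptions.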

According to Chisholm, the skeptical response has no more to recommend it than particularism or methodism.  The reason for this is that skepticism, as a response to the Problem of the Criterion, is question-begging.  The skeptic simply assumes that there is no independent answer to (1) or (2), and though both the particularist and the methodist deny this assumption, they can only respond by appealing to assumptions of their own.  The skeptic has no reasons to support the assumption that there is no independent answer to (1) or (2).  The conflict between the three responses that Chisholm considers comes down to ungrounded assumptions.  It is because of this fact that Chisholm claims that, when facing the Problem of the Criterion, we have no choice but to beg the question.  Since all responses beg the question, skepticism is no better off than any other response to the Problem of the Criterion.

At this point it is worth getting clear on two further points about the skeptical response.  First, it should be noted that the skeptical response is not the only response that might lead to a thoroughgoing skepticism.  For instance, one might be a methodist who assumes the criterion for distinguishing true from false propositions is absolute certainty.  That is, a methodist might think that the only way to tell whether a proposition is true is for the truth of the proposition to be absolutely certain for her.  Pretty clearly this sort of methodism will lead to a fairly extreme skepticism.  One of the lessons of Cartesian skepticism is that it is implausible to think that we can be absolutely certain about the truth of any proposition about the external world.

Second, one might think that the skeptical response to the Problem of the Criterion really is better off than particularism or methodism. One might think that the skeptical response simply emerges from consideration of the problems facing both particularism and methodism, and so does not have to make any assumptions of its own.

Although the skeptical response may arise in this way, it does not absolve skepticism of begging the question.  As Chisholm notes, the skeptical response has nothing in itself that makes it better than particularism or methodism; it has nothing to appeal to other than unfounded assumptions in order to motivate it over its alternatives.  Without something more than unfounded assumptions, there does not seem to be any reason to prefer the skeptic’s response, so accepting it would still beg the question.  The problems facing particularism and methodism give us no more reason to accept skepticism than the problems facing the other responses give us to accept particularism or methodism.  All three options are on equal footing when it comes to having reason to pick them over their rivals, and all three beg the question.

Each of these responses to the Problem of the Criterion begins with an unfounded assumption, one that is unsupported by reasons, and so begs the question in an epistemic sense against the other two.  Despite this and his emphasis on the fact that all three responses are unappealing because of their question-begging, Chisholm famously argues in support of particularism.  His argument in support of particularism, which he sometimes refers to as “commonsensism”, involves criticizing the other two responses and giving some reasons for preferring particularism.

Concerning methodism, Chisholm offers two objections.  First, he objects that the criterion that methodism starts with will be “very broad and far-reaching and at the same time completely arbitrary” (1973, 17).  Essentially, he thinks that there can be no good reason for starting with a broad criterion.  Second, he objects that methodism (at least of the empiricist variety that he considers in detail) will lead to skepticism.  When we adopt the methodist’s broad criterion it will turn out that many of the things we commonsensically take ourselves to know do not count as knowledge.  Chisholm finds this unacceptable.

Chisholm’s case against the skeptical response to the Problem of the Criterion seems also to come down to two things.  The first is quite plain.  If methodism is flawed because it will lead to skepticism concerning many areas where we take ourselves to have knowledge, it is no surprise that Chisholm finds the skeptical response to the Problem of the Criterion to be unacceptable.  It too has this result.  In fact, the skeptical response is in a sense doubly skeptical.  It not only holds that we lack knowledge in areas where we typically take ourselves to have knowledge; it also holds that we cannot even begin the process of determining what we do know.  The second problem Chisholm seems to have with skepticism is simply that it has no more to recommend it than either of the other views.  Admittedly, this does not seem to be much of a criticism, especially since he grants that all three responses make unfounded assumptions.

Unfortunately, Chisholm’s positive support for particularism is very sparse.  In fact, in his Aquinas Lectures he claims only, “in favor of our approach [particularism] there is the fact that we do know many things, after all” (1973, 38).  But, of course, this seems merely to be a statement of the assumption made by particularism, not a defense of it.  As a solution to the Problem of the Criterion, Chisholm’s particularism seems to be lacking.  In fact, Robert Amico (1988b) argues that Chisholm’s “solution” is clearly unacceptable because Chisholm does not give us good independent reasons to reject either methodism or skepticism, he does not provide good reasons to prefer particularism to the other responses, and, as Chisholm himself admits, particularism begs the question.

Given the very weak argument in support of his preferred view, one might wonder what Chisholm is really up to when he discusses the Problem of the Criterion.  Throughout the many works in which he discusses the Problem of the Criterion, Chisholm consistently favors particularism, but he also makes it clear that all responses to the problem are unappealing and that his own view must, just like its rivals, beg the question.  In response to Amico’s criticisms, Chisholm claims that particularism is superior to methodism and skepticism because by being a particularist one can give a reasonable account of knowledge, whereas one cannot make progress in epistemology by taking a methodist or skeptical approach.

A few further points about Chisholm’s take on the Problem of the Criterion that are often overlooked are worth mentioning here.  First, he claims that we should remain open to the possibility of one day discovering a version of methodism that fares better than the empiricist version he criticizes.  Second, Chisholm is adamant that in supporting particularism he is not trying to solve the Problem of the Criterion because “the problem of the criterion has no solution” (1988, 234).  So, Chisholm thinks that particularism is simply the best of a set of bad options—the options are bad because they beg the question; particularism is best because it allows us to make progress in epistemology.

3. Other Responses to the Problem of the Criterion

Chisholm claimed that there are only three responses to the Problem of the Criterion and that there is no solution to this problem.  Many philosophers disagree with Chisholm on both points.  In fact, Andrew Cling (1994) argues that there are eight non-skeptical responses to the Problem of the Criterion.  Importantly, Cling does not consider two of the non-skeptical responses discussed below.  So, if Cling is correct about the eight non-skeptical approaches he mentions, and the two additional approaches discussed below are distinct responses, there are at least eleven responses to the Problem of the Criterion (ten non-skeptical and one skeptical).  While there are many possible responses to the Problem of the Criterion, the focus here will be limited to those that have been defended in the literature.

a. Explanationist Responses

As noted above, there are a number of responses to the Problem of the Criterion beyond the three kinds that Chisholm considers.  The employment of explanatory reasoning offers promising alternatives to the responses Chisholm considers.  These explanationist responses share a commitment to explanatory reasoning—they all involve attempting to answer (1) and (2) in a way that yields the most satisfactory explanatory picture.  A helpful way of understanding explanationist responses is as employing the method of reflective equilibrium to respond to the Problem of the Criterion. Roughly, the method of reflective equilibrium involves starting with a set of data (beliefs, intuitions, etc.) and making revisions to that set—giving up some of the data, adding new data to the set, giving more/less weight to some of the data, and so on—so as to create the best explanatory picture overall.  Reaching this equilibrium state of maximized explanatory coherence of the remaining data is thought to make accepting whatever data remains, whether this includes any of one’s initial data or not, reasonable (see coherentism and John Rawls for more on reflective equilibrium).  Of course, there have been criticisms of the viability of reflective equilibrium as a method of reasoning; however, for current purposes these can be set aside because the ultimate concern here is simply the sort of responses that can be generated by employing reflective equilibrium.
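
Because reflective equilibrium is described here as a procedure, its bare structure can be displayed as a loop.  The following sketch is a toy illustration only: it assumes a crude coherence measure (the fraction of pairs of data that do not conflict) and models just one kind of revision (giving up a datum); the function names and example data are hypothetical and come from no particular account in the literature.

    from itertools import combinations

    def coherence(data, conflicts):
        # Toy measure: the fraction of pairs of data items not marked as conflicting.
        pairs = [frozenset(p) for p in combinations(data, 2)]
        if not pairs:
            return 1.0
        return sum(p not in conflicts for p in pairs) / len(pairs)

    def reflective_equilibrium(data, conflicts):
        # Give up one datum at a time whenever doing so raises overall coherence,
        # stopping once no single revision helps: a crude 'equilibrium' state.
        data = set(data)
        while True:
            best_score, best_set = coherence(data, conflicts), data
            for item in data:
                trial = data - {item}           # candidate revision: give up a datum
                score = coherence(trial, conflicts)
                if score > best_score:
                    best_score, best_set = score, trial
            if best_set == data:                # no revision improves coherence
                return data
            data = best_set

    # Toy run: the datum 'I know I have hands' conflicts with the datum
    # 'certainty is required for knowledge'; either may be given up, since
    # this crude measure cannot break the tie on its own.
    beliefs = {"I know I have hands", "certainty is required for knowledge", "2+2=4"}
    conflicts = {frozenset({"I know I have hands", "certainty is required for knowledge"})}
    print(reflective_equilibrium(beliefs, conflicts))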

There are a variety of ways that one might attempt to respond to the Problem of the Criterion by using the method of reflective equilibrium.  The variation in these responses is largely a result of what one includes in the set of data that will form the basis for one’s reflection.  It is worth considering some of the more promising varieties of this response that have been put forward in the literature.

i. Explanatory Particularism

Although the explanatory particularism defended by Paul Moser (1989) is a kind of particularism, its explanationist elements warrant discussing it as a separate variety of response.  Moser’s (1989, 261) explanatory particularism begins with one’s “considered, but revisable, judgments” concerning particular propositions.  This is importantly different from the sort of particularism that Chisholm describes because explanatory particularism allows that the beliefs about the truth of particular propositions are revisable whereas particularism as Chisholm describes it does not clearly allow for this.  It is because of this that Moser claims that explanatory particularism does not beg the question against skeptics by ruling out skepticism from the start.  Importantly, the kind of skepticism Moser is discussing here is not the skeptical response to the Problem of the Criterion, but rather the sort of skepticism that grants that we can get started in epistemological theorizing while claiming that ultimately we will end up lacking knowledge in a wide range of cases. External world skepticism is an example of this sort of skepticism; it grants that we are aware of what is required for knowledge, but claims that we simply fail to have knowledge of the world around us.  Like Chisholm’s particularism, explanatory particularism uses this initial set of propositions (i.e. this answer to (1)) to develop epistemic principles or criteria for truth (i.e. to answer (2)).  The initial set of propositions and criteria are both continually revised until a state of maximal explanatory coherence is reached.

Moser claims that explanatory particularism avoids begging the question in the way that Chisholm’s particularism or methodism does.  The reason for this is that Moser claims that the beliefs that explanatory particularism starts with are revisable.  Despite this and Moser’s claim that explanatory particularism does not beg the question against the skeptic, it is not clear that it avoids begging the question against the skeptical response to the Problem of the Criterion.  After all, explanatory particularism assumes an independent answer to (1)—revisable or not it is still an answer—and then uses that to answer (2).  So, it at least seems that explanatory particularism begs the question against the skeptical response by denying the skeptic’s assumption that there is no independent answer to (1) or (2).

Ernest Sosa (2009) also defends a view that we might call explanatory particularism.  On Sosa’s view we begin with particular items of knowledge.  That is, we start with particular propositions that we know to be true.  According to Sosa, we know these propositions because our beliefs with respect to these propositions satisfy a correct general criterion of knowledge (they are formed by sufficiently reliable cognitive faculties).  Although we have knowledge of these propositions, we merely have what he terms “animal knowledge”.  Our knowledge of these propositions when we begin is only animal knowledge because we lack a higher-order perspective on these beliefs.  That is to say, we lack “reflective knowledge” of the fact that these first-order beliefs satisfy the proper criterion of knowledge.  However, on Sosa’s view we use our animal knowledge to develop a perspective on our epistemic situation that offers us an explanatory picture about how or why our first-order beliefs really do constitute knowledge, i.e. we develop reflective knowledge as to how our particular pieces of animal knowledge satisfy the proper criterion for knowledge.  This explanatory perspective yields reflective knowledge and it strengthens our animal knowledge.  A significant component of this picture is that we use our starting animal knowledge to come to answer both (1) and (2) from a reflective standpoint.  So, we begin with an answer to (1) in terms of animal knowledge and we use that answer to develop a perspective that gives us an answer to (2) with respect to both animal and reflective knowledge and an answer to (1) in terms of reflective knowledge.  Sosa’s explanatory response to the Problem of the Criterion relies on a mixture of levels.

Although Sosa’s explanatory particularism with its multiple levels seems more complex than Moser’s, one might think that it begs the question in the same way that Moser’s response does.  Namely, Sosa’s response, like Moser’s, assumes an independent answer to (1).  Sosa’s assumed answer is only in terms of animal knowledge; however, it is still an answer.  His response then requires using that answer to develop an explanatory perspective that provides an answer to (2).  Thus, Sosa’s response seems to beg the question against the skeptical response in the same way that Moser’s does: by denying the skeptic’s assumption (i) that there is no independent answer to (1) or (2).

ii. Coherentism

Coherentism responds to the Problem of the Criterion by starting with both beliefs about which propositions are true and beliefs about the correct method or methods for telling which beliefs are true.  It then uses these beliefs to attempt to answer both (1) and (2) at the same time (DePaul 1988 & 2009, Cling 1994, and Poston 2011). As Andrew Cling (1994, 274) explains:

To be a coherentist is to reject the epistemic priority of beliefs and criteria of truth. Instead, coherentists recommend balancing beliefs against criteria and criteria against beliefs until they all form a consistent, mutually supporting system.

The coherentist does not simply assume that the criterion of truth is to balance “beliefs against criteria and criteria against beliefs.”  To understand coherentism in this way would simply make it a variety of methodism, and so fail to appreciate the importance of its employment of reflective equilibrium.  Instead, the coherentist response involves starting with both beliefs about criteria of truth and beliefs that particular propositions are true, and then making adjustments to beliefs of either kind in an attempt to reach a state of reflective equilibrium.  Once this equilibrium state has been reached, the coherentist uses it to complete her answers to (1) and (2).

On one understanding of coherentism (Cling’s 1994 and Poston’s 2011) the coherentist accepts one of the skeptic’s assumptions, but denies the other.  In particular, this version of coherentism shares the skeptic’s assumption of (i) (there is no independent answer to (1) or (2)), but denies (ii) (if (1) and (2) cannot be answered independently, they cannot be answered at all).  More precisely, on this way of understanding coherentism it involves accepting (i) of Skepticism and adding to it the further assumptions that: (a) a particular criterion is correct (namely, explanatory goodness), (b) a set of particular propositions are true, and (c) the criterion and the set of propositions are not independent of each other.  However, it seems that if one begins with beliefs about which propositions are true and beliefs about the correct criteria for telling which beliefs are true along with the assumption that there is no independent answer to (1) or (2), this version of coherentism will beg the question for reasons similar to why Skepticism begs the question.  That is to say, the coherentist’s assumption of (i) begs the question against particularism and methodism.  After all, (i) is a groundless assumption with which the coherentist starts.  It may be awareness of this feature that helped lead Cling (2009) to ultimately abandon his coherentist response in favor of a skeptical stance with respect to the Problem of the Criterion.

Another way of understanding this approach is as Michael DePaul (1988 & 2009) depicts it.  According to this way of understanding coherentism, the coherentist starts with beliefs about which particular propositions are true and about the correct criteria for telling which beliefs are true, but she does not assume (i). This version of coherentism seems to avoid begging the question against both particularists and methodists because it does not assume that we can answer (1) prior to (2) or that we can answer (2) prior to (1) nor does it assume that they cannot be answered independently.  Instead, this kind of coherentism merely applies reflective equilibrium to the coherentist’s starting set of beliefs without taking a stand on (i) at all.  Now it might turn out that after the application of reflective equilibrium the coherentist is committed to a particular position with respect to (i), but this kind of coherentism does not have to take a stand on (i) from the start.  So, in some respects this way of understanding coherentism may seem superior to the previous version of coherentism.  However, its use of beliefs in the relevant data set seems to beg the question against the skeptic because starting with beliefs about which propositions are true assumes that we can answer and in fact already have an answer to (1).  Likewise, a belief about which criteria are successful for telling which beliefs are true assumes that we can answer and have an answer to (2).   In other words, this version of coherentism seems to beg the question against skepticism by assuming that (ii) is false.  Thus, applying reflective equilibrium to a set of beliefs appears to beg the question by assuming that one of the assumptions of skepticism is false from the outset.  This may be why DePaul (2009) accepts Chisholm’s position that all responses to the Problem of the Criterion end up begging the question.

A final coherentist response is Nicholas Rescher’s “systems-theoretic approach”.  Rescher’s development of this approach takes place over several books (1973a, 1973b, 1977, and 1980).  Although Rescher’s systems-theoretic approach is complex, the relevant details for the present discussion of the Problem of the Criterion are relatively straightforward.  Rescher’s response begins by appealing to pragmatic considerations.  It starts with a method and a goal, applies the method, and checks to see whether the results satisfy the goal.  So, with respect to the Problem of the Criterion, the idea is that our goal is to come to believe true propositions and we start with some criterion for distinguishing true propositions from false ones.  We apply our criterion and then see if it helps us achieve our goal.  Assuming that the criterion does help us achieve our goal, we have completed the first step in Rescher’s process.  The second step involves showing that a pragmatically successful criterion or method is connected to the truth.  Here Rescher (1977, 107) claims that “only when all the pieces fit together” do we have justification for the criterion.  Further, he is clear that coherence is central to this process.  It is because of this that Robert Amico (1993) argues that Rescher’s view, though complex, is simply a coherentist version of methodism—Rescher ultimately assumes that coherence is the appropriate criterion of truth.  This is so even if the criterion Rescher starts with is not coherence, because his way of establishing that any starting criterion is actually a correct criterion is ultimately by appeal to coherence.  Since Rescher assumes this role for coherence from the outset, his approach seems to be a form of methodism.  Although Rescher’s approach is a kind of methodism with a significant explanatory element, and one that may make more progress in epistemology than the sort that Chisholm criticizes, it seems vulnerable to the same charge of question-begging that Chisholm leveled at other forms of methodism—a charge Rescher may accept, since he holds that the Problem of the Criterion cannot be solved, but is at best something that one can “meet and overcome” (1980, 13).

iii. Applied Evidentialism

A final explanationist response to the Problem of the Criterion is what Earl Conee (2004) calls “Applied Evidentialism” (McCain and Rowley (2014) call it the “Seeming Intuition Response”).  This explanationist response differs from the previous ways of using reflective equilibrium to respond to the Problem of the Criterion in that it does not start with a set of beliefs.  Rather, Applied Evidentialism begins with one’s evidence.  In particular, when Conee defends this view he suggests beginning with the set of intuitions or what seems true to us about various propositions.  That is to say, Applied Evidentialism begins with what seems true to us both with respect to propositions about particular items of fact and with respect to criteria for determining when propositions are true.  According to Applied Evidentialism, the way to respond to the Problem of the Criterion is to start with these intuitions and then make modifications—give up some intuitions, form different intuitions, rank some intuitions as more/less important than others, and so on— until a state of equilibrium has been reached.  Once such an equilibrium state has been reached the data from that state can be used to answer (1) and (2). 

Like the other ways of using reflective equilibrium to respond to the Problem of the Criterion, Applied Evidentialism does not seem to beg the question against particularism or methodism because it does not assume that there can be no independent answer to (1) or (2).  Additionally, Applied Evidentialism does not seem to beg the question against the skeptic because it refrains from assuming an answer to (1) or (2) at the outset.  Further, Applied Evidentialism does not assume from the start that the equilibrium state that we end up with will be anti-skeptical.  It is consistent with Applied Evidentialism that reflection on our initial intuitions will in the end lead us to the conclusion that we are unaware of which propositions are true or that we lack an appropriate criterion for discovering this information.  In other words, Applied Evidentialism does not assume that we will have an answer to (1) or (2) when we reach our end equilibrium state.  After all, it could be that our equilibrium state is one in which no method appears to be correct and our best position with respect to each proposition seems to be to suspend judgment concerning its truth.  So, Applied Evidentialism does not seem to beg any questions against the skeptical response to the Problem of the Criterion or other kinds of skepticism, such as Cartesian skepticism.

One might worry that Applied Evidentialism is really a form of methodism, and hence, open to the same charge of question begging as other kinds of methodism.  After all, Applied Evidentialism suggests that using the method of reflective equilibrium on one’s intuitions can provide a response to the Problem of the Criterion.

Upon reflection, however, it seems that Applied Evidentialism is not a kind of methodism.  Plausibly, someone can employ a method without having any beliefs about, or even conscious awareness of, the method at all.  Kevin McCain and William Rowley (2014) argue that methods are analogous to rules in this sense.  They maintain that someone might behave in accordance with a rule without intending to obey the rule or even being aware that there is such a rule at all.  For example, a driver can act in accordance with a rule of not driving faster than 50 mph simply by not driving over 50 mph.  She does not need to know that there is such a rule or even have an intention to follow rules concerning speed limits.  Ignorance of a rule does not mean that one fails to act in accordance with it.  Likewise, McCain and Rowley claim, one can employ the method of reflective equilibrium without accepting or even being aware of the method being used.  So, Applied Evidentialism does not seem to be a kind of methodism.

McCain and Rowley further argue that Applied Evidentialism does not beg the question by assuming that reflective equilibrium is the correct criterion or method at the outset.  They maintain that this is not to say that one cannot be aware that reflective equilibrium is a good method from the outset.  Rather, they claim that the important point is that Applied Evidentialism does not take the goodness of reflective equilibrium as a starting assumption—perhaps one has the intuition that reflective equilibrium is a good method to employ, perhaps not.  The key, they argue, is that unlike methodism, Applied Evidentialism does not require one to have beliefs about, or even awareness of, reflective equilibrium in order to begin to respond to the Problem of the Criterion.  So, they argue, Applied Evidentialism is not a form of methodism, and thus it does not beg the questions that methodism does.

Even if one accepts that Applied Evidentialism does not beg the question, it may have other problems.  It seems that in order to avoid begging the question Applied Evidentialism requires being able to employ reflective equilibrium in responding to the Problem of the Criterion without needing reasons to think that reflective equilibrium is a good method from the start.  This, however, seems to commit the supporter of Applied Evidentialism to accepting that certain kinds of circular reasoning can provide one with good reasons.  More precisely, if Applied Evidentialism is to avoid being a form of methodism, and the question begging that comes with methodism, then it seems that Applied Evidentialism requires that one can have good reasons to believe the results of employing reflective equilibrium without first having good reasons to accept reflective equilibrium as a good method.  But, this allows for epistemic circularity because it can be the case that the claim that reflective equilibrium is a good method is itself one of the results that is produced in the final equilibrium state.  The heart of this worry is that Applied Evidentialism allows someone to use reflective equilibrium to come to reasonably believe that reflective equilibrium is a good method for determining true propositions.  This is a kind of rule-circularity that occurs when a rule or method is employed to establish that that very rule or method is acceptable.  The status of rule-circularity is contentious.  Several authors argue that it is benign (for example, Braithwaite (1953), Conee (2004), Matheson (2012), Sosa (2009), and Van Cleve (1984)), but others argue that it is vicious circularity (for example, Cling (2003) and Vogel (2008)).  Depending on whether this circularity is benign or vicious, Applied Evidentialism is a promising or problematic response to the Problem of the Criterion (for more on this issue see epistemic circularity).

b. Dissolution

Robert Amico (1988a, 1993, and 1996) offers a very different response to the Problem of the Criterion.  Rather than attempting to solve the Problem of the Criterion, Amico attempts to “dissolve” it.  According to Amico, a philosophical problem is a question that can only be answered theoretically—it cannot be answered by purely empirical investigation.  Further, a philosophical problem is such that there is rational doubt as to the correct answer to the question asked by the problem.  He explains rational doubt as the condition in which withholding belief in a particular answer is the justified doxastic attitude to take.  Since he explicates philosophical problems in terms of rational doubt, and rational doubt is relative to a person, Amico maintains that problems are always relative to particular people.  A particular question poses a problem for someone when that question generates rational doubt for her.

It is because of the role of rational doubt that Amico distinguishes between solutions to problems and dissolutions of problems.  A solution to a problem is a set of true statements that answers the question generating the problem and removes the rational doubt concerning the answer to that question.  Dissolution occurs when the rational doubt is removed, not by an answer to the question, but rather by recognition that it is impossible to adequately answer the question.  For example, Amico claims that the problem of how to square a circle is dissolved as soon as one recognizes that it is impossible to make a circular square.  Once someone sees that it is impossible to make a circular square, the question “How do you square a circle?” does not generate any rational doubt for her.  Without rational doubt, Amico claims, the problem has been dissolved and there is no need to look for a solution.

Like all problems, Amico claims, the Problem of the Criterion is only a problem for a particular person when its question raises rational doubt for her.  When we first consider the questions posed by the Problem of the Criterion, Amico claims, we may have rational doubt about how to answer them in a way that can be justified to the skeptic.  So, we face a problem.  However, Amico argues that consideration of the failure of other responses—in particular their tendency to be question begging—and consideration of the nature of the problem itself allows one to recognize that it is in fact impossible to answer the questions of the Problem of the Criterion in a way that can be justified to the skeptic.  Once one recognizes that it is impossible to answer the skeptic’s questions, Amico alleges, the rational doubt generated by the Problem of the Criterion is removed.  Thus, he claims that the Problem of the Criterion is at that point dissolved.  Since it has been dissolved, we should not be troubled by the Problem of the Criterion at all.

There are three major challenges to Amico’s purported dissolution of the Problem of the Criterion.  The first, as Sharon Ryan (1996) argues, is that it does not seem that the problem has been dissolved, but instead it seems that Amico has simply accepted that the skeptic is correct.  Amico responds by claiming that the skeptical position is not a solution to the problem because that position cannot be justified to the particularist or the methodist.  Since none of the three positions can justify their position to the others, he claims that the problem is dissolved.  It is not clear that this adequately responds to Ryan’s criticism because one might think that claiming that there is no acceptable answer to the questions of the Problem of the Criterion is exactly what the skeptic had in mind all along.

The second major challenge to Amico’s view comes from the various responses to the Problem of the Criterion.  Although he does discuss several responses, Amico does not argue that all of the responses mentioned above fail to provide answers that remove the rational doubt raised by the Problem of the Criterion.  Insofar as one thinks that some of these responses to the Problem of the Criterion provide a solution to the problem, one will rightly be skeptical of Amico’s proffered dissolution.

The third major challenge to Amico’s view arises because he seems to rest his dissolution on what can and cannot be said in response to a skeptic.  Andrew Cling argues that the Problem of the Criterion does not require skeptical interlocutors at all.  Rather, Cling maintains that the difficulty illuminated by the Problem of the Criterion is that anti-skeptics have commitments that seem plausible when considered individually, but they are jointly inconsistent.  The inconsistency among these commitments is present whether or not there are skeptics.  Thus, Cling contends that arguing that the Problem of the Criterion is constituted by questions that cannot be answered does not dissolve the problem; it brings the problem to light.

4. The Problem of the Criterion’s Relation to Other Philosophical Problems

The Problem of the Criterion is a significant philosophical issue in its own right—if Chisholm is correct, it is one of the most fundamental of all philosophical problems.  However, according to many philosophers, there are additional reasons to study this problem.  They claim that the Problem of the Criterion is closely related to several other perennial problems of philosophy.  It is worth briefly noting some of the philosophical problems thought to be closely related to the Problem of the Criterion. 

First, James Van Cleve (1979) and Ernest Sosa (2007) maintain that the Cartesian Circle is in fact just a special instance of the Problem of the Criterion (See Descartes for more on the Cartesian Circle).  Sosa also argues that the problem of easy knowledge is closely related to the Problem of the Criterion—something that Stewart Cohen (2002) and Andrew Cling note as well.  In places Sosa seems to go so far as to suggest that the problem of easy knowledge and the Problem of the Criterion are the same problem. (See epistemic circularity for more on the problem of easy knowledge).

Next, Ruth Weintraub (1995) argues that Hume’s attack on induction is simply a special case of the Problem of the Criterion.  She claims that Hume essentially applies the Problem of the Criterion to induction rather than applying the problem in a general fashion (For more on Humean inductive skepticism see confirmation and induction, epistemology, and Hume: causation).

According to Bryson Brown (2006), the challenge of responding to skepticism about the past is just a version of the Problem of the Criterion.  He claims that debunking Bertrand Russell’s five-minute-old universe hypothesis, for example, involves providing a criterion for trusting memory.  This, he argues, requires satisfactorily responding to the Problem of the Criterion.

Andrew Cling (2009) and (2014) maintains that the Problem of the Criterion and the regress argument for skepticism are closely related.  In fact, he argues that they are both instances of a more general problem that he calls the “paradox of reasons”.  Cling argues that this paradox arises because it seems that it is possible to have reasons for a belief, it seems that reasons themselves must be supported by reasons, and it seems that if an endless sequence of reasons—either in the form of an infinite regress or a circle of reasons—is necessary for having reasons for a belief, then it is impossible to have reasons for a belief.  According to Cling, these three commitments are inconsistent.  The important point for the current purpose is that Cling maintains that the Problem of the Criterion and the regress argument for skepticism are both instances of the paradox of reasons (See infinitism in epistemology for more on regress arguments).

Finally, Howard Sankey (2010, 2011, and 2012) argues that the Problem of the Criterion provides one of the primary arguments, if not the primary argument, in support of epistemic relativism.  Relativists take the Problem of the Criterion to show that it is not possible to provide a justification for choosing one criterion over another.  However, rather than opting for skepticism, which claims that no criterion is justified, relativists respond to the Problem of the Criterion by holding that all criteria are equally rational to adopt—one’s choice is determined simply by the context in which one finds oneself.  Sankey argues that a clear understanding of the Problem of the Criterion is key to responding to the threat of epistemic relativism (For more on epistemic relativism see relativism).

The Problem of the Criterion is a significant philosophical problem in its own right.  However, if these philosophers are correct in claiming that the Problem of the Criterion is related to all of these various philosophical problems in important ways, close study of this problem and its responses could yield far-reaching insights.

5. References and Further Reading

  • Aikin, S.F. “The Problem of the Criterion and a Hegelian Model for Epistemic Infinitism.” History of Philosophy Quarterly 27 (2010): 379-88.
    • Puts forward the view that Hegel proposes what is arguably a coherentist response to the Problem of the Criterion.
  • Amico, R. P. “Reply to Chisholm on the Problem of the Criterion.” Philosophical Papers 17 (1988a): 235-36.
    • Presents a very brief formulation of his dissolution of the Problem of the Criterion.
  • Amico, R. P. “Roderick Chisholm and the Problem of the Criterion.” Philosophical Papers 17 (1988b): 217-29.
    • Argues that Chisholm’s particularist response to the Problem of the Criterion is unsatisfactory.
  • Amico, R. P. The Problem of the Criterion. Lanham, MD: Rowman & Littlefield Publishers, Inc., 1993.
    • The only book-length treatment of the Problem of the Criterion. Includes a helpful discussion of the history of the Problem of the Criterion, critiques of major responses to the Problem of the Criterion, and the full formulation of Amico’s proposed dissolution.
  • Amico, R. P. “Skepticism and the Problem of the Criterion.” In K. G. Lucey (ed.), On Knowing and the Known. Amherst, NY: Prometheus Books, 1996. 132-41.
    • Argues against the skeptical response to the Problem of the Criterion in favor of his dissolution of the problem.
  • Braithwaite, R.B. Scientific Explanation. Cambridge: Cambridge University Press, 1953.
    • Argues that the sort of rule-circularity present in inductive arguments in support of induction is not always vicious.
  • Brown, B. “Skepticism About the Past and the Problem of the Criterion.” Croatian Journal of Philosophy 6 (2006): 291-306.
    • Argues that skepticism about the past is in essence a limited form of the Problem of the Criterion.
  • Chisholm, R.M. Perceiving. Ithaca, NY: Cornell University Press, 1957.
    • Chisholm’s earliest discussion of the Problem of the Criterion appears in this work.
  • Chisholm, R.M.  The Problem of the Criterion. Milwaukee, WI: Marquette University Press, 1973.
    • The Aquinas Lecture on the Problem of the Criterion by one of the most influential epistemologists of the twentieth century. Arguably, this is the most important contemporary work on the Problem of the Criterion.
  • Chisholm, R.M. Theory of Knowledge. Englewood Cliffs, NJ: Prentice Hall, 2nd Edition, 1977; 3rd Edition, 1989.
    • Chisholm’s famous and widely used epistemology textbook; contains brief discussions of the Problem of the Criterion in both of its later editions.
  • Chisholm, R.M. The Foundations of Knowing. Minneapolis, MN: University of Minnesota Press, 1982.
    • Contains a reprint of Chisholm’s 1973 Aquinas Lecture.
  • Chisholm, R.M. “Reply to Amico on the Problem of the Criterion.” Philosophical Papers 17 (1988): 231-34.
    • Responds to Amico’s criticisms of his particularist response to the Problem of the Criterion. Claims that the Problem of the Criterion cannot be solved.
  • Cling, A.D. “Posing the Problem of the Criterion.” Philosophical Studies 75 (1994): 261-92.
    • Argues that there are many more options for responding to the Problem of the Criterion than Chisholm considers.  Presents his coherentist response to the Problem of the Criterion.
  • Cling, A.D. “Epistemic Levels and the Problem of the Criterion.” Philosophical Studies 88 (1997): 109-40.
    • Presents the Problem of the Criterion as an argument for skepticism.  Argues that both Chisholm and Van Cleve fail to solve the problem.
  • Cling, A.D. “Self-Supporting Arguments.” Philosophy and Phenomenological Research 66 (2003): 279-303.
    • Evaluates the strength of self-supporting arguments in deductive and inductive logic.  Argues that rule-circularity is a kind of vicious circularity.
  • Cling, A.D. “Reasons, Regresses, and Tragedy: The Epistemic Regress Problem and the Problem of the Criterion.” American Philosophical Quarterly 46 (2009): 333-46.
    • Argues that the Problem of the Criterion and the regress argument for skepticism are both species of a more general problem, the “paradox of reasons”.
  • Cling, A.D. “The Epistemic Regress Problem, the Problem of the Criterion, and the Value of Reasons.” Metaphilosophy 45 (2014): 161-71.
    • Further develops the idea that the Problem of the Criterion and the regress argument for skepticism are both species of a more general problem, the “paradox of reasons”.  Also, includes a discussion of the kinds of reasons that this problem reveals we can and cannot have.
  • Coffey, P. Epistemology or Theory of Knowledge. London: Longmans, Green, 1917.
    • This work by D.J. Mercier’s pupil is largely responsible for ushering discussion of the Problem of the Criterion into the 20th century.
  • Cohen, S. “Basic Knowledge and the Problem of Easy Knowledge.” Philosophy and Phenomenological Research 65 (2002): 309-29.
    • Presents the problem of easy knowledge and notes its relevance to the Problem of the Criterion.
  • Conee, E. “First Things First.” In E. Conee and R. Feldman, Evidentialism. New York: Oxford University Press, 2004. 11-36.
    • Presents and defends “Applied Evidentialism” as a response to the Problem of the Criterion.
  • DePaul, M. “The Problem of the Criterion and Coherence Methods in Ethics.” Canadian Journal of Philosophy 18 (1988): 67-86.
    • Presents a version of the Problem of the Criterion in terms of moral theories and describes his coherentist response to the Problem of the Criterion.
  • DePaul, M. “Pyrrhonian Moral Skepticism and the Problem of the Criterion.” Philosophical Issues 19 (2009): 38-56.
    • Claims, like Chisholm, that all responses to the Problem of the Criterion—including the skeptical response—beg the question.
  • DePaul, M. “Sosa, Certainty and the Problem of the Criterion.” Philosophical Papers 40 (2011): 287-304.
    • Suggests that Chisholm’s own particularist response to the Problem of the Criterion may have included some subtle methodism. Also, provides a discussion of Sosa’s recent work on the Problem of the Criterion.
  • Fumerton, R. “The Problem of the Criterion.” In J. Greco (ed.), The Oxford Handbook of Skepticism. Oxford: Oxford University Press, 2008. 34-52.
    • Claims there are at least two distinct problems often called the “Problem of the Criterion”.  Also, discusses some responses to the Problem of the Criterion.
  • Greco, J. “Epistemic Circularity: Vicious, Virtuous and Benign.” International Journal for the Study of Skepticism 1 (2011): 1-8.
    • Provides a nice summary of Sosa’s most recent work on the Problem of the Criterion.
  • Hegel, G.W.F. Phenomenology of Spirit. Oxford: Oxford University Press, 1979.
    • Helped draw attention back to the Problem of the Criterion in the 19th century.  Presents the Problem of the Criterion as a crisis for Spirit, and (arguably) proposes a coherentist response to the problem.
  • Lemos, N. Common Sense: A Contemporary Defense. New York: Cambridge University Press, 2004.
    • Defends Chisholm’s particularist response to the Problem of the Criterion.
  • Matheson, J. “Epistemic Relativism.” In A. Cullison (ed.), Continuum Companion to Epistemology. New York: Continuum, 2012. 161-79.
    • Argues against epistemic relativism and offers considerations for thinking that at least some kinds of epistemic circularity are not vicious.
  • Mercier, D.J. Criteriologie, 8th Edition. Paris: Felix Alcan, 1923.
    • Helped draw attention back to the Problem of the Criterion in the 19th century.  Also, Chisholm cites Mercier’s conditions for what a satisfying criterion would have to look like.
  • McCain, K. and Rowley, W. “Pick Your Poison: Beg the Question or Embrace Circularity.” International Journal for the Study of Skepticism (2014): 125-40.
    • Explains why the three responses to the Problem of the Criterion that Chisholm considers each beg the question.  Also, argues that it is possible to respond to the Problem of the Criterion without begging the question, but doing so requires a commitment to certain forms of circularity as epistemically acceptable.
  • Montaigne, M. de. “Apology for Raymond Sebond.” In J. Zeitlin (trans.), Essays of Michael De Montaigne, New York: Knopf, 1935.
    • The Problem of the Criterion appears to have resurfaced in the modern period with this work.
  • Moser, P.K. Knowledge and Evidence. Cambridge: Cambridge University Press, 1989.
    • Presents and defends his explanatory particularist response to the Problem of the Criterion.
  • Popkin, R.H. The History of Scepticism: From Savonarola to Bayle (Revised and Expanded Edition). New York: Oxford University Press, 2003.
    • Discusses the historical development of skepticism. Of particular interest is the discussion of the influence that the Problem of the Criterion had on philosophy during the modern period.
  • Poston, T. “Explanationist Plasticity & The Problem of the Criterion.” Philosophical Papers 40 (2011): 395-419.
    • Defends a coherentist response to the Problem of the Criterion.
  • Rescher, N. The Coherence Theory of Truth. Oxford: Clarendon Press, 1973a.
    • Part of the series of books in which Rescher’s “systems-theoretic approach” to the Problem of the Criterion is developed.
  • Rescher, N. The Primacy of Practice. Oxford: Basil Blackwell, 1973b.
    • Part of the series of books in which Rescher’s “systems-theoretic approach” to the Problem of the Criterion is developed.
  • Rescher, N. Methodological Pragmatism. Oxford: Basil Blackwell, 1977.
    • Part of the series of books in which Rescher’s “systems-theoretic approach” to the Problem of the Criterion is developed.
  • Rescher, N. Scepticism. Totowa, N.J.: Rowman & Littlefield Publishers, 1980.
    • Part of the series of books in which Rescher’s “systems-theoretic approach” to the Problem of the Criterion is developed.
  • Rockmore, T. “Hegel and Epistemological Constructivism.” Idealistic Studies 36 (2006): 183-90.
    • Argues that Hegel proposes a coherentist response to the Problem of the Criterion.
  • Ryan, S. “Reply to Amico on Skepticism and the Problem of the Criterion.” In K. G. Lucey (ed.), On Knowing and the Known. Amherst, NY: Prometheus Books, 1996. 142-48.
    • Argues that Amico’s dissolution of the Problem of the Criterion really amounts to accepting the skeptical response to the Problem of the Criterion.
  • Sankey, H. “Witchcraft, Relativism and the Problem of the Criterion.” Erkenntnis 72 (2010): 1-16.
    • Explores the relationship between epistemic relativism and the Problem of the Criterion.
  • Sankey, H. “Epistemic Relativism and the Problem of the Criterion.” Studies in the History and Philosophy of Science 42 (2011): 562-70.
    • Explores the relationship between epistemic relativism and the Problem of the Criterion.
  • Sankey, H. “Scepticism, Relativism, and the Argument from the Criterion.” Studies in the History and Philosophy of Science 43 (2012): 182-90.
    • Explores the relationship between epistemic relativism and the Problem of the Criterion.
  • Sextus Empiricus. The Skeptic Way: Sextus Empiricus’s Outlines of Pyrrhonism, (trans.) B. Mates. New York: Oxford University Press, 1996.
    • The original presentation of the Problem of the Criterion.
  • Sosa, E. A Virtue Epistemology: Apt Belief and Reflective Knowledge, Volume I. New York: Oxford University Press, 2007.
    • Argues that the Cartesian Circle is a version of the Problem of the Criterion.
  • Sosa, E. Reflective Knowledge: Apt Belief and Reflective Knowledge, Volume II. New York: Oxford University Press, 2009.
    • Develops Sosa’s response to the Problem of the Criterion.  Argues that the problem of easy knowledge is a version of the Problem of the Criterion.
  • Van Cleve, J. “Foundationalism, Epistemic Principles, and the Cartesian Circle.” The Philosophical Review 88 (1979): 55-91.
    • Argues that the Cartesian Circle is simply a special case of the Problem of the Criterion.
  • Van Cleve, J. “Reliability, Justification, and the Problem of Induction.” Midwest Studies in Philosophy 9 (1984): 555-67.
    • Presents an inductive argument in support of induction and argues that the rule-circularity involved in such an argument is not vicious.
  • Van Cleve, J. “Sosa on Easy Knowledge and the Problem of the Criterion.” Philosophical Studies 153 (2011): 19-28.
    • Discusses Sosa’s response to the Problem of the Criterion and the related, according to Sosa, problem of easy knowledge.
  • Vogel, J. “Epistemic Bootstrapping.” Journal of Philosophy 105 (2008): 518-39.
    • Argues that many forms of epistemic circularity are viciously circular.
  • Weintraub, R. “What Was Hume’s Contribution to the Problem of Induction?” Philosophical Quarterly 45 (1995): 460-70.
    • Argues that the problem of induction is simply a special case of the Problem of the Criterion.

 

Author Information

Kevin McCain
Email: mccain@uab.edu
University of Alabama at Birmingham
U. S. A.

Molyneux’s Question

Molyneux’s question, also known as Molyneux’s problem, concerns whether a person born blind, upon being made to see, could immediately identify by sight a shape previously familiar only by touch. William Molyneux first presented this query to John Locke in personal correspondence in 1688. Locke then included the question in the second edition of his An Essay Concerning Human Understanding:

"Suppose a Man born blind, and now adult, and taught by his touch to distinguish between a Cube, and a Sphere of the same metal, and nighly of the same bigness, so as to tell, when he felt one and t’other, which is the Cube, which the Sphere.  Suppose then the Cube and Sphere placed on a Table, and the Blind Man to be made to see.  Quære, Whether by his sight, before he touched them, he could now distinguish, and tell, which is the Globe, which the Cube (Locke 1694/1979)."

Molyneux’s question soon became a fulcrum for early research in the epistemology of concepts, challenging common nativist intuitions about concept acquisition and asking whether sensory features distinguish concepts and how concepts may be applied in novel experiences. The question was reprinted and discussed by a wide range of early modern philosophers, including Gottfried Leibniz, George Berkeley, and Adam Smith, and was perhaps the most important problem in the burgeoning discipline of psychology in the 18th century.

The question has since undergone various stages of development, both as a mental exercise and as an experimental paradigm, garnering a variety of affirmative and negative replies during three centuries of debate and deliberation. Renewed interest has been sparked by recent empirical work on subjects healed of cataracts who failed to identify the shapes at first sight but succeeded when re-tested soon afterward.

Should we answer Molyneux’s question with a “no,” as was the common response of the 18th century, or “yes,” as some philosophers today claim? How should the success of these answers be decided? Is the question theoretical or empirical? Can the question be sufficiently answered by science? What is its philosophical importance?

Table of Contents

  1. A Complex of Questions
  2. Negative Replies
  3. Affirmative Replies
  4. Development as a Thought Experiment
  5. Development as an Empirical Problem
  6. Conclusion
  7. References and Further Reading

1. A Complex of Questions

Molyneux’s question prompts a number of perplexing issues in both the psychology and philosophy of perception. It links these fields of study with a variety of questions:

  • Does sensory experience individuate the senses?
  • Does sensory experience individuate sensory concepts?
  • Are sensory-specific concepts, if there are such, accessible to conscious reflection or perceptual learning in such a way as to make them immediately usable for recognition tasks by other senses?
  • Is our sensory knowledge of the external world indirect?

The first two of these questions represent a central consideration for answering Molyneux’s question with a “no.” In the traditional view, which is much less prominent today (see Macpherson 2011), sensory experience is the principal basis for individuating both the senses and concepts. Berkeley (1709), for instance, held to the strongest form of sensory individuation: the senses are metaphysically distinct, some being portals to spatial dimensions (touch) and others non-spatial (vision). In consequence, he infamously defended a metaphysical individuation or heterogeneity between concepts acquired from different senses: a seen line and a touched line do not together constitute a single longer perceived line, for the two exist on distinct axes of existence.

Others embraced a weak epistemological difference between the senses, and hence between sensory concepts; in this view, translation of their content is possible, but learning their common meaning requires rational thought (Leibniz) or time and experience (Locke). Just as rational exertion or time and experience are needed to train the use of one sense—one must “teach” one's own sight to see 3-D figures when viewing “3-D Magic Eye” photos, for instance—visually presented shapes require a degree of perceptual training before they can be recognized correctly as those previously touched. Others held to a stronger distinction between sensory formats, claiming that they are too incompatible to be translated even though the same “meaning” is shared between them, yielding a strong epistemological difference between sensory concepts (Lotze). At best, such concepts become correlated, but are never known to correspond to the same objects.

If concepts are heterogeneous (weakly, strongly, or metaphysically), then direct knowledge of objects in the external world is problematic. For instance, if our knowledge of a tomato is acquired from vision and later from touch, then these two different presentations of a tomato might amount to two different concepts of tomato altogether, not to mention the soon-to-be smelled and tasted tomato. This invites the possibility that our knowledge of the external world is indirect, being affected by the peculiarities of our sensory organs and processing mechanisms. By contrast, others have argued that because we have direct knowledge of only one tomato, any phenomenal difference between the senses is circumstantial, like an accent rather than a distinct language, a view defended by John Campbell (1996).

The issues prompted by Molyneux’s question lead to a complex of answers in response. Answers can be neatly (perhaps too neatly) categorized into “no” and “yes,” but they are negations and affirmations of different questions, determined by the basis on which these answers are given. Hence, we find some philosophers answering “yes” for one reason, “no” for another, others claiming that there is no possible answer, and yet others claiming that a plurality of answers is agreeable. In addition, a number of proposed modifications, both empirical and theoretical, further attempt to isolate specific queries within the general question. Given its complexity, we should judge the success of Molyneux’s question based not on its answerability but on its productivity.

2. Negative Replies

The two central reasons for answering Molyneux’s question “no” concern the heterogeneous nature of concepts, either as metaphysically or epistemologically distinct, and the involvement of perceptual learning—the inferring of connections in otherwise disparate sensory representations, involving one or multiple senses. Though these reasons are interrelated, as perceptual learning presumes heterogeneity, they are differentiated by each philosopher’s emphasis. These views have evolved in various ways, as new empirical discoveries suggest that unconscious neurological learning processes should be considered separately from conscious “conceptual” processes. The diagram below provides a map of these negative replies.

[Diagram: a map of negative replies to Molyneux’s question]

Molyneux himself stressed the issue of perceptual learning, replying that the felt corner of a cube would not at first appear to the eye in the same way as the seen corner of the cube. Time and experience are the means for acquiring knowledge of the associations between seen and felt properties of shape. Locke agreed with Molyneux’s negative reply, but based his own reasoning on perceptual learning within the sense of sight alone, claiming that sight initially produces primitive sensations later altered by practice; the first appearance of a sphere is as a “circle variously colored” but is judged after time and experience to be a sphere singularly colored. Those considering Locke’s reply have observed that this description of first sight is pure conjecture, as Locke had no access to his own memory of first sight. Others have argued that had two-dimensional shapes been presented to the once-blind, Locke would have replied to Molyneux’s question in the affirmative. But Locke’s example may (like Berkeley’s) express the idea that the primitive visual sensations of a sphere are non-spatial altogether, in which case a two-dimensional shape would not help the once-blind identify the shapes.

The question of first sight and perceptual learning, however, has become an empirical issue of late. Current research on neural plasticity, or how adept our brains are at changing in response to novel information, informs Shaun Gallagher’s negative reply. He considers the numerous reports of subjects who fail to recognize shapes after their cataracts are removed, and attributes their recognitional inability to the significant deterioration of their visual cortex. Gallagher’s reply is based on a degenerative case of perceptual learning within the visual sense; with disuse the faculty of vision “unlearns” its ability to see. By way of contrast, Marjolein Degenaar argues from this same set of data that cataract surgeries show Molyneux’s question not to be testable; she concludes that there is “no answer” to Molyneux’s question. She follows Julien Offray de La Mettrie, who contests the applicability of the cataract operations to Molyneux’s question because of the physiological distress involved, but Degenaar adds that no other experimental paradigm would be better suited to testing Molyneux’s question.

Perceptual learning, however, is ineffective if sensory formats are thought to be completely distinct. Berkeley’s argument for the heterogeneity of the senses was based on observations that vision is a non-spatial sense: retinal images are inverted, double images can be generated from two eyes, and distance is inaccessible to sight. Sight depends on correlations with touch so that the body can use sight to interact with spatial features of objects. (Followers of this line of thought, such as Comte de Buffon, misread these ideas as entailing that infants and the once-blind initially see the world as inverted, doubled, and without distance. Because of this, the basis of Buffon’s own negative reply falls prey to the “psychologist’s fallacy” that we actually see our retinal images, images known only by an anatomy lesson.)

Consideration of the non-spatiality of the sensations produced by the senses at first sight led Étienne Bonnot de Condillac to retract an earlier affirmative reply. He based his newfound intuition on imagining himself as an eyeless statue, confined to mere tactile knowledge of objects, able to understand the size, distance, and orientation of distant objects only by use of rigid sticks that, when crossed like a drafting compass, provided him with the information to calculate their true size/distance ratios. Postulating that sight alone would independently produce the same knowledge (by rays of light replacing the tactile sticks to calculate the size, distance, and shape of objects around him), he could account for common meaning between the senses. With his emphasis on the heterogeneity of the senses, Condillac claimed that the once-blind would be entirely confused by the initial visual appearance of color patch sensations. In 1887, Hermann Lotze argued that all sensations are non-spatial but soon are correlated with learned “local signs”—representations of behavioral interactions with the spatial features of the external world. Lotze maintained that it takes much time and effort to learn to perceive a spatial world. Since the blind perceive space by touch alone, which is a very different mode than sight, the local signs of the once-blind would be inapplicable to both the new visual sensations and the local signs that they would later produce, leading Lotze to a negative reply to Molyneux’s question.
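
The crossed-sticks idea rests on elementary triangle geometry. The following minimal sketch (in Python; the function and all numbers are hypothetical illustrations, not Condillac’s own reckoning) computes the width an object must have to subtend a given angle at a given distance, the kind of size/distance ratio the statue’s sticks, or the rays of light replacing them, are imagined to supply:

    import math

    # A minimal sketch of the similar-triangles reasoning behind the crossed
    # sticks; the function and all numbers are illustrative, not Condillac's own.
    def object_size(distance, angle_rad):
        """Return the width subtended by an angle at a known distance."""
        return 2 * distance * math.tan(angle_rad / 2)

    # Two sticks (or two rays of light) crossing at 10 degrees, aimed at the
    # edges of an object 5 units away, fix its width at roughly 0.87 units.
    print(round(object_size(5.0, math.radians(10.0)), 2))  # 0.87

Whether the inputs come from sticks or from rays of light, the same ratio is recovered, which is how Condillac hoped to secure a common meaning between the senses.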

3. Affirmative Replies

The viability of an affirmative reply to Molyneux’s question is a recent development, spurred largely by a mix of supportive studies in developmental science and neuroscience, and the increased popularity of direct realism in epistemology. Some philosophers are on record as answering on nativist grounds, holding either that an inborn spatial schema is required for integrated sensory perception or that inborn mechanisms are necessary for matching otherwise heterogeneous concepts. Common sensibles, representations that are not tied to any sensory format, also ground an important response strategy, but replies of this kind vary with respect to the kind of commonality achieved—whether common concepts of shape properties, common behavioral responses to shapes, or common concepts of shapes. Finally, the use of geometry is viewed as crucial to how the once-blind reason about, and directly perceive, the external world. These categories of reply are presented in the diagram below.

[Diagram: categories of affirmative replies to Molyneux’s question]

Though Immanuel Kant himself never explicitly considered Molyneux’s question in writing, his contemporaries considered the question an important test for Kant’s theory that a unified spatial organization is a prerequisite for having any perceptual experience. It seems at first that Kant’s view predicts that the visual perception of the once-blind is ordered in the same way as their tactile perception of the shapes—that is, that both experiences employ the same spatial concepts. In speculating about an affirmative reply, however, Kant’s contemporaries failed to consider that further time and experience might be required to coordinate the unification of visual and tactile perception. For more discussion, see Sassen (2004).

Rather than employing a conceptual commonality, Adam Smith argued in his essay “On the External Senses” that inborn mechanisms automatically generate correlations between touch and the other senses. Like the rules of perspective that visual artists employ to render depth in paintings, this mechanism must utilize innately known rules for purposes of recognition, a mechanism Smith called “instinctive suggestion.” In a similar vein, Jesse Prinz argued that innately synchronized processing of heterogeneous visual and tactile content creates “convergence zones,” areas that bind input from multiple sense-specific cortices and project the bound input back to these processors in recall tasks, such that each sense-specific representation gets activated at the same time.

Not all philosophers view the heterogeneity of the senses as entailing a heterogeneous conceptual repertoire. Edward Synge, an acquaintance of Molyneux, presented his own affirmative reply, which hinged on distinguishing between “images,” which are heterogeneous in the mind, and “notions” or “ideas,” which allow those at first sight to cognize common features in the tactile and visual images, such as the smooth surface of the sphere and the cornered appearance of the cube. Judith Jarvis Thomson imagines a similar answer, but her argument involves an indirect strategy; she argues that, metaphysically, there is no possible world in which the properties of felt cubes would appear to sight as the properties of spheres, a possibility she took to be entailed by a negative answer, all things being equal.

Gareth Evans’ influential paper of 1985 considered the proposal that the commonality between seeing and touching shapes had to be an ability to egocentrically localize the parts of a shape—to know where the parts of a shape are with respect to the subject’s locus of action. In other words, for Evans we perceive shape by where we find the parts of shapes to be in egocentric space. To perceive a square, for instance, there must be an internal representation of a corner that is felt “to the right” and then, along the edge, “to the left,” another corner felt “down,” and then another “to the right.” Our perception of a felt square must be based on the same egocentric relations gained from perception of the seen square. Since the same egocentric relations must be used, our perception of the square is the same for touch and sight. This intuition provides a behavioral basis for an affirmative answer to Molyneux’s question, one which Alva Noë takes as a test case for his enactivist account of perception: the claim that we perceive with our bodily activity, not with our brain.
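
Evans’ proposal can be illustrated with a small sketch (a hypothetical rendering in Python, not Evans’ own formalism): a shape is represented as the set of egocentric locations of its parts, and the same profile can in principle be filled in by either touch or sight.

    # Egocentric locations as (x, y) offsets from the perceiver: x > 0 is
    # "to the right", y > 0 is "up". All particular numbers are hypothetical.

    # Corners located by tracing the object's edges with a hand...
    felt_square = {(0.1, 0.1), (0.3, 0.1), (0.3, 0.3), (0.1, 0.3)}

    # ...and corners located by looking at the same object.
    seen_square = {(0.1, 0.1), (0.3, 0.1), (0.3, 0.3), (0.1, 0.3)}

    # One and the same egocentric profile is delivered by both senses, which
    # is why recognition can transfer immediately between them.
    print(felt_square == seen_square)  # True

The sketch is only a toy, but it captures why, on Evans’ proposal, a shape profile built up by touch is already in the right format for vision to use.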

Francis Hutcheson’s affirmative reply in 1727 to Molyneux’s question was based on the existence of homogeneous shape concepts that are unconnected to the senses by which they are acquired. Shapes can be perceptually characterized in a number of ways, such as bounded colors or collections of textures. However, shape concepts are themselves distinct from these sensory representations. Hutcheson demonstrated this intuition with a set of creative (though by no means persuasive) thought experiments: would a blind, paralytic (and presumably deaf) man, whose sense of reality is based on smell, understand number and its application in geometrical reasoning? Would a blind man unacquainted with the feel of a stringed instrument be able to derive its musical scale upon only hearing its sounds? Hutcheson’s intuitions suggested a “yes” answer to both of these scenarios, and by extension, an affirmative reply to Molyneux’s question.

Like Synge, Leibniz argued for a distinction between images and ideas, claiming that the latter are homogeneous across the senses because of their geometrical content. The importance of rational thought is emphasized by Leibniz’s modifications that the newly sighted be told the names of the shapes presented to them and be acquainted with their new visual experiences so as to be able to apply geometrical inferences to them.

Thomas Reid agreed with Locke that at first sight, shapes would look flat, appearing like a painting lacking perspective. This led him to claim that the once-blind could immediately recognize two-dimensional shapes. If, however, the shapes of seen objects have volume—are three-dimensional—they would look different from how they felt, and would be unrecognizable. But since these differences can be represented geometrically, a blind mathematician would be able to calculate which visible shapes correlate with which tactile shapes. For more discussion, see Van Cleve (2007).

John Campbell argued that since our senses achieve a direct relationship to the external world, sensory experience must parallel external features rather than those of particular sense modalities. In particular, the geometrical properties of objects constitute one’s sensory experience. This guarantees that the perception of shape by sight and touch will be uniform in structure because perception provides one with a direct, unmediated relationship to objects in the world. If the external object itself provides the character of the experience, then experiences of objects by sight and touch must have the same character, resulting in the once-blind’s immediate recognition of the shapes at first sight.

4. Development as a Thought Experiment

The diversity of answers to Molyneux’s question further indicates a lack of specificity in the question and the need to carefully articulate the precise issues of interest within it. This pressure has provoked a number of philosophers to retool the question and control its variables in an attempt to isolate specific issues. These developments can be organized around changes made to three crucial features of the experiment: simplifying the three-dimensional shape stimuli, stipulating aspects of the subject engaged in the task, and modifying the experimental procedure.

Furthermore, this focused treatment follows the more traditional view that Molyneux’s question is a thought experiment, though many philosophers invoke experimental paradigms in posing the query. These developments are represented in the diagram below.

[Diagram: developments of Molyneux’s question as a thought experiment]

Denis Diderot argued in 1749 that if the stimulus shapes were simplified from three-dimensional cubes and spheres to two-dimensional squares and circles, an affirmative answer would not require accounting for Locke’s preoccupation with perceptual learning within the sense of sight. Gareth Evans suggested a further simplification to account for the possibility that the organs of sight might themselves remain ineffective: the once-blind subject may view four internal visual points (phosphenes) configured in a square shape generated by direct neural stimulation of the visual cortex. Evans’ development has generated an entirely new approach to the problem of making Molyneux’s question amenable to empirical experimentation, one that has produced results that are favorable (though not conclusive) to his affirmative reply.

James Van Cleve developed a less invasive strategy for testing the once-blind. He suggested using a single raised Braille dot in proximity to a pair of raised dots for visual presentation to the once-blind, who should then be able to immediately identify which is the single and which the paired dots. This strategy of simplifying shapes, however, comes at the cost of decreasing the amount of available information for recognizing the shapes, increasing the ambiguity of which shape is being represented for the subject. This is a problem that future modifications may address.

Subject Stipulation

Given the ambiguity problem of simplifying shapes, it may seem ironic that Diderot also took the level of intellectual aptitude of the subject to be determinative of recognitional ability. A “dullard”—presumably a subject with cognitive disabilities—would not identify two-dimensional shapes, whereas subjects with normal cognitive ability would, though they would be unable to give reasons for their judgments and would be generally uncertain of them. By contrast, a “metaphysician” trained in philosophy would recognize the shapes with certainty but would be unable to articulate common features of the seen and touched shapes. A “geometer” would not only have certainty in his identification but also knowledge of the geometrical features common to sight and touch. Thomas Reid took kindly to this latter stipulation, and in 1764 added detail that included the precise mathematical strategies the geometer might employ.

Physical constraints have also been suggested for Molyneux’s hypothetical subject. Condillac’s “statue” modification required that, when considering Molyneux’s question, each sense modality be deployed one at a time, in isolation from previous or future sensory experience. This helped express Condillac’s intuition that each mode of sensation contributes to one’s sense of the spatiality of the external world, though the sensations of each sense are entirely distinct. H. P. Grice, by contrast, imagined sensory organs entirely alien to humans; by describing the unique experiences of color-type properties that these organs would produce, he was able to demonstrate just how unfamiliar colors are to the blind, and thereby the distinctiveness of experience for the once-blind at first sight.

Gallagher’s concern with the neurophysiological differences between the once-blind and always-sighted led him to suggest a hypothetical Molyneux subject with no neural degeneration from blindness, so as to control for a central variable: all visually deprived subjects face neural deterioration of visual processing centers. Such a subject would be similar to an infant, another subject suggested for inclusion by Gallagher and anticipated by Adam Smith, but distinct in that an infant’s neural organization would be primed for sensory integration whereas the hypothetical subject would be neutral in this regard.

Procedural Modification

Though developments to the stimuli and subjects directly influence how the test itself proceeds, independent procedural developments are worth noting for their ability to constrain features left ambiguous by Molyneux’s own rendering of the question. Six years prior to the publication of his famous question, Molyneux raised a related query, “Whether he Could know by his sight, before he stretchd out his Hand, whether he Could not Reach them, tho they were Removed 20 or 1000 feet from him?” (Locke 1688/1978). Molyneux presumably would have also answered this distance variant of his question negatively. More importantly, the variant helps to qualify Molyneux’s popularized question by indicating that the presentation of shapes is proximate to the subject. Another implicit feature of the question was expressed by Leibniz, who stipulated that the once-blind should be told the names of the shapes being presented for recognition. This gives the subject a hint, so that recognizing the shapes is merely a matter of determining which shape is which, and it also indicates that the stimuli are not simply paintings, but real bodies accessible to touch.

Leibniz also added an epistemic condition that the once-blind be allowed a familiarity with the experience of sight and an inferential ability on par with the normally sighted. These additional constraints would prevent circumstantial factors from affecting the test; the once-blind would not be “dazzled and confused by the strangeness” of seeing. A further constraint by Van Cleve emphasized this worry by advocating that the primary stimuli be the visual appearances of the shapes rather than the presented shapes themselves. This condition controls for the possibility that the processing of visual information is systematically distorted such that, to the once-blind, shape corners appear smooth and smooth sides appear cornered.

Janet Levin recommended a further modification regarding the temporal immediacy with which the once-blind must be tested. We need not satisfy Molyneux’s requirement that the identification of the shapes occur “at first sight” if we can otherwise establish “epistemic immediacy”—knowing something without applying inference or other rational strategies. Levin suggested that epistemic immediacy might be assured by using shape stimuli that are more similar to one another than a cube and sphere. Distinguishing a square from a square-like shape with convex sides may control for simple inferences, a control that offers novel prospects for future modifications.

5. Development as an Empirical Problem

Experimental considerations of Molyneux’s question quickly followed its publication. However, as no immediate cure of blindness has been forthcoming, two provisos are required to make the question more amenable to empirical investigation: sight may be achieved by a slow process of visual restoration, and subjects need not be congenitally blind adult males. With these provisos in place, three central experimental developments have been employed: ocular and neural surgery, adaptation to Sensory Substitution Devices, and developmental experiments on infants. The diagram below charts these developments by kind.

[Diagram: empirical developments of Molyneux’s question]

Surgery

Thirty-six years after the publication of Molyneux’s question, the English surgeon William Cheselden published a report of his successful cataract operation that was so persuasive that George Berkeley considered it confirmation of his negative reply, as did many French philosophers, such as Voltaire. Cheselden’s young subject, who was only partially blind (he was able to distinguish night from day), was not able to recognize objects at first sight, though he knew them by touch. Similar experiments throughout the 18th, 19th, and 20th centuries confirmed Cheselden’s findings for many scientists.

Twenty-first century research reveals more nuanced results. Visual deprivation results in deterioration of the visual cortex. (For instance, one carefully studied patient who was blind for 40 years until undergoing cataract surgery at the age of 43 was able to appreciate the distance and size of objects after about five months of recovery, but remained unable to recognize people by their facial features or to appreciate depth in, for example, line drawings of cubes. See Fine, 2003.) This indicates that the areas of the brain dedicated to processing some spatial information remain in a deteriorated state, and that, therefore, analysis of the experiences of individuals who once had cataracts may be less relevant to the query posed by Molyneux’s question, which concerns the nature of ideas acquired by sensory perception rather than the separate issue of visual impairment.

Held et al. (2011) re-tooled the cataract paradigm by giving newly sighted subjects a second chance to identify shapes a few days after their initial failed tests; in the second test, each subject succeeded. The authors draw the more nuanced conclusion of “initially no but subsequently yes.” In other words, visual deprivation causes transfer failure rather than preventing the creation of cross-modal representations: “The rapidity of acquisition suggests that the neuronal substrates responsible for cross-modal interaction might already be in place before they become behaviorally manifest” (Held et al. 2011: 552). Their summary conclusion is that the neuronal structure for cross-modal transfer is available, but not utilizable, due to its degenerated state caused by visual deprivation. This modified cataract paradigm supports an affirmative reply if one’s concern is cross-modal transfer. However, if one’s interest in the question concerns the effects of long-term visual deprivation, the modified paradigm supports a negative answer.

Cataract surgery is not the only surgical paradigm that has been applied to Molyneux’s question. Evans suggested using visual prostheses to directly stimulate the visual cortex, or areas along lower visual pathways such as the optic nerve and retina. This invasive technique has the novel and shocking result of producing "phosphenes"—lightning-like flashes produced in the mind's eye. Blind subjects reportedly are able to spatially organize phosphenes, recognizing motion and simple shapes. After significant training, they are even able to integrate these mental percepts into their behavior: they can localize, identify, and even grasp the corresponding tactile objects presented to them. Such techniques, however, have yet to undergo clinical trial and so remain merely suggestive of an affirmative reply.

A related theoretical observation concerns whether areas of the brain functionally reserved for processing information from one sense modality, like touch, can process information from another, like sight. Mriganka Sur found that surgically rerouting information from the retina of ferrets to both their auditory cortex and somatosensory cortex elicited responses in both when the subject was visually stimulated. This provides evidence for “crossmodal plasticity,” the claim that senses are functionally organized to process certain kinds of information, such as spatially or temporally organized stimuli, rather than organized solely by inborn connections to sensory organs. Crossmodal plasticity is also supported by the observation that when blind subjects process auditory information, the visual cortex is active; this suggests that cortical rewiring is a natural occurrence. Further support comes from the phenomenon known as “synesthesia,” in which perception by one sensory modality includes the experiential character of another sensory modality—where, for instance, one “hears” colors. Surgical research influenced by Molyneux’s question has significantly advanced our understanding of both the long-term negative effects of sensory deprivation and the cortical plasticity of the brain, allowing for improved visual restoration of the once-blind.

Sensory Substitution Devices (SSDs)

Bach-y-Rita’s invention of a device created to simulate sensory experiences of one sensory modality in another has generated a number of experiments related to Molyneux’s question. One such device, the “BrainPort,” transfers visual information from a mobile camera to an electrode array placed on the tongue. Using BrainPorts, blind subjects are able to recognize objects from a distance by the electric stimulations that they feel on their tongue. Aside from anecdotal reports from SSD users, who say that after practice with the BrainPort they no longer feel the stimulation on their tongue but rather simply “see” the objects before them, there is evidence that when congenitally blind individuals use the device, areas in the brain reserved for visual processing are recruited. In the same study, a control group of sighted subjects was not found to have activation in the visual cortex after practiced use of the device, a result that provides intriguing insight into crossmodal plasticity. The use of SSDs seems to be a kind of Molyneux experiment, one that shows that the blind might recognize tactually familiar shapes by using an augmentation device (see Reich et al., 2012). The use of these devices, however, is an experimental analogue to Molyneux’s question only to the degree that the device presents information visually—an unlikely claim, as SSDs more closely approximate an extension of the sensory modality being used.

Developmental Science

Newborn infants offer a unique window into the development of sensory concepts of shape. Like Molyneux’s question, Andrew Meltzoff’s imitation studies involve testing whether stimuli familiar to touch (such as, in his research, the feel of one’s own facial expressions, or facial proprioception) transfer to recognition of what is seen at first sight. He demonstrated that infants a few weeks old imitate another person’s facial expressions, such as tongue protrusion and mouth opening, evidence suggestive of an affirmative reply. These results, however, have been contested by further research that has not been able to replicate Meltzoff’s findings. In another experiment testing Molyneux’s question, Meltzoff showed that oral tactile familiarity with pacifier textures, whether “bumpy” or “smooth,” influences visual recognition of these shapes, again suggesting an affirmative reply. Infants who were orally habituated to the feel of a bumpy pacifier attended to the visually presented bumpy shape more often than to the smoothly textured pacifier, and vice-versa.

These results are consistent with Arlette Streri’s experiments, which involved habituating newborns to the feel of shapes in their right hand while preventing them from seeing the shapes. Both shapes were then presented visually to the infants while the length and number of their gazes were recorded. The shapes that were not held were looked at longer and more often, suggesting that shape concepts acquired by touch were transferred to sight. A control group of infants who were not tactually habituated to shapes looked at the visually presented shapes for equal amounts of time, suggesting that prior tactile experience guided the infants’ attention. These results are consistent with an affirmative reply to Molyneux’s question.

6. Conclusion

A philosopher’s muse, Molyneux’s question continues to inspire insight into the mind and its contents. The prospect of an empirical solution remains just beyond the reach of the cognitive sciences, stretching their methodologies while extending the question’s application to novel experimental paradigms. The philosophical rewards of such future work promise to be as rich as those of the past and present.

7. References and Further Reading

  • Berkeley, George. 1709/1975. “An Essay Toward a New Theory of Vision.” Philosophical Works, Including the Works on Vision. (Edited by Michael R. Ayers.) London: J. M. Dent.
    • Defends a negative answer to Molyneux’s question in sections 132-159.
  • Berman, D. 1974/2009. “Francis Hutcheson on Berkeley and the Molyneux Problem.” Berkeley and Irish Philosophy. New York: Continuum Books: 138–148.
    • Interprets Hutcheson’s affirmative answer to Molyneux’s question.
  • Bruno, M., Mandelbaum, E. 2010. “Locke’s Answer to Molyneux’s Thought Experiment.” History of Philosophy Quarterly 27: 165–180.
    • Interprets Locke’s negative answer to Molyneux’s question.
  • Buffon, C. 1749/1971. De l’homme. (Edited by M. Duchet.) Paris: Maspero.
    • Defends a negative answer to Molyneux’s question in the chapter, “Du sens de la vue.”
  • Campbell, J. 1996. “Molyneux’s Question.” Perception: Philosophical Issues 7. (Edited by Enrique Villanueva.) Atascadero, California: Ridgeview Publishing Company: 301–318.
    • Defends direct realism with an affirmation of Molyneux’s question.
  • Cheselden, W. 1728. “An Account of Some Observations Made by a Young Gentleman, Who Was Born Blind, or Lost His Sight so Early, That He Had no Remembrance of Ever Having Seen, and Was Couch’d between 13 and 14 Years of Age.” Philosophical Transactions 35: 447–50.
    • Presents a famous case of a patient healed of cataracts.
  • Condillac, E. B. 1754/1930. Condillac's Treatise on the Sensations. (Translated by Geraldine Carr.) London: Favil Press.
    • Defends a negative answer to Molyneux’s question using the famous statue example.
  • Degenaar, M. 1996. Molyneux's Problem: Three Centuries of Discussion on the Perception of Forms. (Translated by Michael J. Collins) Boston: Kluwer Academic Publishers.
    • Summarizes a history of answers to Molyneux’s Question.
  • Diderot, D. 1749/1972. “Letter on the Blind for the Use of Those who See.” Diderot's Early Philosophical Works. (Translated by Margaret Jourdain.) New York: Burt Franklin.
    • Recounts the life of Nicolas Saunderson, a blind mathematician, with a variety of proposed changes to Molyneux’s question.
  • Evans, G. 1985. “Molyneux’s Question.” Collected Papers. (Edited by John McDowell.) New York: Oxford UP.
    • Defends an affirmative answer to Molyneux’s question.
  • Fine, I., Wade, A., Brewer, A., May, M., Goodman, D., Boynton, G., Wandell, B., MacLeod, D. 2003. “Long-term Deprivation Affects Visual Perception and Cortex.” Nature Neuroscience 6 (9): 909–10.
    • Describes long-term effects of blindness after sight is restored by cataract surgery.
  • Gallagher, S. 2005. “Neurons and Neonates: Reflections on the Molyneux Problem.” How the Body Shapes the Mind. Oxford: Clarendon Press: 153–172.
    • Defends an affirmative answer to Molyneux’s question.
  • Glenney, B. 2013. “Philosophical Problems, Cluster Concepts and the Many Lives of Molyneux’s Question.” Biology and Philosophy 28 (3): 541–558. DOI 10.1007/s10539-012-9355-x.
    • Defends a pluralist answer to Molyneux’s question.
  • Glenney, B. 2012. “Leibniz on Molyneux’s Question.” History of Philosophy Quarterly 29 (3): 247–264.
    • Interprets Leibniz’s affirmative answer to Molyneux’s question.
  • Glenney, B. 2011. “Adam Smith and the Problem of the External World.” Journal of Scottish Philosophy 9 (2): 205–223.
    • Interprets Smith’s affirmative answer to Molyneux’s question.
  • Grice, H. P. 1962/2011. “Some Remarks About the Senses.” The Senses: Classical and Contemporary Philosophical Perspectives. New York: Oxford UP: 83–100.
    • Defends the individuation of the senses based on experience.
  • Held, R., Ostrovsky, Y., deGelder, B., Gandhi, T., Ganesh, S., Mathur, U., and Sinha, P. 2011. “The Newly Sighted Fail to Match Seen with Felt.” Nature Neuroscience 14: 551–553.
    • Describes a new paradigm for testing visual identification of tactilely-familiar shapes by subjects recently cured of cataracts.
  • Levin, J. 2008. “Molyneux’s Question and the Individuation of Perceptual Concepts.” Philosophical Studies 139 (1): 1–28.
    • Defends the claim that the same concepts are deployed when seeing and touching shapes.
  • Liu, Z., Kersten, D., Knill, D. 1999. “Dissociating Stimulus Information from Internal Representation—A Case Study in Object Recognition.” Vision Research 39: 603–612.
    • Describes a modification to Molyneux’s question using phosphenes with results that support an affirmative answer.
  • Locke, J. 1688/1978. The Correspondence of John Locke 3. (Edited by E. S. De Beer.) Oxford: Clarendon Press: 482–3.
    • Molyneux first poses his question in Letter 1064 (7 July, 1688).
  • Locke, J. 1694/1979. An Essay Concerning Human Understanding. (Edited by Peter H. Nidditch.) Oxford: Clarendon Press.
    • Defends a negative answer to Molyneux’s question in II.ix.
  • Lotze, H. 1887. Metaphysic. (Edited by Bernard Bosanquet.) Oxford: Clarendon Press. Vol. II.
    • Defends a negative answer to Molyneux’s question.
  • Macpherson, F. 2011. The Senses: Classical and Contemporary Philosophical Perspectives. New York: Oxford University Press.
    • Presents theories on how the senses are to be individuated.
  • Meltzoff, A. N. 1993. “Molyneux’s Babies: Cross-modal Perception, Imitation, and the Mind of the Preverbal Infant.” Spatial Representation. Cambridge: Blackwell: 219–235.
    • Describes visual identification of tactilely familiar shapes by infants.
  • Morgan, M. J. 1977. Molyneux’s Question: Vision, Touch, and the Philosophy of Perception. New York: Cambridge UP.
    • Summarizes a history of answers to Molyneux’s Question.
  • Noë, A. 2004. Action in Perception. Cambridge, Massachusetts: MIT Press.
    • Defends an affirmative answer to Molyneux’s question.
  • Prinz, J. 2002. Furnishing the Mind: Concepts and Their Perceptual Basis. Cambridge, Massachusetts: MIT Press.
    • Defends an affirmative answer to Molyneux’s question in Chapter 5.
  • Reich, L., Maidenbaum, S., and Amedi, A. 2012. “The Brain as a Flexible Task Machine: Implications for Visual Rehabilitation Using Noninvasive vs. Invasive Approaches.” Current Opinion in Neurology 25 1: 86–95.
    • Describes uses and implications of Sensory Substitution Devices.
  • Sassen, B. 2004. “Kant on Molyneux’s Problem.” British Journal for the History of Philosophy 12 3: 471–485.
    • Interprets Kant’s possible answer to Molyneux’s question.
  • Streri, A., Gentaz, E. 2003. “Cross-modal Recognition of Shape from Hand to Eyes and Handedness in Human Newborns.” Somatosensory & Motor Research 20 1: 11–16.
    • Describes visual identification of tactilely-familiar shapes by infants.
  • Sur, M., Pallas, S., Roe, A. 1990. “Cross-modal Plasticity in Cortical Development: Differentiation and Specification of Sensory Neocortex.” Trends in Neurosciences 13 6: 227–233.
    • Describes an experiment re-routing sensory information from vision to audition and touch in ferrets.
  • Thomson, J. 1974. “Molyneux's Problem.” The Journal of Philosophy 71 18: 637–650.
    • Defends an affirmative answer to Molyneux’s question.
  • Van Cleve, J. 2007. “Reid’s Answer to Molyneux’s Question.” The Monist 90 2: 251–270.
    • Interprets Reid’s answer to Molyneux’s question.


Author Information

Brian Glenney
Email: brian.glenney@gordon.edu
Gordon College
U. S. A.

The Computational Theory of Mind

The Computational Theory of Mind (CTM) claims that the mind is a computer, so the theory is also known as computationalism. It is generally assumed that CTM is the main working hypothesis of cognitive science.

CTM is often understood as a specific variant of the Representational Theory of Mind (RTM), which claims that cognition is the manipulation of representations. The most popular variant of CTM, classical CTM (or simply CTM without any qualification), is related to the Language of Thought Hypothesis (LOTH), which has been forcefully defended by Jerry Fodor. However, there are several other computational accounts of the mind that either reject LOTH—notably connectionism and several accounts in contemporary computational neuroscience—or do not subscribe to RTM at all. In addition, some authors explicitly disentangle the question of whether the mind is computational from the question of whether it manipulates representations. It seems that there is no inconsistency in maintaining that cognition requires computation without subscribing to representationalism, although most proponents of CTM agree that the account of cognition in terms of computation over representation is the most cogent. (But this need not mean that representation is reducible to computation.)

One of the basic philosophical arguments for CTM is that it can make clear how thought and content are causally relevant in the physical world. It does this by saying that thoughts are syntactic entities that are computed over: their form makes them causally relevant in just the same way that the form of fragments of source code makes them causally relevant in a computer. This basic argument may be made more specific in various ways. For example, Allen Newell couched it in terms of the physical symbol system hypothesis, according to which being a physical symbol system (a physical computer) is a necessary and sufficient condition of thinking. John Haugeland framed the claim in formalist terms: if you take care of the syntax, the semantics will take care of itself. Daniel Dennett, in a slightly different vein, claims that while semantic engines are impossible, syntactic engines can approximate them quite satisfactorily.

This article focuses only on specific problems with the Computational Theory of Mind (CTM), while for the most part leaving RTM aside. There are four main sections. In the first section, the three most important variants of CTM are introduced: classical CTM, connectionism, and computational neuroscience. The second section discusses the most important conceptions of computational explanation in cognitive science: functionalism and mechanism. The third section introduces the skeptical arguments against CTM raised by Hilary Putnam and John Searle, and presents several accounts of implementation (or physical realization) of computation. Common objections to CTM are listed in the fourth section.

Table of Contents

  1. Variants of Computationalism
    1. Classical CTM
    2. Connectionism
    3. Computational Neuroscience
  2. Computational Explanation
    1. Functionalism
    2. Mechanism
  3. Implementation
    1. Putnam and Searle against CTM
    2. Semantic Account
    3. Causal Account
    4. Mechanistic Account
  4. Other objections to CTM
  5. Conclusion
  6. References and Further Reading

1. Variants of Computationalism

The generic claim that the mind is a computer may be understood in various ways, depending on how the basic terms are understood. In particular, some theorists claim that only cognition is computation, while emotional processes are not computational (Harnish 2002, 6), and some do not explain motor or sensory processes in computational terms (Newell and Simon 1972). These differences are relatively minor compared to the variety of ways in which “computation” is understood.

The main question here is just how much of the mind’s functioning is computational. The crux of this question comes with trying to understand exactly what computation is. In its most generic reading, computation is equated with information processing; but in stronger versions, it is explicated in terms of digital effective computation, which is assumed in the classical version of CTM; in some other versions, analog or hybrid computation is admissible. Although Alan Turing defined effective computation using his notion of a machine (later called a ‘Turing machine’, see section 1.a below), there is a lively debate in the philosophy of mathematics as to whether all physical computation is Turing-equivalent. Even if all currently known mathematical theories of effective computation (for example, lambda calculus, Markov algorithms, and partial recursive functions) turn out to be equivalent to Turing-machine computation, it is an open question whether they are adequate formalizations of the intuitive notion of computation. Some theorists, for example, claim that it is physically possible that hypercomputational processes (that is, processes that compute functions that a Turing machine cannot compute) exist (Copeland 2004). For this reason, the assumption, frequently made in debates over computationalism, that CTM has to assume Turing computation is controversial.

One can distinguish several basic kinds of computation, such as digital, analog, and hybrid. These kinds are traditionally associated with the most popular variants of CTM: classical CTM assumes digital computation; connectionism may also involve analog computation; and several theories in computational neuroscience assume hybrid analog/digital processing.

a. Classical CTM

Classical CTM is understood as the conjunction of RTM (and, in particular, LOTH) and the claim that cognition is digital effective computation. The best-known account of digital, effective computation was given by Alan Turing in terms of abstract machines (which were originally intended to be conceptual tools rather than physical entities, though sometimes they are built physically simply for fun). Such abstract machines can only do what a human computer would do mechanically, given a potentially indefinite amount of paper, a pencil, and a list of rote rules. More specifically, a Turing machine (TM) has at least one tape, on which symbols from a finite alphabet can appear; the tape is read, written, and erased by a machine head, which can also move left or right along the tape. The functioning of the machine is described by the machine table instructions, which include five pieces of information: (1) the current state of the TM; (2) the symbol read from the tape; (3) the symbol written on the tape; (4) left or right movement of the head; (5) the next state of the TM. The machine table has to be finite; the number of states is also finite. In contrast, the length of tape is potentially unbounded.

As it turns out, all known effective algorithms (that is, halting algorithms, which necessarily end their functioning with the expected result) can be encoded as a list of instructions for a Turing machine. For example, a basic Turing machine can be built to perform logical negation of the input propositional letter. The alphabet may consist of all 26 Latin letters, a blank symbol, and a tilde. Now, the machine table instructions need to specify the following operations: if the head scanner is at the tilde, erase the tilde (this effectively realizes the double negation rule); if the head scanner is at the letter and the state of the machine is not “1”, move the head left and change the state of the machine to 1; if the state is “1” and the head is at the blank symbol, write the tilde. (Note: this list of instructions is vastly simplified for presentation purposes. In reality, it would be necessary to rewrite symbols on the tape when inserting the tilde and to decide when to stop operation; based on the current list, the machine would simply cycle infinitely.) Writing Turing machine programs is actually rather time-consuming and useful only for purely theoretical purposes, but all other digital effective computational formalisms are essentially similar in requiring (1) a finite number of different symbols in what corresponds to a Turing machine alphabet (digitality); and (2) a finite number of steps from the beginning to the end of operation (effectiveness). (Correspondingly, one can introduce hypercomputation by positing an infinite number of symbols in the alphabet, an infinite number of states or steps in the operation, or by introducing randomness in the execution of operations.) Note that digitality is not equivalent to binary code; it is just technologically easier to produce physical systems responsive to two states rather than ten. Early computers operated, for example, on decimal code rather than binary code (Von Neumann 1958).
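
To make the machine-table format concrete, here is a minimal Python sketch (not from the original article) of a Turing machine simulator running a completed version of the negation machine just described; the state names and the restriction to the single letter ‘p’ are illustrative simplifications.

    # A minimal Turing machine simulator (a sketch). The machine table maps
    # (state, scanned symbol) to (symbol to write, head movement, next state),
    # mirroring the five pieces of information listed above.
    def run_tm(table, tape, state, halt_states, max_steps=1000):
        tape = dict(enumerate(tape))   # sparse tape; blanks are ' '
        head = 0
        for _ in range(max_steps):
            if state in halt_states:
                break
            symbol = tape.get(head, ' ')
            write, move, state = table[(state, symbol)]
            tape[head] = write
            head += 1 if move == 'R' else -1
        return ''.join(tape[i] for i in sorted(tape)).strip()

    # A toy machine that negates the propositional letter 'p': it erases a
    # leading tilde if present (double negation), otherwise prepends one.
    # A full machine would need one such entry per letter of the alphabet.
    table = {
        ('start', '~'): (' ', 'R', 'halt'),   # erase tilde: ~p becomes p
        ('start', 'p'): ('p', 'L', 'mark'),   # no tilde: move left of the letter
        ('mark',  ' '): ('~', 'R', 'halt'),   # write the tilde and stop
    }

    print(run_tm(table, '~p', 'start', {'halt'}))  # prints: p
    print(run_tm(table, 'p',  'start', {'halt'}))  # prints: ~p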

There is a particularly important variant of the Turing machine, which played a seminal role in justifying CTM: the universal Turing machine. A Turing machine is a formally defined, mathematical entity; hence, it has a unique description, which can identify a given TM. Since we can encode these descriptions on the tape of another TM, they can be operated upon, and one can make these operations conform to the definition of the first TM. This way, a TM that has the encoding of any other TM on its input tape will act accordingly, and will faithfully simulate the other TM. Such a machine is called universal. The notion of universality is very important in the mathematical theory of computability, as the universal TM is hypothesized to be able to compute all effectively computable mathematical functions. In addition, the idea of using a description of a TM to determine the functioning of another TM gave rise to the idea of programmable computers. At the same time, flexibility is supposed to be the hallmark of general intelligence, and many theorists supposed that this flexibility can be explained by universality (Newell 1980). This gave the universal TM a special role in CTM, one that motivated an analogy between the mind and the computer: both were supposed to solve problems whose nature cannot be exactly predicted (Apter 1970).

These points notwithstanding, the analogy between the universal TM and the mind is not necessary to prove classical CTM true. For example, it may turn out that human memory is essentially much more bounded than the tape of the TM. In addition, the significance of the TM in modeling cognition is not obvious: the universal TM was never used directly to write computational models of cognitive tasks, and its role may be seen as merely instrumental in analyzing the computational complexity of algorithms posited to explain these tasks. Some theorists question whether anything at all hinges upon the notion of equivalence between the mind’s information-processing capabilities and the Turing machine (Sloman 1996): CTM may leave open the question whether all physical computation is Turing-equivalent, or it might even embrace hypercomputation.

The first digital model of the mind was (probably) presented by Warren McCulloch and Walter Pitts (1943), who suggested that the operation of the brain’s neurons essentially corresponds to logical connectives (in other words, neurons were equated with what later came to be called ‘logical gates’, the basic building blocks of contemporary digital integrated circuits). In philosophy, the first avowal of CTM is usually linked with Hilary Putnam (1960), even though that paper does not explicitly assert that the mind is equivalent to a Turing machine but rather uses the concept to defend Putnam’s functionalism. The classical CTM also became influential in early cognitive science (Miller, Galanter, and Pribram 1967).
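
The McCulloch-Pitts idea can be illustrated with a few lines of Python (a sketch, not their original notation): a unit fires when the weighted sum of its binary inputs reaches a threshold, and suitable choices of weights and thresholds yield the standard logical connectives.

    # A sketch of a McCulloch-Pitts unit: the neuron outputs 1 exactly when
    # the weighted sum of its binary inputs reaches the threshold.
    def mp_neuron(inputs, weights, threshold):
        return int(sum(i * w for i, w in zip(inputs, weights)) >= threshold)

    # Logical gates as threshold settings on such units:
    AND = lambda a, b: mp_neuron([a, b], [1, 1], 2)
    OR  = lambda a, b: mp_neuron([a, b], [1, 1], 1)
    NOT = lambda a:    mp_neuron([a],    [-1],   0)

    assert AND(1, 1) == 1 and AND(1, 0) == 0
    assert OR(0, 1)  == 1 and OR(0, 0)  == 0
    assert NOT(0)    == 1 and NOT(1)    == 0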

In 1975, Jerry Fodor linked CTM with LOTH. He argued that cognitive representations are tokens of the Language of Thought and that the mind is a digital computer that operates on these tokens. Fodor’s forceful defense of LOTH and CTM as inextricably linked prompted many cognitive scientists and philosophers to equate LOTH and CTM. In Fodor’s version, CTM furnishes psychology with the proper means for dealing with the question of how thought, framed in terms of propositional attitudes, is possible. Propositional attitudes are understood as relations of the cognitive agent to the tokens in its LOT, and the operations on these tokens are syntactic, or computational. In other words, the symbols of LOT are transformed by computational rules, which are usually supposed to be inferential. For this reason, classical CTM is also dubbed symbolic CTM, and the existence of symbol transformation rules is supposed to be a feature of this approach. However, the very notion of the symbol is used differently by various authors: some mean entities equivalent to symbols on the tape of the TM, some think of physically distinguishable states, as in Newell’s physical symbol system hypothesis (Newell’s symbols, roughly speaking, point to the values of some variables), whereas others frame them as tokens in LOT. For this reason, major confusion over the notion of the symbol is prevalent in current debate (Steels 2008).

The most compelling case for classical CTM can be made by showing its aptitude for dealing with abstract thinking, rational reasoning, and language processing. For example, Fodor argued that the productivity of language (the capacity to produce indefinitely many different sentences) can be explained only by compositionality, and compositionality is a feature of rich symbol systems similar to natural language. (Another argument is related to systematicity; see (Aizawa 2003).) Classical systems, such as production systems, excel in simulating human performance in logical and mathematical domains. Production systems contain production rules, which are, roughly speaking, rules of the form “if condition X is satisfied, do Y”; usually there are thousands of concurrently active rules in a production system (for more information on production systems, see (Newell 1990; Anderson 1983)).
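
A toy sketch of the idea in Python follows (the rule names and facts are invented for illustration; real architectures such as Newell’s have sophisticated matching and conflict-resolution mechanisms): rules fire against a working memory of facts until no rule adds anything new.

    # A minimal production-system sketch: each rule pairs a condition on the
    # working memory with the facts it adds when it fires.
    rules = [
        ("modus-ponens-1", lambda f: "P" in f and "P->Q" in f, {"Q"}),
        ("modus-ponens-2", lambda f: "Q" in f and "Q->R" in f, {"R"}),
    ]

    def run(facts):
        facts = set(facts)
        fired = True
        while fired:                      # cycle until quiescence
            fired = False
            for name, condition, additions in rules:
                if condition(facts) and not additions <= facts:
                    facts |= additions    # fire: add the rule's conclusions
                    fired = True
        return facts

    print(run({"P", "P->Q", "Q->R"}))     # derives Q, then R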

In his later writings, however, Fodor (2001) argued that only peripheral (that is, mostly perceptual and modular) processes are computational, in contradistinction to central cognitive processes, which, owing to their holism, cannot be explained computationally (or in any other way, really). This pessimism about classical CTM seems to contrast with the successes of the classical approach in its traditional domains.

Classical CTM is silent about the neural realization of symbol systems, and for this reason it has been criticized by connectionists as biologically implausible. For example, Miller et al. (1967) supposed that there is a specific cognitive level which is best described as corresponding to reasoning and thinking, rather than to any lower-level neural processing. Similar claims have been framed in terms of an analogy between the software/hardware distinction and the mind/brain distinction. Critics stress that the analogy is relatively weak, and neurally quite implausible. In addition, perceptual and motor functioning does not seem to fit the symbolic paradigm of cognitive science.

b. Connectionism

In contrast to classical CTM, connectionism is usually presented as a more biologically plausible variant of computation. Although some artificial neural networks (ANNs) are vastly idealized (for an evaluation of the neural plausibility of typical ANNs, see (Bechtel and Abrahamsen 2002, sec. 2.3)), many researchers consider them to be much more realistic than rule-based production systems. Connectionist systems do well in modeling perceptual and motor processes, which are much harder to model symbolically.

Some early ANNs are clearly digital (for example, the early proposal of McCulloch and Pitts, see section 1.a above, is both a neural network and a digital system), while some modern networks are supposed to be analog. In particular, the connection weights are continuous values, and even if these networks are usually simulated on digital computers, they are supposed to implement analog computation. Here an interesting epistemological problem is evident: because all measurement is of finite precision, we cannot ever be sure whether the measured value is actually continuous or discrete. The discreteness may just be a feature of the measuring apparatus. For this reason, continuous values are always theoretically posited rather than empirically discovered, as there is no way to empirically decide whether a given value is actually discrete or not. Having said that, there might be compelling reasons in some domains of science to assume that measured values should be mathematically described as real numbers, rather than approximated digitally. (Note that a Turing machine cannot compute all real numbers, though it can approximate any computable real number to any desired degree; relatedly, the Nyquist-Shannon sampling theorem shows that a band-limited continuous signal can be exactly reconstructed from discrete samples.)

Importantly, the relationship between connectionism and RTM is more debatable than in the case of classical CTM. Some proponents of connectionist models are anti-representationalists or eliminativists: the notion of representation, according to them, can be discarded in connectionist cognitive science. Others claim that the mention of representation in connectionism is at best honorific (for an extended argument, see (Ramsey 2007)). Nevertheless, the position that connectionist networks are representational as a whole, by being homomorphic to their subject domain, has been forcefully defended (O’Brien and Opie 2006; O’Brien and Opie 2009). It seems that there are important and serious differences among various connectionist models in the way that they explain cognition.

In simpler models, the nodes of artificial neural networks may be treated as atomic representations (for example, as individual concepts); they are usually called ‘localist’ or ‘symbolic’ for that very reason. However, these representations represent only by fiat: it is the modeler who decides what they represent. For this reason, they do not seem to be biologically plausible, though some might argue that, at least in principle, individual neurons may represent complex features: in biological brains, so-called grandmother cells do exactly that (Bowers 2009; Gross 2002; Konorski 1967). More complex connectionist models do not represent individual representations as individual nodes; instead, the representation is distributed over multiple nodes that may be activated to different degrees. These models may plausibly implement the prototype theory of concepts (Wittgenstein 1953; Rosch and Mervis 1975). Distributed representation seems, therefore, to be much more biologically and psychologically plausible for proponents of the prototype theory (though this theory is also debated; see (Machery 2009) for a critical review of theories of concepts in psychology).
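
The contrast can be sketched in a few lines of Python with NumPy (the feature names and activation values are invented for illustration): a concept is a pattern of activation over many units, and graded similarity between concepts falls out of overlapping patterns, as prototype theory suggests.

    import numpy as np

    # Each concept is a vector of activations over four feature units
    # (a distributed code), rather than one dedicated localist node.
    features = ["has_wings", "flies", "has_fur", "barks"]   # unit labels
    robin   = np.array([1.0, 1.0, 0.0, 0.0])
    penguin = np.array([1.0, 0.2, 0.0, 0.0])   # wings, but barely flies
    dog     = np.array([0.0, 0.0, 1.0, 1.0])

    def similarity(a, b):
        # cosine similarity between activation patterns
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    print(similarity(robin, penguin))  # high: overlapping patterns
    print(similarity(robin, dog))      # zero: disjoint patterns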

The proponents of classical CTM have objected to connectionism by pointing out that distributed representations do not seem to explain productivity and systematicity of cognition, as these representations are not compositional (Fodor and Pylyshyn 1988). Fodor and Pylyshyn present connectionists with the following dilemma: If representations in ANNs are compositional, then ANNs are mere implementations of classical systems; if not, they are not plausible models of higher cognition. Obviously, both horns of the dilemma are unattractive for connectionism. This has sparked a lively debate. (For a review, see Connectionism and (Bechtel and Abrahamsen 2002, chap. 6)). In short, some reject the premise that higher cognition is actually as systematic and productive as Fodor and Pylyshyn assume, while others defend the view that implementing a compositional symbolic system by an ANN does not simply render it uninteresting technical gadgetry, because further aspects of cognitive processes can be explained this way.

In contemporary cognitive modeling, ANNs have become standard tools (see, for example, (Lewandowsky and Farrell 2011)). They are also prevalent in computational neuroscience, but there are some important hybrid digital/analog systems in the latter discipline that deserve separate treatment.

c. Computational Neuroscience

Computational neuroscience employs many diverse methods, and it is hard to find modeling techniques applicable to a wide range of task domains. Yet it has been argued that, in general, computation in the brain is neither completely analog nor completely digital (Piccinini and Bahar 2013). This is because neurons, on the one hand, seem to be digital, since they spike only when the input signal exceeds a certain threshold (hence, the continuous input value becomes discrete), but, on the other hand, their spiking forms continuous patterns in time. For this reason, it is customary to describe the functioning of spiking neurons both as dynamical systems, which means that they are represented in terms of continuous parameters evolving in time in a multi-dimensional space (the mathematical representation takes the form of differential equations in this case), and as networks of information-processing elements (usually in a way similar to connectionism). Hybrid analog/digital systems are also often postulated as situated in different parts of the brain. For example, the prefrontal cortex is said to manifest bi-stable behavior and gating (O’Reilly 2006), which is typical of digital systems.
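
This hybrid character can be illustrated with a leaky integrate-and-fire neuron, a textbook simplification (the Python sketch below, with illustrative parameter values, is not tied to any particular model discussed here): the membrane potential evolves according to a continuous differential equation, while the spike itself is a discrete, all-or-none event.

    # Leaky integrate-and-fire sketch: continuous dynamics, discrete spikes.
    dt, tau, v_thresh, v_reset = 0.001, 0.02, 1.0, 0.0

    def simulate(input_current, steps=1000):
        v, spike_times = 0.0, []
        for t in range(steps):
            v += dt / tau * (input_current - v)   # leaky integration (Euler step)
            if v >= v_thresh:                     # threshold crossing: discrete event
                spike_times.append(t * dt)
                v = v_reset
        return spike_times

    print(len(simulate(1.5)))  # spike count for a constant suprathreshold input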

Unifying frameworks in computational neuroscience are relatively rare. Of special interest might be the Bayesian brain theory and the Neural Engineering Framework (Eliasmith and Anderson 2003). The Bayesian brain theory has become one of the major theories of brain functioning; here it is assumed that the brain’s main function is to predict probable outcomes (for example, causes of sensory stimulation) based on its earlier sensory input. One major theory of this kind is the free-energy theory (Friston, Kilner, and Harrison 2006; Friston and Kiebel 2011). This theory presupposes that the brain uses hierarchical predictive coding, which is an efficient way to deal with probabilistic reasoning (which is known to be computationally hard; this is one of the major criticisms of this approach, and it may even turn out that predictive coding is not Bayesian at all, compare (Blokpoel, Kwisthout, and Van Rooij 2012)). Predictive coding (also called predictive processing) is thought by Andy Clark to be a unifying theory of the brain (Clark 2013), on which brains predict future sensory input (or its causes) in a top-down fashion and minimize the error of such predictions either by changing predictions about sensory input or by acting upon the world. However, as critics of this line of research have noted, such predictive coding models lack plausible neural implementation (usually they lack any implementation and remain sketchy, compare (Rasmussen and Eliasmith 2013)). Some suggest that this lack of implementation is true of Bayesian models in general (Jones and Love 2011).
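
The core error-minimization move of predictive coding can be sketched in Python (a toy, single-level estimator with invented values; real models are hierarchical and probabilistic): a unit predicts its input, computes the prediction error, and revises its estimate to reduce that error.

    # Toy predictive-coding step: revise the prediction by a fraction of the error.
    def predictive_update(estimate, sensory_input, learning_rate=0.1):
        error = sensory_input - estimate        # prediction error
        return estimate + learning_rate * error # error-driven revision

    estimate = 0.0
    for sample in [1.0, 1.0, 1.2, 0.9, 1.1]:    # noisy input around 1.0
        estimate = predictive_update(estimate, sample)
    print(estimate)   # drifts toward the mean of the input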

The Neural Engineering Framework (NEF) differs from the predictive brain approach in two respects: it does not posit a single function for the brain, and it offers detailed, biologically plausible models of cognitive capacities. A recent version (Eliasmith 2013) features the world’s largest functional brain model. The main principles of the NEF are: (1) neural representations are understood as combinations of nonlinear encoding and optimal linear decoding (this includes temporal and population representations); (2) transformations of neural representations are functions of variables represented by a population; and (3) neural dynamics are described with neural representations as control-theoretic state variables. (‘Transformation’ is the term used for what would traditionally be called computation.) The NEF models are at the same time representational, computational, and dynamical, and they use control theory (which is mathematically equivalent to dynamical systems theory). Of special interest is that the NEF enables the building of plausible architectures that tackle symbolic problems. For example, a 2.5-million-neuron model of the brain (called ‘Spaun’) has been built, which is able to perform eight diverse tasks (Eliasmith et al. 2012). Spaun features so-called semantic pointers, which can be seen as elements of a compressed neural vector space, and which enable the execution of higher-cognition tasks. At the same time, the NEF models are usually less idealized than classical CTM models, and they do not presuppose that the brain is as systematic and compositional as Fodor and Pylyshyn claim. The NEF models deliver the required performance without positing an architecture that is entirely reducible to a classical production system.
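
The first NEF principle, nonlinear encoding with optimal linear decoding, can be sketched in Python with NumPy (the tuning curves, parameter ranges, and least-squares decoder below are illustrative simplifications of the framework, not its canonical formulation): a scalar is encoded into the firing rates of a population and recovered by linear weights fit over a range of stimuli.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 50
    encoders = rng.choice([-1.0, 1.0], size=n)   # preferred directions
    gains = rng.uniform(0.5, 2.0, size=n)
    biases = rng.uniform(-1.0, 1.0, size=n)

    def rates(x):
        # nonlinear (rectified-linear) encoding of the scalar x
        return np.maximum(0.0, gains * encoders * x + biases)

    xs = np.linspace(-1, 1, 100)
    A = np.array([rates(x) for x in xs])         # population activity matrix
    decoders, *_ = np.linalg.lstsq(A, xs, rcond=None)  # optimal linear decoder

    x_hat = rates(0.3) @ decoders
    print(x_hat)   # approximately 0.3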

2. Computational Explanation

The main aim of computational modeling in cognitive science is to explain and predict mental phenomena. (In neuroscience and psychiatry, therapeutic intervention is another major aim of the inquiry.) There are two main competing theories of computational explanation: functionalism, in particular David Marr’s account, and mechanism. Although some argue for the Deductive-Nomological account in cognitive science, especially proponents of dynamicism (Walmsley 2008), the dynamical models in question are contrasted with computational ones. What is more, the relation between mechanistic and dynamical explanation is a matter of lively debate (Zednik 2011; Kaplan and Craver 2011; Kaplan and Bechtel 2011).

a. Functionalism

One of the most prominent views of functional explanation (for a general overview see Causal Theories of Functional Explanation) was developed by Robert Cummins (Cummins 1975; Cummins 1983; Cummins 2000). Cummins rejects the idea that explanation in psychology is subsumption under a law. For him, psychology and other special sciences are interested in various effects, understood as exercises of various capacities. A given capacity is to be analyzed functionally, by decomposing it into a number of less problematic capacities, or dispositions, that jointly manifest themselves as the effect in question. In cognitive science and psychology, this joint manifestation is best understood in terms of flowcharts or computer programs. Cummins claims that computational explanations are just top-down explanations of a system’s capacity.

A specific problem with Cummins’ account is that the explanation is considered to be correct if the dispositions are merely sufficient for the joint manifestation of the effect to be displayed. For example, a computer program that has the same output as a human subject, given the same input, is held to be explanatory of the subject’s performance. This seems problematic, given that computer simulations have been traditionally evaluated not only at the level of their inputs and outputs (in which case they would be merely ‘weakly equivalent’ in Fodor’s terminology, see (Fodor 1968)), but also at the level of the process that transforms the input data into the output data (in which case they are ‘strongly equivalent’ and genuinely explanatory, according to Fodor). Note, by analogy, that an atomic bomb would have been sufficient to kill U.S. President John F. Kennedy, but this fact does not explain his actual assassination. In short, critics of functional explanation stress that it is too liberal and that it should require causal relevance as well. They argue that functional analyses devoid of causal relevance are in the best case incomplete, and in the worst case explanatorily irrelevant (Piccinini and Craver 2011).

One way to make the functional account more robust is to introduce a hierarchy of explanatory levels. In the context of cognitive science, the most influential proposal for such a hierarchy comes from David Marr (1982), who proposes a three-leveled model of explanation. This model introduces several additional constraints that have since been widely accepted in modeling practice. In particular, Marr argued that the complete explanation of a computational system should feature the following levels: (1) The computational level; (2) the level of representation and algorithm; and (3) the level of hardware implementation.

At the computational level, the modeler is supposed to ask what operations the system performs and why it performs them. Interestingly, the term Marr proposed for this level has proved confusing to some. For this reason, it is usually characterized in semantic terms, such as knowledge or representation, but this may also be somewhat misleading. At this level, the modeler is supposed to assume that a device performs a task by carrying out a series of operations. She needs to identify the task in question and justify her explanatory strategy by ensuring that her specification mirrors the performance of the machine, and that the performance is appropriate in the given environment. Marrian “computation” refers to computational tasks and not to the manipulation of particular semantic representations. No wonder, then, that other terms for this level have been put forth to prevent misunderstanding, perhaps the most appropriate of which is Sterelny’s (1990) “ecological level.” Sterelny makes it clear that the justification of why the task is performed includes the relevant physical conditions of the machine’s environment.

The level of representation and algorithm concerns the following questions: How can the computational task be performed? What is the representation of the input and output? And what is the algorithm for the transformation? The focus is on the formal features of the representation (which are required to develop an algorithm in a programming language) rather than on whether the inputs really represent anything. The algorithm is correct when it performs the specified task, given the same input as the computational system in question. The distinction between the computational level and the level of representation and algorithm amounts to the difference between what and how (Marr 1982, 28).

The level of hardware implementation refers to the physical machinery realizing the computation; in neuroscience, of course, this will be the brain. Marr’s methodological account is based on his own modeling in computational neuroscience, but stresses the relative autonomy of the levels, which are also levels of realization. There are multiple realizations of a given task (see Mind and Multiple Realizability), so Marr endorses the classical functionalist claim of relative autonomy of levels, which is supposed to underwrite antireductionism (Fodor 1974). Most functionalists subsequently embraced Marr’s levels as well (for example, Zenon Pylyshyn (1984) and Daniel Dennett (1987)).

Although Marr introduces more constraints than Cummins, because he requires the description of three different levels of realization, his theory also suffers from the abovementioned problems. That is, it does not require the causal relevance of the algorithm and representation level; sufficiency is all that is required. Moreover, it remains relatively unclear why exactly there are three, and not, say, five levels in the proper explanation (note that some philosophers proposed the introduction of intermediary levels). For these reasons, mechanists have criticized Marr’s approach (Miłkowski 2013).

b. Mechanism

According to mechanism, to explain a phenomenon is to explain its underlying mechanism. Mechanistic explanation is a species of causal explanation, and explaining a mechanism involves the discovery of its causal structure. While mechanisms are defined variously, the core idea is that they are organized systems, comprising causally relevant component parts and operations (or activities) thereof (Bechtel 2008; Craver 2007; Glennan 2002; Machamer, Darden, and Craver 2000). Parts of the mechanism interact and their orchestrated operation contributes to the capacity of the mechanism. Mechanistic explanations abound in special sciences, and it is hoped that an adequate description of the principles implied in explanations (those that are generally accepted as sound) will also furnish researchers with normative guidance. The idea that computational explanation is best understood as mechanistic has been defended by (Piccinini 2007b; Piccinini 2008) and (Miłkowski 2013). It is closely linked to causal accounts of computational explanation, too (Chalmers 2011).

Constitutive mechanistic explanation is the dominant form of computational explanation in cognitive science. This kind of explanation includes at least three levels of mechanism: a constitutive (-1) level, which is the lowest level in the given analysis; an isolated (0) level, where the parts of the mechanism are specified, along with their interactions (activities or operations); and the contextual (+1) level, where the function of the mechanism is seen in a broader context (for example, the context for human vision includes lighting conditions). In contrast to how Marr (1982) or Dennett (1987) understand them, levels here are not just levels of abstraction; they are levels of composition. They are tightly integrated, but not entirely reducible to the lowest level.

Computational models explain how the computational capacity of a mechanism is generated by the orchestrated operation of its component parts. To say that a mechanism implements a computation is to claim that the causal organization of the mechanism is such that the input and output information streams are causally linked and that this link, along with the specific structure of information processing, is completely described. Note that the link is sometimes cyclical and can be very complex.

In some respects, the mechanistic account of computational explanation may be viewed as a causally-constrained version of functional explanation. Developments in the theory of mechanistic explanation, which is now one of the most active fields in the philosophy of science, make it, however, much more sensitive to the actual scientific practice of modelers.

3. Implementation

One of the most difficult questions for proponents of CTM is how to determine whether a given physical system is an implementation of a formal computation. Note that computer science does not offer any theory of implementation, and the intuitive view that one can decide whether a system implements a computation by finding a one-to-one correspondence between physical states and the states of a computation may lead to serious problems. In what follows, I will sketch out some objections to the objectivity of the notion of computation, formulated by John Searle and Hilary Putnam, and examine various answers to their objections.

a. Putnam and Searle against CTM

Putnam and Searle’s objection may be summarized as follows. There is nothing objective about physical computation; computation is ascribed to physical systems by human observers merely for convenience. For this reason, there are no genuine computational explanations. Needless to say, such an objection invalidates most research that has been done in cognitive science.

In particular, Putnam (1991, 121–125) has constructed a proof that any open physical system implements any finite automaton (which is a model of computation that has lower computational power than a Turing machine; note that the proof can be easily extended to Turing machines as well). The purpose of Putnam’s argument is to demonstrate that functionalism, were it true, would imply behaviorism; for functionalism, the internal structure is completely irrelevant to deciding what function is actually realized. The idea of the proof is as follows. Any physical system has at least one state. This state obtains for some time, and the duration can be measured by an external clock. By an appeal to the clock, one can identify as many states as one wishes, especially if the states can be constructed by set-theoretic operations (or their logical equivalent, which is the disjunction operator). For this reason, one can always find as many states in the physical system as the finite machine requires (it has, after all, a finite number of states). Also, its evolution in time may be easily mapped onto a physical system thanks to disjunctions and the clock. For this reason, there is nothing explanatory about the notion of computation.
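
The structure of the construction can be made vivid with a short Python sketch (the state labels are invented): given any clocked sequence of physical states and any desired automaton run of the same length, a “realization” can be read off mechanically by collecting, for each automaton state, the disjunction of the physical states that happen to occupy its slots. Nothing about the physics constrains the mapping, which is exactly Putnam’s point.

    # Clock-marked physical states and an arbitrary finite-automaton run.
    physical_run  = ["p1", "p2", "p3", "p4", "p5", "p6"]
    automaton_run = ["A",  "B",  "A",  "C",  "B",  "A"]

    # Each automaton state is "realized" by the disjunction (here, the set)
    # of whichever physical states occur when it is supposed to occur.
    realization = {}
    for p, a in zip(physical_run, automaton_run):
        realization.setdefault(a, set()).add(p)

    print(realization)
    # e.g. {'A': {'p1', 'p3', 'p6'}, 'B': {'p2', 'p5'}, 'C': {'p4'}}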

Searle’s argument is similar. He argues that being a digital computer is a matter of ascribing 0s and 1s to a physical system, and that for any program and any sufficiently complex object there is a description of the object under which it realizes the program (Searle 1992, 207–208). On this view, even an ordinary wall would be a computer. In essence, both objections make the point that, given enough freedom, one can always map physical states (whose number can be adjusted by logical means or by simply making more measurements) onto the formal system. If we talk of both systems in terms of sets, then all that matters is the cardinality of both sets (in essence, these arguments are similar to the objection once made against Russell’s structuralism, compare (Newman 1928)). As the arguments are similar, the replies to these objections usually address both at the same time, and try to limit the admissible ways of carving physical reality. The view is that reality should somehow be carved at its joints, and then made to correspond with the formal model.

b. Semantic Account

The semantic account of implementation is by far the most popular among philosophers. It simply requires that there is no computation without representation (Fodor 1975). But the semantic account seems to beg the question, given that some computational models require no representation, notably in connectionism. Besides, other objections to CTM (in particular the arguments based on the Chinese Room thought experiment) question the assumption that computer programs ever represent anything by themselves. For this reason, at least in this debate, one can only assume that programs represent just because they are ascribed meaning by external observers. But in such a case, the observer may just as easily ascribe meaning to a wall. Thus, the semantic account has no resources to deal with these objections.

I do not mean to suggest that the semantic account is completely wrong; indeed, the intuitive appeal of CTM is based on its close links with RTM. Yet the assumption that computation always represents has been repeatedly questioned (Fresco 2010; Piccinini 2006; Miłkowski 2013). For example, it seems that an ordinary logical gate (the computational entity that corresponds to a logical connective), for example an AND gate, does not represent anything. At least, it does not seem to refer to anything. Yet it is a simple computational device.

c. Causal Account

The causal account requires that the physical states taken to correspond to the mathematical description of computation be causally linked (Chalmers 2011). This means that there have to be counterfactual dependencies to satisfy (this requirement was proposed by (Copeland 1996), though without requiring that the states be causally relevant) and that the methodological principles of causal explanation have to be followed. These include theoretical parsimony (already used by Fodor in the constraints of his semantic account of computation) and the causal Markov condition. In particular, states that are not related causally, be it in Searle’s wall or Putnam’s logical constructs, are automatically discarded.

There are two open questions for the causal account, however. First, for any causal system, there will be a corresponding computational description. This means that even if it is no longer true that all physical systems implement all possible computations, they still implement at least one computation (if there are multiple causal models of a given system, the number of corresponding computations of course grows). Causal theorists usually bite the bullet by replying that this does not make computational explanation void; it just allows a weak form of pancomputationalism (which is the claim that everything is computational (Müller 2009; Piccinini 2007a)). The second question is how the boundaries of causal systems are to be drawn. Should we try to model a computer’s distal causes (including the operations at the production site of its electronic components) in the causal model brought into correspondence with the formal model of computation? This seems absurd, but there is no explicit reply to this problem in the causal account.

d. Mechanistic Account

The mechanistic account is a specific version of the causal account, defended by Piccinini and Miłkowski. The first move made by both is to take into account only functional mechanisms, which excludes weak pancomputationalism. (The requirement that the systems should have the function, in some robust sense, of computing has also been defended by other authors, compare (Lycan 1987; Sterelny 1990).) Another move is to argue that computational systems should be understood as multi-level systems, which fits naturally with the mechanistic account of computational explanation. Note that mechanists in the philosophy of science have already faced the difficult question of how to draw a boundary around systems, for example by including only components constitutively relevant to the capacity of the mechanism; compare (Craver 2007). For this reason, the mechanistic account is supposed to deliver a satisfactory approach to delineating computational mechanisms from their environment.

Another specific feature of the mechanistic account of computation is that it makes clear how the formal account of computation corresponds to the physical mechanism. Namely, the isolated level of the mechanism (level 0, see section 2.b above) is supposed to be described by a mechanistically adequate model of computation. The description of the model usually comprises two parts: (1) an abstract specification of a computation, which should include all the causally relevant variables (a formal model of the mechanism); and (2) a complete blueprint of the mechanism at this level of its organization.

Even if one remains skeptical about causation or physical mechanisms, Putnam and Searle’s objections can be rejected in the mechanistic account of implementation, to the extent that these theoretical posits are admissible in special sciences. What is clear from this discussion is that implementation is not a matter of any simple mapping but of satisfying a number of additional constraints usually required by causal modeling in science.

4. Other objections to CTM

The objection discussed in section 3 is by no means the only objection discussed in philosophy, but it is special because of its potential to completely trivialize CTM. Another very influential objection against CTM (and against the very possibility of creating genuine artificial intelligence) stems from Searle’s Chinese Room thought experiment. The debate over this thought experiment is, at best, inconclusive, so it does not show that CTM is doomed (for more discussion of the Chinese Room, see also (Preston and Bishop 2002)). Similarly, all arguments that purport to show that artificial intelligence (AI) is in principle impossible seem equally unconvincing, even if at some point in time they seemed cogent with respect to some domains of human competence (for example, for a long time it was thought that decent machine translation is impossible; it was even argued that funding research into machine speech recognition is morally wrong, compare (Weizenbaum 1976, 176)). The relationship between AI and CTM is complex: even if non-human AI is impossible, it does not follow that CTM is wrong, as it may turn out that only biologically-inspired AI is possible.

One group of objections against CTM focuses on its alleged reliance on the claim that cognition should be explained merely in terms of computation. This motivates, for example, claims that CTM ignores emotional or bodily processes (see Embodied Cognition). Such claims are, however, unsubstantiated: while proponents of CTM have more often than not neglected emotions (though even early computer simulations focused on motivation and emotion; compare (Tomkins and Messick 1963; Colby and Gilbert 1964; Loehlin 1968)) or embodiment, this neglect is not at the core of their claims. Furthermore, according to the most successful theories of implementation, both causal and mechanistic, a physical computation always has properties that are over and above its computational features. It is these physical features that make the computation possible in the first place, and ignoring them (for example, ignoring the physical constitution of neurons) simply leaves the implementation unexplained. For this reason, it seems quite clear that CTM cannot really involve a rejection of all other explanations; the causal relevance of computation implies the causal relevance of other physical features, which means that embodied cognition is implied by CTM, rather than excluded.

Jerry Fodor has argued that it is central cognition that cannot be explained computationally, in particular in the symbolic way (and that no other explanation is forthcoming). This claim seems to fly in the face of the success of production systems in such domains as reasoning and problem solving. Fodor justifies his claim by pointing out that central cognitive processes are cognitively penetrable, which means that an agent’s knowledge and beliefs may influence any of his other beliefs (which also means that beliefs are strongly holistic). But even if one accepts the claim that there is a substantial (and computational) difference between cognitively penetrable and impenetrable processes, this still would not rule out a scientific account of both (Boden 1988, 172).

Arguments against the possibility of a computational account of common sense (Dreyfus 1972) also appeal to holism. Some also claim that holism leads to the frame problem in AI, though this has been debated, and the significance of the frame problem for CTM remains unclear (Pylyshyn 1987; Shanahan 1997; Shanahan and Baars 2005).

A specific group of arguments against CTM is directed against the claim that cognition is digital effective computation: some propose that the mind is hypercomputational and try to prove this with reference to Gödel’s incompleteness theorem (Lucas 1961; Penrose 1989). These arguments are not satisfactory, because they assume without justification that human beliefs are not contradictory (Putnam 1960; Krajewski 2007). Moreover, even granting that assumption, the claim that the mind is not a computational mechanism cannot be proven this way: as Krajewski has argued, any attempt to ‘out-Gödel’ the mechanist itself leads to inconsistency or unsoundness.

5. Conclusion

The Computational Theory of Mind (CTM) is the working assumption of the vast majority of modeling efforts in cognitive science, though there are important differences among various computational accounts of mental processes. With the growing sophistication of modeling and testing techniques, computational neuroscience offers more and more refined versions of CTM, which are more complex than early attempts to model the mind as a single computational device (such as a Turing machine). What is much more plausible, at least biologically, is a complex organization of various computational mechanisms, some permanent and some ephemeral, in a structure that does not form a strict hierarchy. There is general agreement in cognitive science, however, that the generic claim that minds process information, though an empirical hypothesis that could in principle prove wrong, is highly unlikely to turn out false. Yet it is far from clear what kind of processing is involved.

6. References and Further Reading

  • Aizawa, Kenneth. 2003. The Systematicity Arguments. Boston: Kluwer Academic.
  • Anderson, John R. 1983. The Architecture of Cognition. Cambridge, Mass.: Harvard University Press.
  • Apter, Michael. 1970. The Computer Simulation of Behaviour. London: Hutchinson.
  • Arbib, Michael, Carl Lee Baker, Joan Bresnan, Roy G. D’Andrade, Ronald Kaplan, Samuel Jay Keyser, Donald A. Norman, et al. 1978. Cognitive Science, 1978.
  • Bechtel, William. 2008. Mental Mechanisms. New York: Routledge (Taylor & Francis Group).
  • Bechtel, William, and Adele Abrahamsen. 2002. Connectionism and the Mind. Blackwell.
  • Blokpoel, Mark, Johan Kwisthout, and Iris van Rooij. 2012. “When Can Predictive Brains Be Truly Bayesian?” Frontiers in Psychology 3 (November): 1–3.
  • Boden, Margaret A. 1988. Computer Models of Mind: Computational Approaches in Theoretical Psychology. Cambridge [England]; New York: Cambridge University Press.
  • Bowers, Jeffrey S. 2009. “On the Biological Plausibility of Grandmother Cells: Implications for Neural Network Theories in Psychology and Neuroscience.” Psychological Review 116 (1) (January): 220–51.
  • Chalmers, David J. 2011. “A Computational Foundation for the Study of Cognition.” Journal of Cognitive Science (12): 325–359.
  • Clark, Andy. 2013. “Whatever Next? Predictive Brains, Situated Agents, and the Future of Cognitive Science.” The Behavioral and Brain Sciences 36 (3) (June 10): 181–204.
  • Colby, Kenneth Mark, and John P Gilbert. 1964. “Programming a Computer Model of Neurosis.” Journal of Mathematical Psychology 1 (2) (July): 405–417.
  • Copeland, B. Jack. 1996. “What Is Computation?” Synthese 108 (3): 335–359.
  • Copeland, B. 2004. “Hypercomputation: Philosophical Issues.” Theoretical Computer Science 317 (1-3) (June): 251–267.
  • Craver, Carl F. 2007. Explaining the Brain. Mechanisms and the Mosaic Unity of Neuroscience. Oxford: Oxford University Press.
  • Cummins, Robert. 1975. “Functional Analysis.” The Journal of Philosophy 72 (20): 741–765.
  • Cummins, Robert. 1983. The Nature of Psychological Explanation. Cambridge, Mass.: MIT Press.
  • Cummins, Robert. 2000. “‘How Does It Work’ Versus ‘What Are the Laws?’: Two Conceptions of Psychological Explanation.” In Explanation and Cognition, ed. F Keil and Robert A Wilson, 117–145. Cambridge, Mass.: MIT Press.
  • Dennett, Daniel C. 1983. “Beyond Belief.” In Thought and Object, ed. Andrew Woodfield. Oxford University Press.
  • Dennett, Daniel C. 1987. The Intentional Stance. Cambridge, Mass.: MIT Press.
  • Dreyfus, Hubert. 1972. What Computers Can’t Do: A Critique of Artificial Reason. New York: Harper & Row, Publishers.
  • Eliasmith, Chris. 2013. How to Build the Brain: a Neural Architecture for Biological Cognition. New York: Oxford University Press.
  • Eliasmith, Chris, and Charles H. Anderson. 2003. Neural Engineering. Computation, Representation, and Dynamics in Neurobiological Systems. Cambridge, Mass.: MIT Press.
  • Eliasmith, Chris, Terrence C Stewart, Xuan Choo, Trevor Bekolay, Travis DeWolf, Yichuan Tang, Charlie Tang, and Daniel Rasmussen. 2012. “A Large-scale Model of the Functioning Brain.” Science (New York, N.Y.) 338 (6111) (November 30): 1202–5.
  • Fodor, Jerry A. 1968. Psychological Explanation: An Introduction to the Philosophy of Psychology. New York: Random House.
  • Fodor, Jerry A. 1974. “Special Sciences (or: The Disunity of Science as a Working Hypothesis).” Synthese 28 (2) (October): 97–115.
  • Fodor, Jerry A. 1975. The Language of Thought. 1st ed. New York: Thomas Y. Crowell Company.
  • Fodor, Jerry A. 2001. The Mind Doesn’t Work That Way. Cambridge, Mass.: MIT Press.
  • Fodor, Jerry A., and Zenon W. Pylyshyn. 1988. “Connectionism and Cognitive Architecture: a Critical Analysis.” Cognition 28 (1-2) (March): 3–71.
  • Fresco, Nir. 2010. “Explaining Computation Without Semantics: Keeping It Simple.” Minds and Machines 20 (2) (June): 165–181.
  • Friston, Karl, and Stefan Kiebel. 2011. “Predictive Coding: A Free-Energy Formulation.” In Predictions in the Brain: Using Our Past to Generate a Future, ed. Moshe Bar, 231–246. Oxford: Oxford University Press.
  • Friston, Karl, James Kilner, and Lee Harrison. 2006. “A Free Energy Principle for the Brain.” Journal of Physiology, Paris 100 (1-3): 70–87.
  • Glennan, Stuart. 2002. “Rethinking Mechanistic Explanation.” Philosophy of Science 69 (S3) (September): S342–S353.
  • Gross, Charles G. 2002. “Genealogy of the ‘Grandmother Cell’.” The Neuroscientist 8 (5) (October 1): 512–518.
  • Harnish, Robert M. 2002. Minds, Brains, Computers: An Historical Introduction to the Foundations of Cognitive Science. Malden, MA: Blackwell Publishers.
  • Haugeland, John. 1985. Artificial Intelligence: The Very Idea. Cambridge, Mass.: MIT Press.
  • Jones, Matt, and Bradley C. Love. 2011. “Bayesian Fundamentalism or Enlightenment? On the Explanatory Status and Theoretical Contributions of Bayesian Models of Cognition.” Behavioral and Brain Sciences 34 (04) (August 25): 169–188.
  • Kaplan, David Michael, and William Bechtel. 2011. “Dynamical Models: An Alternative or Complement to Mechanistic Explanations?” Topics in Cognitive Science 3 (2) (April 6): 438–444.
  • Kaplan, David Michael, and Carl F. Craver. 2011. “The Explanatory Force of Dynamical and Mathematical Models in Neuroscience: A Mechanistic Perspective.” Philosophy of Science 78 (4) (October): 601–627.
  • Konorski, Jerzy. 1967. Integrative Activity of the Brain; an Interdisciplinary Approach. Chicago: University of Chicago Press.
  • Krajewski, Stanisław. 2007. “On Gödel’s Theorem and Mechanism: Inconsistency or Unsoundness Is Unavoidable in Any Attempt to ‘Out-Gödel’ the Mechanist.” Fundamenta Informaticae 81 (1) (January 1): 173–181.
  • Lewandowsky, Stephan, and Simon Farrell. 2011. Computational Modeling in Cognition: Principles and Practice. Thousand Oaks: Sage Publications.
  • Loehlin, John. 1968. Computer Models of Personality. New York: Random House.
  • Lucas, J. R. 1961. “Minds, Machines and Gödel.” Philosophy 36 (137) (April): 112–127.
  • Lycan, William G. 1987. Consciousness. Cambridge, Mass.: MIT Press.
  • Machamer, Peter, Lindley Darden, and Carl F Craver. 2000. “Thinking About Mechanisms.” Philosophy of Science 67 (1): 1–25.
  • Machery, Edouard. 2009. Doing Without Concepts. Oxford: Oxford University Press, USA.
  • Marr, David. 1982. Vision. A Computational Investigation into the Human Representation and Processing of Visual Information. New York: W. H. Freeman and Company.
  • McCulloch, Warren S., and Walter Pitts. 1943. “A Logical Calculus of the Ideas Immanent in Nervous Activity.” Bulletin of Mathematical Biophysics 5: 115–133.
  • Miller, George A., Eugene Galanter, and Karl H. Pribram. 1967. Plans and the Structure of Behavior. New York: Holt.
  • Miłkowski, Marcin. 2013. Explaining the Computational Mind. Cambridge, Mass.: MIT Press.
  • Müller, Vincent C. 2009. “Pancomputationalism: Theory or Metaphor?” In The Relevance of Philosophy for Information Science, ed. Ruth Hagengruber. Berlin: Springer.
  • Von Neumann, John. 1958. The Computer and the Brain. New Haven: Yale University Press.
  • Newell, Allen. 1980. “Physical Symbol Systems.” Cognitive Science: A Multidisciplinary Journal 4 (2): 135–183.
  • Newell, Allen. 1990. Unified Theories of Cognition. Cambridge, Mass. and London: Harvard University Press.
  • Newell, Allen, and Herbert A Simon. 1972. Human Problem Solving. Englewood Cliffs, NJ: Prentice-Hall.
  • Newman, M H A. 1928. “Mr. Russell’s ‘Causal Theory of Perception’.” Mind 37 (146) (April 1): 137–148.
  • O’Brien, Gerard, and Jon Opie. 2006. “How Do Connectionist Networks Compute?” Cognitive Processing 7 (1) (March): 30–41.
  • O’Brien, Gerard, and Jon Opie. 2009. “The Role of Representation in Computation.” Cognitive Processing 10 (1) (February): 53–62.
  • O’Reilly, Randall C. 2006. “Biologically Based Computational Models of High-level Cognition.” Science 314 (5796) (October 6): 91–4.
  • Penrose, Roger. 1989. The Emperor’s New Mind. Quantum. London: Oxford University Press.
  • Piccinini, Gualtiero. 2006. “Computation Without Representation.” Philosophical Studies 137 (2) (September): 205–241.
  • Piccinini, Gualtiero. 2007a. “Computational Modelling Vs. Computational Explanation: Is Everything a Turing Machine, and Does It Matter to the Philosophy of Mind?” Australasian Journal of Philosophy 85 (1): 93–115.
  • Piccinini, Gualtiero. 2007b. “Computing Mechanisms.” Philosophy of Science 74 (4) (October): 501–526.
  • Piccinini, Gualtiero. 2008. “Computers.” Pacific Philosophical Quarterly 89 (1) (March): 32–73.
  • Piccinini, Gualtiero, and Sonya Bahar. 2013. “Neural Computation and the Computational Theory of Cognition.” Cognitive Science 37 (3) (April 5): 453–88.
  • Piccinini, Gualtiero, and Carl Craver. 2011. “Integrating Psychology and Neuroscience: Functional Analyses as Mechanism Sketches.” Synthese 183 (3) (March 11): 283–311.
  • Preston, John, and Mark Bishop. 2002. Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford; New York: Clarendon Press.
  • Putnam, Hilary. 1960. “Minds and Machines.” In Dimensions of Mind, ed. Sidney Hook. New York University Press.
  • Putnam, Hilary. 1991. Representation and Reality. Cambridge, Mass.: The MIT Press.
  • Pylyshyn, Zenon W. 1984. Computation and Cognition: Toward a Foundation for Cognitive Science. Cambridge, Mass.: MIT Press.
  • Pylyshyn, Zenon W. 1987. Robot’s Dilemma: The Frame Problem in Artificial Intelligence. Norwood, New Jersey: Ablex Publishing Corporation.
  • Ramsey, William M. 2007. Representation Reconsidered. Cambridge: Cambridge University Press.
  • Rasmussen, Daniel, and Chris Eliasmith. 2013. “God, the Devil, and the Details: Fleshing Out the Predictive Processing Framework.” The Behavioral and Brain Sciences 36 (3) (June 1): 223–4.
  • Rosch, Eleanor, and Carolyn B Mervis. 1975. “Family Resemblances: Studies in the Internal Structure of Categories.” Cognitive Psychology 7 (4) (October): 573–605.
  • Searle, John R. 1992. The Rediscovery of the Mind. Cambridge, Mass.: MIT Press.
  • Shanahan, Murray, and Bernard Baars. 2005. “Applying Global Workspace Theory to the Frame Problem.” Cognition 98 (2) (December): 157–76.
  • Shanahan, Murray. 1997. Solving the Frame Problem: a Mathematical Investigation of the Common Sense Law of Inertia. Cambridge, Mass.: MIT Press.
  • Sloman, A. 1996. “Beyond Turing Equivalence.” In Machines and Thought: The Legacy of Alan Turing, ed. Peter Millican, 1:179–219. New York: Oxford University Press, USA.
  • Steels, Luc. 2008. “The Symbol Grounding Problem Has Been Solved, so What’ s Next?” In Symbols and Embodiment: Debates on Meaning and Cognition, ed. Manuel de Vega, Arthur M. Glenberg, and Arthur C. Graesser, 223–244. Oxford: Oxford University Press.
  • Sterelny, Kim. 1990. The Representational Theory of Mind: An Introduction. Oxford, OX, UK; Cambridge, Mass., USA: B. Blackwell.
  • Tomkins, Silvan, and Samuel Messick. 1963. Computer Simulation of Personality, Frontier of Psychological Theory,. New York: Wiley.
  • Walmsley, Joel. 2008. “Explanation in Dynamical Cognitive Science.” Minds and Machines 18 (3) (July 2): 331–348.
  • Weizenbaum, Joseph. 1976. Computer Power and Human Reason: From Judgment to Calculation. San Francisco: W.H. Freeman.
  • Wittgenstein, Ludwig. 1953. Philosophical Investigations. New York: Macmillan.
  • Zednik, Carlos. 2011. “The Nature of Dynamical Explanation.” Philosophy of Science 78 (2): 238–263.

 

Author Information

Marcin Milkowski
Email: marcin.milkowski@gmail.com
Institute of Philosophy and Sociology
Polish Academy of Sciences
Poland

David Hume: Religion

David Hume (1711-1776) was called “Saint David” and “The Good David” by his friends, but his adversaries knew him as “The Great Infidel.” His contributions to the philosophy of religion have had a lasting impact and contemporary significance. Taken individually, his discussions give novel insights into many aspects of revealed and natural theology. Taken together, however, they constitute his attempt at a systematic undermining of the justifications for religion. Religious belief is often defended through revealed theology, natural theology, or pragmatic advantage, and through his various philosophical writings, Hume works to critique each of these avenues of religious justification.

Though Hume’s final view on religion is not clear, what is certain is that he was not a theist in any traditional sense. He gives a sweeping argument that we are never justified in believing testimony that a miracle has occurred, because the evidence for uniform laws of nature will always be stronger. If correct, this claim would undermine the veracity of any sacred text, such as the Bible, which testifies to miracles and relies on them as its guarantor of truth. As such, Hume rejects the truth of any revealed religion, and further shows that, when corrupted with inappropriate passions, religion has harmful consequences to both morality and society. Further, he argues, rational arguments cannot lead us to a deity. Hume develops what are now standard objections to the analogical design argument by insisting that the analogy is drawn only from limited experience, making it impossible to conclude that a cosmic designer is infinite, morally just, or a single being. Nor can we use such depictions to inform other aspects of the world, such as whether there is a desert-based afterlife. He also defends what is now called “the Problem of Evil,” namely, that the concept of an all-powerful, all-knowing, and all-good God is inconsistent with the existence of suffering.

Lastly, Hume is one of the first philosophers to systematically explore religion as a natural phenomenon, suggesting how religious belief can arise from natural, rather than supernatural, means.

Table of Contents

  1. Hume’s Publications on Religious Belief
  2. Interpretations of Hume’s View
  3. Miracles
  4. Immortality of the Soul
  5. The Design Argument
  6. The Cosmological Argument
  7. The Problem of Evil
  8. The Psychology of Religious Belief
  9. The Harms of Religion
  10. References and Further Reading
    1. Hume’s Works on Religion
    2. Works in the History of Philosophy

1. Hume’s Publications on Religious Belief

Hume is one of the most important philosophers to have written in the English language, and many of his writings address religious subjects either directly or indirectly. His very first work had the charge of atheism leveled against it, and this led to his being passed over for the Chair of Moral Philosophy at the University of Edinburgh. In fact, Hume’s views on religion were so controversial that he never held a university position in philosophy.

Hume addressed most of the major issues within the philosophy of religion, and even today theists feel compelled to confront Hume’s challenges. He leveled moral, skeptical, and pragmatic objections against both popular religion and the religion of the philosophers. These run the gamut from highly specific topics, such as metaphysical absurdities entailed by the Real Presence of the Eucharist, to broad critiques like the impossibility of using theology to infer anything about the world.

Hume’s very first work, A Treatise of Human Nature, includes considerations against an immortal soul, develops a system of morality independent of a deity, attempts to refute occasionalism, and argues against a necessary being, to name but a few of the religious topics that it addresses. Hume’s Enquiry Concerning Human Understanding re-emphasizes several of the challenges from the Treatise, but also includes a section against miracles and a section against the fruitfulness of theology. Hume’s major non-philosophical work, The History of England, discusses specific religious sects, largely in terms of their (often bloody) consequences. He also wrote numerous essays discussing various aspects of religion, such as the anti-doctrinal essays “Of the Immortality of the Soul” and “Of Suicide,” and critiques of organized religion and the clergy in “Of Superstition and Enthusiasm” and “Of National Characters.” Hume also wrote two major works entirely dedicated to religion: The Natural History of Religion (Natural History) and the Dialogues concerning Natural Religion (Dialogues), which merit brief discussions of their own.

Hume wrote the Natural History roughly in tandem with the first draft of the Dialogues, but while the former was published during his lifetime (as one of his Four Dissertations), the latter was not. In the introduction to the Natural History, Hume posits that there are two types of inquiry to be made into religion: its foundations in reason and its origin in human nature. While the Dialogues investigate the former, the task of the Natural History is to explore the latter. In the Natural History, he focuses on how various passions can give rise to common or false religion. It is an innovative work that brings together threads from philosophy, psychology, and history to provide a naturalistic account of how the various world religions came about.

Though Hume began writing the Dialogues at roughly the same time as the Natural History, he ultimately arranged to have the former published posthumously. In the twenty-five years between the time at which he first wrote them and his death, the Dialogues underwent three sets of revisions, including a final revision from his deathbed. The Dialogues are a rich discussion of Natural Theology, and are generally considered to be the most important book ever written on the subject. Divided into twelve parts, the Dialogues follow the discussion of three thinkers debating the nature of God. Of the three characters, Philo takes up the role of the skeptic, Demea represents the orthodox theologian of Hume’s day, and Cleanthes follows a more philosophical, empirical approach to his theology. The work is narrated by Pamphilus, a professed student of Cleanthes.

Both Hume’s style and the fact of posthumous publication give rise to interpretive difficulties. Stylistically, Hume’s Dialogues are modeled after On the Nature of the Gods, a dialogue by the Roman philosopher Cicero. In Cicero’s works, unlike the dialogues of Plato, Leibniz, and Berkeley, a victor is not established from the outset, and all characters make important contributions. Hume ridicules such one-sided dialogues on the grounds that they put “nothing but Nonsense into the Mouth of the Adversary” (L1, Letter 72). The combination of this stylistic preference with Hume’s use of irony, an infrequently discussed but frequently employed literary device in his writings, makes the work a delight to read, but creates interpretive difficulties in determining who speaks for Hume on any given topic.

In the Dialogues, all the characters make good Humean points, even Pamphilus and Demea. The difficulty comes in determining who speaks for Hume when the characters disagree. Hume has been interpreted as Cleanthes/Pamphilus, as Philo, as an amalgamation, and as none of them. The most popular view, though not without dissent, construes Hume as Philo. Philo certainly has the most to say in the Dialogues. His arguments and objections often go unanswered, and he espouses many opinions on both religion and other philosophical topics that Hume endorses in other works, such as the hypothesis that causal inference is based on custom. The more significant challenge to interpreting Hume as Philo concerns the collection of remarks at the beginning of Part XII of the Dialogues, known as Philo’s Reversal. After spending the bulk of the Dialogues raising a barrage of objections against the design argument, Part XII has Philo admitting, “A purpose, an intention, a design strikes everywhere the most careless, the most stupid thinker…” (D 12.2). Ultimately, whether Philo’s Reversal is sincere or not is fundamentally tied to Hume’s own views on religion.

2. Interpretations of Hume’s View

Given the comprehensive critique that Hume levels against religion, it is clear that he is not a theist in any traditional sense. However, acknowledging this point does little to settle Hume’s considered views on religion. There remain three positions open to Hume: atheist naturalism, skeptical agnosticism, or some form of deism. The first position has Hume denying any form of supernaturalism, and is much more popular outside of Hume scholarship than within. The reason for this is that it runs contrary to Hume’s attitude regarding speculative metaphysics. It has him making a firm metaphysical commitment by allowing an inference from our having no good reason for thinking that there are supernatural entities, to a positive commitment that in fact there are none. However, Hume would not commit the Epistemic Fallacy and thereby allow the inference from “x is all we can know of subject y” to “x constitutes the real, mind-independent essence of y.” Indeed, in Part XII of the first Enquiry, Hume explicitly denies the inference from what we can know from our ideas to what is the case in reality.

These considerations against a full-fledged atheist position motivate the skeptical view. Like the atheist reading, the skeptical view holds that Hume does not affirm the existence of any supernatural entities, but it does so without saddling him with so strong a metaphysical commitment: it has Hume doubting the existence of supernatural entities while still allowing their possibility. It has the advantage of committing Hume to the sparse ontology of the naturalist without actually committing him to potentially dogmatic metaphysical positions. Hence, Hume can be an atheist for all intents and purposes without actually violating his own epistemic principles.

Both the atheist and skeptical interpretations must, then, take Philo’s Reversal as insincere. Perhaps Hume feared the political consequences of publicly denouncing theism; alternatively, he may have used Philo’s Reversal simply as a dialectical tool of the Dialogues. Many scholars tend to steer clear of the former for several reasons. First, while it was true that, early in his career, Hume edited his work to avoid giving offense, this was not the case later. For example, Hume excised the miracles argument from the Treatise, but it later found its way into print in the Enquiry. Second, Hume arranged to have the Dialogues published after his death, and therefore had no reason to fear repercussions for himself. Further, Hume did not seem to think that the content would bring grief to his nephew who brought it to publication, as he revealed in a letter to his publisher (L2, Appendix M). Third, it is not only in the Dialogues that we get endorsements of a deity or of a design argument. J.C.A. Gaskin (1988: 219) provides an extensive (though not exhaustive) list of several other places in which we get similar pro-deistic endorsements from Hume. Lastly, it is generally considered hermeneutically appropriate to invoke disingenuousness only if an alternative interpretation cannot be plausibly endorsed.

Norman Kemp Smith, in his commentary on the Dialogues, argues in favor of just such an alternative interpretation. Though he interprets Hume as Philo, he takes the Reversal to be insincere, made not from fear but as a dialectical tool. In his Ciceronian dialogue, Hume does not want the reader, upon finishing the piece, to interpret any of the characters as victorious, instead encouraging further reflection upon these matters. Thus, Philo’s Reversal is part of a “dramatic balance” intended to help mask the presence of a clear victor.

Nelson Pike, in his own commentary on the Dialogues, roundly criticizes Kemp Smith’s position, arguing that we should instead look for reasons to take the Reversal as genuine. One possibility he considers is the presence of the “irregular arguments” of Part III. Here, instead of presenting design arguments based on standard analogical reasoning, Cleanthes presents considerations in which the idea of design will “immediately flow in upon you with a force like that of sensation” (D 3.7). Pike therefore interprets these “irregular arguments” as non-inferential. If this is right, and the idea of a designer comes upon us naturally rather than inferentially, as Ronald Butler, Stanley Tweyman, and others have argued, then Philo’s Reversal is not a reversal at all. He can consistently maintain that the inference of the design argument is insufficient for grounding one’s belief in God, and that, nonetheless, we have a natural inclination to accept it.

There is, therefore, support for interpreting Hume as a deist of a limited sort. Gaskin calls this Hume’s “attenuated deism,” attenuated in that the analogy to something like human intelligence is incredibly remote, and in that no morality of the deity is implied, due especially to the Problem of Evil. However, scholars who attribute weak deism to Hume are split in regard to the source of the belief. Some, like Gaskin, think that Hume’s objections to the design argument apply only to analogies drawn too strongly. Hence, Hume does not reject all design arguments and, provided that the analogs are properly qualified, might allow the inference. This differs from the picture suggested by Butler and discussed by Pike, in which the belief is provided by a natural, non-rational faculty and thereby simply strikes us, rather than arising as the product of an inferential argument. Therefore, though the defenders of a deistic Hume generally agree about the remote, non-moral nature of the deity, there is a fundamental schism regarding the justification and generation of this belief. Both sides, however, agree that the belief should not come from special revelation, such as miracles or revealed texts.

3. Miracles

Because Hume’s denial of all miracles in Section X of the Enquiry entails a denial of all revealed theology, it is worthwhile to consider his arguments in detail. The section is divided into two parts. While Part I provides an argument against believing in miracles in general, Part II gives four specific considerations against miracles based on particular facts about the world. Therefore, we may refer to the argument of Part I as Hume’s Categorical Argument against miracles and those of Part II as the four Evidential Arguments against miracles. Identifying Hume’s intentions with these arguments is notoriously difficult. Though the Evidential Arguments are fairly straightforward in and of themselves, there are two major interpretive puzzles: what the Categorical Argument of Part I is supposed to be, and how it fits with the Evidential Arguments of Part II. Some see the two parts as entirely separable, while others insist that they provide two parts of a cohesive whole. The following reconstructions attempt to stay interpretively neutral on these disputes.

Hume begins Part I with rules for the appropriate proportioning of belief. First, he divides arguments that justify beliefs regarding cause and effect into proofs and probabilities. Proofs are arguments supported by evidence in which the effects have been constant, such as the sun rising every day. However, there are stronger and weaker proofs—consider a professor showing up for class every day versus the sun rising every day—and only the strongest proofs, those supporting our beliefs in the laws of nature, have been attested to “in all countries and all ages.” Effects, however, are not always constant. When faced with a “contrariety of effects,” we must instead use probabilities, which are evidentially weaker than proofs. Since the strength of both proofs and probabilities varies in degree, we have the potential for “all imaginable degrees of assurance.” Hume maintains that, “The wise man…proportions his beliefs to the evidence.” In cases where effects have been constant and therefore supported by proof, our beliefs are held with a greater degree of assurance than those supported by mere probability (EHU 10.1-4).

Having explained Hume’s model for proportioning beliefs, we can now consider its ramifications for attested miracles:

A miracle is a violation of the laws of nature; and as a firm and unalterable experience has established these laws, the proof against a miracle, from the very nature of the fact, is as entire as any argument from experience can possibly be imagined. (EHU 10.12)

Here, Hume defines a miracle as a “violation of the laws of nature,” though he then “accurately” defines a miracle in a footnote as “a transgression of a law of nature by a particular volition of the Deity or by the interposition of some invisible agent.” As to which definition is more relevant, the second more adequately captures the notion of a miracle. In a 1761 letter to Blair, Hume indicates that, as an empirical fact, miracles always have religious content: “I never read of a miracle in my life that was not meant to establish some new point of religion” (L1, Letter 188). A Humean miracle is, therefore, a violation of a law of nature whose cause is an agent outside of nature, though the incompatibility with a law of nature is all that the Categorical Argument requires.

We must, therefore, consider Hume’s conception of the laws of nature. Following Donald Livingston, we may draw out some of the explicit features of Hume’s conception. First, laws of nature are universal, so any falsification of a supposed law, or any failure of a law to be upheld, would be sufficient to rob it of its nomological status. Laws, therefore, admit of no empirical counterexamples. Second, laws of nature are matters of fact, not relations of ideas, as their denial is always coherent. Indeed, like any other matter of fact, they must have some empirical content. As Livingston concludes, “…it must be possible to discipline theoretical talk about unobservable causal powers with empirical observations” (Livingston 1984: 203).

Utilizing this conception of the laws of nature, Hume draws his conclusion:

There must, therefore, be a uniform experience against every miraculous event, otherwise the event would not merit that appellation. And as the uniform experience amounts to a proof, then there is here a direct and full proof, from the nature of the fact, against the existence of any miracle; nor can such a proof be destroyed, or the miracle rendered credible, but by an opposite proof, which is superior….no testimony is sufficient to establish a miracle, unless the testimony be of such a kind, that its falsehood would be more miraculous, than the fact, which it endeavors to establish…. (EHU 10.12-10.13; SBN 115-116, Hume’s emphasis)

The interpretation of this passage requires considerable care. As many commentators have pointed out, if Hume’s argument is: a miracle is a violation of a law of nature, but laws of nature do not admit of counterexamples, therefore there are no miracles, then Hume clearly begs the question. Call this the Caricature Argument. William Paley first attributed this to Hume, and the interpretation has had proponents ever since; but this cannot be Hume’s argument. The Caricature Argument faces three major obstacles, two of which are insurmountable. However, considering the inaccuracies of the Caricature Argument will help us to arrive at a more accurate reconstruction.

First, the Caricature Argument is an a priori, deductive argument from definition. This would make it a demonstration in Hume’s vernacular, not a proof. However, both the argument of Section X and the letter in which he elucidates it repeatedly appeal to the evidence against miracles as constituting a proof. If the Caricature Argument were correct, then the argument against miracles could not properly be called a proof.

A second, related problem is that, if one accepts the Caricature Argument, then one must accept the entailed modality. From the conclusion of the a priori deductive argument, it follows that the occurrence of a miracle would be impossible. If this were the case, then no testimony could persuade a person to believe in the existence of a miracle. However, many take Hume to implicitly reject such an assumption. Such critics point to Hume’s acceptance of the claim that if a sufficient number of people testify to an eight-day darkness, then this constitutes a proof of its occurrence (EHU 10.36). Therefore, there are hypothetical situations in which our belief in a miracle could be established by testimony, implying that the conclusion of the Caricature Argument is too strong. This reply, however, is incorrect. Hume’s description of the proof for total darkness is generally interpreted as his establishing criteria for the rational justification of a belief, based on testimony, that a miracle has occurred. However, we must note that the passage that immediately precedes the example contains an ambiguous disjunct: “…there may possibly be miracles, or violations of the usual course of nature, of such a kind as to admit proof from human testimony” (EHU 10.36, emphasis added). From this passage alone, it is not clear whether Hume means for the darkness scenario to count as an example of the former, the latter, or both. Nevertheless, in Hume’s letter to Blair, he presents a similar example with an unambiguous conclusion. In considering Campbell’s complaint that it is a contradiction for Hume to introduce a fiction in which the testimony of a miracle constitutes a proof, he has us consider his previous example concerning the

...supposition of testimony for a particular miracle [that might] amount to a full proof of it. For instance, the absence of the sun during 48 hours; but reasonable men would only conclude from this fact, that the machine of the globe was disordered during this time. (L1, Letter 188)

The conclusion Hume draws is that, even if testimony of a strange event were to amount to a full proof, it would be more reasonable to infer a hiccup in the natural regularity of things (on a par with an eclipse: an apparent disturbance, but not a violation of any higher-level regularity) than to conclude a miracle. Therefore, when presented with a situation that is either a miracle or a “violation of the usual course of nature,” we ought to infer the latter.

This preference for a naturalistic explanation is reemphasized in Hume’s discussion of Joan of Arc in the History of England, where he states:

It is the business of history to distinguish between the miraculous and the marvelous; to reject the first in all narrations merely profane and human; to doubt the second; and when obliged by unquestionable testimony…to admit of something extraordinary, to receive as little of it as is consistent with the known facts and circumstances. (H 2.20, Hume’s emphasis)

Here, he once more suggests that we always reject the miraculous testimony and only accept as much of the marvelous as is required to remain consistent with the “unquestionable testimony.” For Hume, testimony of a miracle is always to be rejected in favor of the naturalistic interpretation. He therefore never grants a proof of a miracle as a real possibility, so the Caricature Argument may surmount at least this objection.

However, a final difficulty related to the modality of the conclusion concerns the observation that Hume couches his argument in terms of appropriate belief. Hume’s conclusion should, therefore, be interpreted as epistemic, but the Caricature Argument instead requires a metaphysical conclusion: miracles are impossible. The Caricature Argument cannot be correct, because Hume’s entire argument hinges on the way that we apportion our beliefs, and a fortiori, beliefs about testimony. Hume speaks of “our evidence” for the truth of miracles, belief in them being “contrary to the rules of just reasoning,” and miracles never being “established on…evidence.” “A miracle can never be proved” is a far cry from saying that a miracle has never occurred and never could occur. This gives us reason to reject the metaphysical conclusion of the Caricature Argument.

Further considerations also tell against the metaphysical conclusion, such as Hume’s avowal that miracles have an essence, and that there can be unwitnessed miracles. Hume does not say that violations are impossible, only unknowable. Of course, it could be that Hume grants this merely for the sake of argument, but then the stronger conclusion would still have a problem. For whether or not Hume grants the occurrence of miracles, he certainly allows for their conceivability, something the Caricature Argument cannot allow since, for Hume, conceivability implies possibility. Finally, there is the fact that Part II exists at all. If Hume did indeed think that Part I established that miracles could never occur, the entire second part, where he shows that “…there never was a miraculous event established on… [sufficient] evidence” (EHU 10.14), would be logically superfluous. The proper conclusion is, therefore, the epistemic one.

In overcoming the weaknesses of the Caricature Argument, a more plausible Humean argument takes form. Hume’s Categorical Argument of Part I may be reconstructed as follows:

  1. Beliefs about matters of fact are supported only by proofs (stronger) or probabilities (weaker) that come in varying degrees of strength. [Humean Axiom- T 1.3.11.2, EHU 6.1, EHU 10.6]
  2. When beliefs about matters of fact conflict, assent ought to be given only to the sufficiently supported belief with the greatest degree of evidential support. [Humean Axiom- EHU 10.4, EHU 10.11]
  3. Belief in the occurrence of a miracle would be a matter-of-fact belief that conflicts with belief in at least one law of nature. [Humean Axiom- EHU 10.2]
  4. Laws of nature are matter-of-fact beliefs evidentially supported by proofs of the strongest possible type. [Empirical Premise- EHU 10.2]
  5. Both testimonial probabilities supporting the occurrence of a miracle and (hypothetical) testimonial proofs supporting the occurrence of a miracle would be evidentially weaker than the proofs supporting the laws of nature. [Empirical Premise- EHU 10.2, EHU 10.13, EHU 10.36. The first clause is true by definition for probabilities, but Hume also establishes it more clearly in Part II.]
  6. Therefore, we should never believe testimony that a miracle has occurred.

There is much to be said for this reconstruction. First, in addition to Humean axioms, we have empirical premises rather than definitions that support the key inferences. Hence, the reconstruction is a proof, not a demonstration. Second, given that Hume has ancillary arguments for these empirical premises, there is no question-begging of the form that the Caricature Argument suggests. For instance, he argues for (4) by drawing on his criterion of “in all countries and all ages.” He does not simply assert that laws of nature automatically meet this criterion.
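
Hume’s maxim can also be given a compact probabilistic gloss. What follows is a modern reconstruction of the kind found in later Bayesian commentary on the argument, not Hume’s own notation; the symbols M (the miracle occurred) and T (testimony to it was delivered) are introduced here purely for illustration. By Bayes’ theorem, testimony renders a miracle more probable than not just in case:

\[
P(M \mid T) > P(\neg M \mid T) \iff P(T \mid M)\,P(M) > P(T \mid \neg M)\,P(\neg M)
\]

On this gloss, premise (4) makes P(M) as low as experiential evidence can make any probability, since belief in the conflicting law rests on proofs of the strongest possible type, while premise (5) denies that P(T | ¬M), the probability of the testimony being delivered falsely, can ever be driven low enough to compensate. The right-hand inequality therefore never holds, which is just conclusion (6): the falsehood of the testimony is never “more miraculous” than the fact it endeavors to establish.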

However, there is a separate worry of question-begging in (4) that needs to be addressed before moving on to the arguments of Part II. The challenge is that Hume’s claim that men in all ages testify to the constancy of the laws of nature can be maintained only by excluding any testimony to the contrary (that is, testimony of the miraculous). However, there are people who do testify to miracles. The worry is that, in treating the laws of nature as admitting of no testimonial exception, Hume may beg the question against those who maintain the occurrence of miracles.

This worry can be overcome, however, if we follow Don Garrett in realizing what Hume is attempting to establish in the argument:

… [when] something has the status of “law of nature” (that is, plays the cognitive role of a “law of nature”) for an individual judger…it has the form of a universal generalization, is regarded by the judger as causal, and is something for which the judger has firm and unalterable experience….This is, of course, compatible with there actually being exceptions to it, so long as none of those exceptions has, for the judger, the status of experiments within his or her experience. (Garrett 1997: 152, emphasis in original)

Garrett rightly points out that, in Hume’s argument, laws of nature govern our belief and fulfill a certain doxastic role for the judger. Moreover, once this is realized, we can strengthen Garrett’s point by recognizing that this role is, in fact, a necessary condition for testimony of a miracle. To believe in a miracle, the witness must believe that a law of nature has been violated. However, this means that, in endorsing the occurrence of the miracle, the witness implicitly endorses two propositions: that there is an established law of nature in place and that it has been broken. Thus, in order for a witness to convince me of a miracle, we must first agree that there is a law in place. The same testimony which seeks to establish the miracle reaffirms the nomological status of the law as universally believed.

This leads to the second point that Garrett raises. Only after this common ground is established do we consider, as an experiment, whether we should believe that the said law has been violated. Hence, even such a testimonial does not count against the universality of what we, the judges, take to be a law of nature. Instead, we are setting it aside as experimental in determining whether we should offer assent to the purported law or not. If this is right, then (4) does not beg the question. This leaves us with empirical premise (5), which leads to Part II.

Hume begins Part II by stating that, in granting that the testimonies of miracles may progress beyond mere probability, “we have been a great deal too liberal in our concession…” (EHU 10.14). He then gives four considerations as to why this is the case, three of which are relatively straightforward.

First, Hume tells us that, as an empirical fact, “there is not to be found, in all history, any miracle attested by a sufficient number of men, of such unquestioned good sense, education, and learning…” to secure its testimony (EHU 10.15). To be persuaded of a miracle, we would need to be sure that no natural explanation, such as delusion, deception, and so forth, was more likely than the miraculous, a task which, for Hume, would simply take more credible witnesses than have ever attested to a miracle.

Second, it is a fact of human nature that we find surprise and wonder agreeable. We want to believe in the miraculous, and we are much more likely to pass along stories of the miraculous than of the mundane. For Hume, this explains why humans tend to be more credulous with attested miracles than should reasonably be the case, and also explains why the phenomenon is so widespread.

His third, related presumption against miracles is that testimony of their occurrence tends to be inversely proportionate to education: miracles “are observed chiefly to abound among ignorant and barbarous nations” (EHU 10.20). Hume’s explanation for this is that purported miracles are generally born of ignorance. Miracles are used as placeholders when we lack the knowledge of natural causes. However, as learning progresses, we become increasingly able to discover natural causes, and no longer need to postulate miraculous explanations.

Hume’s fourth consideration is also his most difficult:

Every miracle, therefore, pretended to have wrought in any of these religions…as its direct scope is to establish the particular system to which it is attributed; so has it the same force, though more indirectly, to overthrow every other system. In destroying a rival system, it likewise destroys the credit of those miracles, on which that system was established; so that all the [miracles] of different religions are to be regarded as contrary facts, and evidence of these…as opposite to each other. (EHU 10.24)

His general idea is that, since multiple, incompatible religions testify to miracles, they cancel each other out in some way, but scholars disagree as to how this is supposed to happen. Interpreters such as Gaskin (1988: 137-138) and Keith Yandell (1990: 334) focus on Hume’s claim that miracles are generally purported to support or establish a particular religion. Therefore, a miracle wrought by Jesus is opposed and negated by one wrought by Mohammed, and so forth. However, as both Gaskin and Yandell point out, this inference would be flawed, because miracles are rarely such that they entail accepting one religion exclusively. Put another way, the majority of miracles can be interpreted and accepted by most any religion.

However, there is a more charitable interpretation of Hume’s fourth Evidential Argument. As the rest of the section centers around appropriate levels of doxastic assent, we should think that the notion is at play here too. A less problematic reconstruction therefore has his fourth consideration capturing something like the following intuition: the testifiers of miracles have a problem. In the case of their own religion, their level of incredulity is sufficiently low so as to accept their own purported miracles. However, when they turn to those attested by other religions, they raise their level of incredulity so as to deny these miracles of other faiths. Thus, by participating in a sect that rejects at least some miracles, they thereby undermine their own position. In claiming sufficient grounds for rejecting the miracles of the other sects, they have thereby rejected their own. For Hume, the sectarians cannot have their cake and eat it. Intellectual honesty requires a consistent level of credulity. By rejecting their opponent’s claims to miracles, they commit to the higher level of incredulity and should thereby reject their own. Hence, Hume’s later claim that, in listening to a Christian’s testimony of a miracle, “we are to regard their testimony in the same light as if they had mentioned that Mahometan miracle, and had in express terms contradicted it, with the same certainty as they have for the miracle they relate” (EHU 10.24). Thus, the problem for Hume is not that the sectarians cannot interpret all purported miracles as their own but that they, in fact, do not.

These are the four evidential considerations against miracles Hume provides in Part II. However, if the above reconstruction of Part I is correct, and Hume thinks that the Categorical Argument has established that we are never justified in believing the testimony of miracles, we might wonder why Part II exists at all. Its presence can be justified in several ways. First, on the reconstruction above, Part II significantly bolsters premise (5). Second, even if Part II were logically superfluous, Michael Levine rightly points out that the arguments of Part II can still have a buttressing effect for persuading the reader to the conclusion of Part I, thereby softening the blow of its apparently severe conclusion. A third, related reason is a rhetorical consideration. In order for one’s philosophical position to be well-grounded, it is undesirable to hang one’s hat on a single consideration. As Hume himself acknowledges, resting one part of his system on another would unnecessarily weaken it (T 1.3.6.9). Therefore, the more reasons he can present, the better. Fourth, Hume, as a participant in many social circles, is likely to have debated miracles in many ways against many opponents, each with his or her own favored example. Part II, therefore, gives him the opportunity for more direct and specific redress, and he does indeed address many specific miracles there. Finally, the considerations of Part II, the second and third especially, have an important explanatory effect. If Hume is right that no reasonable person would believe in the existence of miracles based on testimony, then it should seem strange that millions have nevertheless done so. Like the Natural History discussed below, Part II can disarm this worry by explaining why, if Hume is right, we have this widespread phenomenon despite its inherent unreasonableness.

4. Immortality of the Soul

In his essay “Of the Immortality of the Soul,” Hume presents many brief and pithy arguments against the doctrine of an afterlife. He offers them under three broad headings: metaphysical, moral, and physical. Written for a popular audience, they should be treated as challenges or considerations against, rather than decisive refutations of, the doctrine.

Hume’s metaphysical considerations largely target the rationalist project of establishing a mental substance a priori (such as the discovery of the “I” in Descartes’ Meditations). His first two considerations against this doctrine draw on arguments from his Treatise, referring to his conclusion that we have only a confused and insufficient idea of substance. If this is the case, however, then it becomes exceedingly difficult to discover the essence of such a notion a priori. Further, Hume says, we certainly have no conception of cause and effect a priori, and are therefore in no position to make a priori conclusions about the persistence conditions of a mental substance, or to infer that this substance grounds our thoughts. Indeed, even if we admit a mental substance, there are other problems.

Assuming that there is a mental substance, Hume tells us that we must treat it as relevantly analogous to physical substance. The physical substance of a person disperses after death and loses its identity as a person. Why think that the mental substance would behave otherwise? If the body rots, disperses, and ceases to be human, why not say the same thing of the soul? If we reply by saying that mental substances are simple and immortal, then for Hume, this implies that they would also be non-generable, and so must have existed before our births. If this were true, we should have memories from before our births, which we clearly do not. Note that here we see Hume drawing on his considerations against miracles, implicitly rejecting the possibility of a system whereby God continuously and miraculously brings souls into existence. Finally, if the rationalists are right that thought implies eternal souls, then animals should have them as well since, in the Treatise, Hume argued that mental traits such as rationality obtain by degree throughout the animal world, rather than by total presence or total absence; but this is something that the Christians of Hume’s day explicitly denied. In this way, Hume’s metaphysical considerations turn the standard rationalist assumptions of the theists, specifically the Christian theists of his day, against them.

The moral considerations, however, require no such presuppositions beyond the traditional depictions of heaven and hell. Hume begins by considering two problems involving God’s justice. First, he addresses the defender of an afterlife who posits its existence as a theodicy, maintaining that there is an afterlife so that the good can be appropriately rewarded and the wicked appropriately punished. For reasons considered in detail below, Hume holds that we cannot infer God’s justice from the world, which means we would need independent reasons for positing an alternate existence. However, the success of the arguments discussed above would largely undercut the adequacy of such reasons. Second, Hume points out that this system would not be just in any case. To begin with, he claims it is unwarranted to put so much emphasis on this world if it is so fleeting and minor in comparison to an infinite afterlife. If God metes out infinite punishment for finite crimes, then God is omni-vindictive, and it seems equally unjust to give infinite rewards for finitely meritorious acts. According to Hume, most men are somewhere between good and evil, so what sense is there in making the afterlife absolute? Further, Hume raises difficulties concerning birth. If all but Christians of a particular sect are doomed to hell, for instance, then being born in, say, Japan, would be like losing a cosmic lottery, a notion difficult to reconcile with perfect justice. Finally, Hume emphasizes that punishment without purpose, without some chance of reformation, is not a satisfactory system, and should not be endorsed by a perfect being. Hence, Hume holds that considerations of an afterlife seem to detract from, rather than bolster, God’s perfection.

Lastly, there are the physical (empirical) considerations, which Hume identifies as the most relevant. First, he points out how deeply and entirely connected the mind and body are. If two objects work so closely together in every other aspect of their existence, then the end of one should also be the end of the other. Two objects so closely linked, and that began to exist together, should also cease to exist together. Second, again in opposition to the rationalist metaphysicians, he points out that dreamless sleep establishes that mental activity can be at least temporarily extinguished; we therefore have no reason to think that it cannot be permanently extinguished. His third consideration is that we know of nothing else in the universe that is eternal, or at least that retains its properties and identity eternally, so it would be strange indeed if there were exactly one thing in all the cosmos that did so. Finally, Hume points out that nature does nothing in vain. If death were merely a transition from one state to another, then nature would be incredibly wasteful in making us dread the event, in providing us with mechanisms and instincts that help us to avoid it, and so forth. That is, it would be wasteful for nature to place so much emphasis on survival. Because of these skeptical considerations, Hume posits that the only argument for an immortal soul is from special revelation, a source he rejects along with miracles.

5. The Design Argument

Having discussed Hume’s rejection of revealed theology, we now turn to his critiques of the arguments of Natural Theology, the most hopeful of which, for Hume, is the Design Argument. His assaults on the design argument come in two very different forms. In the Dialogues, Hume’s Philo provides many argument-specific objections, while Section XI of the Enquiry questions the fruitfulness of this type of project generally.

In the Dialogues, Cleanthes defends various versions of the design argument (based on order) and the teleological argument (based on goals and ends). Generally, he does not distinguish between the two, and they are similar in logical form: both are arguments by analogy. In analogical arguments, relevant similarities between two or more entities are used as a basis for inferring further similarities. In this case, Cleanthes draws an analogy between artifacts and nature: artifacts exhibit certain properties and have a designer/creator; parts, or the totality, of nature exhibit similar properties; therefore, we should infer a relevantly analogous designer/creator. Hume’s Philo raises many objections against such reasoning, most of which are still considered legitimate challenges to be addressed by contemporary philosophers of religion. Replies, however, will not be addressed here. Though Philo presents numerous challenges to this argument, they can be grouped under four broad headings: the scope of the conclusion, problems of weak analogy, problems with drawing the inference, and problems with allowing the inference. The first two types of problem are related in many cases, but not all. After the objections from the Dialogues are discussed, we will turn to Hume’s more general critique from the first Enquiry.

Scope of the Conclusion: Philo points out that, if the analogy is to be drawn between an artifact and some experienced portion of the universe, then the designer must be inferred only from the phenomena. That is, we can make merited conclusions about the creator based only on the experienced part of the universe that we treat as analogous to an artifact, and nothing beyond this. As Philo argues in Part V, since the experienced portion of the world is finite, we cannot reasonably infer an infinite creator. Similarly, our limited experience would not allow us to make an inference to an eternal creator, since everything we experience in nature is fleeting. An incorporeal creator is even more problematic, because Hume maintains that the experienced world is corporeal. In fact, even a unified, single creator becomes problematic if we are drawing an analogy between the universe and any type of complex artifact. If we follow someone like William Paley, who maintains that the universe is relevantly similar to a watch, then we must further pursue the analogy in considering how many people contributed to that artifact’s coming to be. Crafting a watch requires that many artificers work on various aspects of the artifact in order to arrive at a finished project. Finally, Philo insists that we also lack the ability to infer a perfect creator or a morally estimable creator, though the reasons for this will be discussed below in the context of the Problem of Evil. Given these limitations that we must place on the analogy, we are left with a very vague notion of a designer indeed. As Philo claims, a supporter of the design analogy is only “…able, perhaps, to assert, or conjecture, that the universe, sometime, arose from something like design: But beyond that position, he cannot ascertain one single circumstance, and is left afterward to fix every point on his [revealed] theology…” (D 5.12). This is Gaskin’s “attenuated deism” mentioned above. However, even weakening the conclusion to this level of imprecision still leaves a host of problems.

Problems of Weak Analogy: As mentioned above, many of Philo’s objections can be classified either as a problem with the scope of the conclusion or as a weak analogy. For instance, concluding an infinite creator from a finite creation would significantly weaken the analogy by introducing a relevant disanalogy, but the argument is not vulnerable in this way if the scope of the conclusion is properly restricted. However, beyond these problems of scope, Philo identifies two properties that serve to weaken the analogy but that cannot be discharged via a sufficient limitation of the conclusion. In Part X, Philo points out the apparent purposelessness of the universe. Artifacts are designed for a purpose. An artifact does something. It works toward some goal. Thus, there is a property that all artifacts have in common but that we cannot locate in the universe as a whole. For Philo, the universe is strikingly disanalogous to, for instance, a watch, precisely because the former is not observed to work toward some goal. This weakness cannot be discharged by restricting the conclusion, and any attempt to posit a purpose to the universe will either rely on revealed theology or be simply implausible. To show why Philo thinks this, take a few simplified examples: If we say that the universe exists “for the glory of God,” we not only beg the question about the existence of God, but we also saddle our conception of God with anthropomorphized attributes Hume would find unacceptable, such as pride and the need for recognition. Similar problems exist if we say that the universe was created for God’s amusement. However, if we change tactics and claim that the universe was created for the flourishing of humans, or any other species, then for Hume, we end up ignoring the phenomena in important ways, such as the numerous aspects of the universe that detract from human flourishing (mosquitoes, for instance) rather than contribute to it, and the vast portions of the universe that seem utterly irrelevant to human existence.

Beyond this, Philo finds another intractably weak point in the analogy between artifacts and natural objects: the fundamental difference between nature and artifacts. Philo holds that the more we learn about nature, the more striking the disanalogy between nature and artifacts. They are simply too fundamentally different. Consider, for instance, that many aspects of nature are self-maintaining and even self-replicating. Even if there are important analogies to be drawn between a deer and a watch, the dissimilarities, for Philo, will always outweigh them.

Problems with Drawing the Inference: There are further problems with the design inference that go beyond the mere dissimilarity of the analogs. Hume’s Philo raises two such objections based on experience. First, there is no clear logical relationship between order and a designer. In Part VII, Philo argues that we do in fact experience order without agency: an acorn growing into an oak tree shows that one does not need knowledge or intent to bestow order. Nor can we reply that the acorn was designed to produce a tree, for this is the very issue in question, and to import design in this way would beg the question. But if we can have order without a designer, then the mere presence of order cannot license an inference to a designer.

His second problem with making the design inference is that, like all inductive inferences, the design argument essentially involves a causal component. However, for Hume, knowledge of causal efficacy requires an experienced constant conjunction of phenomena; that is, only after we have seen that events of type B always follow events of type A do we infer a causal relationship from one to the other (see Hume: Causation). However, the creation of the universe necessarily would be a singular event. Since we do not have experience of multiple worlds coming into existence, causal inferences about any cosmogony are, for Hume, impossible to ground in experience. This objection is often interpreted as peculiar to Hume’s own philosophical framework, relying heavily on his account of causation, but the point can be made more generally while still raising a challenge for the design argument. Because of our limited knowledge of the origins, if any, of the universe (especially in the 18th century), it becomes metaphysical hubris to think that we can make accurate inferences pertaining to issues such as: its initial conditions, persistence conditions, what it would take to cause a universe, whether the event has or requires a cause, and so forth. This relates to Philo’s next objection.

Problems when the Inference is Allowed: The previous two objections teach us that there are multiple origins of order, and that we are in a poor epistemic state to make inferences about speculative cosmogony. Taking these two points together, it becomes possible to postulate many hypothetical origins of the universe that are, for Hume, on as solid a footing as that of a designer, but instead rely on a different principle of order. Though Philo indicates that there are many, he specifically identifies only four principles which have been experienced to produce order in our part of the universe alone: reason (that is, rational agency), instinct, generation, and vegetation. Though Cleanthes defends reason as the only relevant principle of order, Philo develops alternative cosmogonies based on vegetation, where the universe grows from a seed, and generation, where the universe is like an animal or is like something created instinctively, such as a spider’s web; but Philo should not be taken as endorsing any of these alternative cosmogonies. Instead, his point is that we have just as much reason to think that order can arise from vegetation as from rational agency, since we have experience of both; there is thus no obvious reason to privilege the inference to rational agency as the source of the order of the universe, for an analogy can be drawn just as well from any of these principles. If order can come from multiple sources, and we know nothing about the creation of the universe, then Cleanthes is not in a position to give one a privileged position over the others. This means that, if we are to follow Cleanthes in treating the design inference as satisfactory, then we should treat the other inferences as satisfactory as well. However, since we cannot accept multiple conflicting cosmogonies, Philo maintains that we should refrain from attempting any such inferences. As he says in a different context: “A total suspense of judgement is here our only reasonable resource” (D 8.12).

A second problem Philo raises with allowing the design inference is that doing so can lead to a regress. Let us assume that the designer inference is plausible, that is, that a complex, purposive system requires a designing mind as its principle of order. But a creative mind is itself a complex, purposive system as well. A mind is complex, and its various parts work together to achieve specific goals. Thus, if all such purposive systems require a designing mind as their principle of order, then it follows that we would need a designing mind for the designing mind as well. Using the same inference, we would need a designing mind for that mind, and so on. Hence, allowing that complex, purposive systems require a designing mind as their principle of order leads to an infinite regress of designing minds. In order to stop this regress while still maintaining the design inference, one must demand that the designer of the universe does not require a designer, and there are two ways to make this claim. Either one could say that the designing mind that created the universe is a necessary being whose existence does not require a causal explanation, or one could simply say that the designer’s existence is brute. Cleanthes rejects the former option in his refutation of Demea’s “argument a priori” and, more generally, Hume does not think that this form of necessity is coherent. The only option then is to declare that the designer’s existence is brute, and therefore does not require a designer for its explanation. However, if this is the case, and we are allowing brute, undesigned existences into our ontology, then Philo asks why not declare that the universe itself is the brute existence instead? If we are allowing one instance where complexity and purposiveness do not imply a designer, then why posit an extraneous entity based on what is for Philo a dubious inference, when parsimony should lead us to prefer a brute universe?

Setting aside the Problem of Evil for later, these are the major specific challenges Hume raises for the design argument in the Dialogues. However, in Section XI of the Enquiry, Hume generalizes these concerns into an argument that theology cannot support analogical inferences about the world. Call it the Inference Problem. Rather than raising specific objections against the design argument, the Inference Problem instead questions the fruitfulness of the project of natural theology generally. Roughly stated, the Inference Problem is that we cannot use facts about the world to argue for the existence of some conception of a creator, and then use that conception of the creator to reveal further facts about the world, such as the future providence of this world, and so forth.

First, it is important to realize that the Inference Problem is a special case of an otherwise unproblematic inference. In science, we make this type of inference all the time; for instance, using phenomena to infer laws of nature and then using those laws of nature to make further predictions. Since Hume is clearly a proponent of scientific methodology, we must ask why the creator of the universe is a special and problematic case. The short answer is because of the worry of the Dialogues discussed above, that the creation of the cosmos is necessarily a singular event. This means that the Inference Problem for a creator is a special case for two reasons: first, when inferring the existence and attributes of a creator deity, Hume demands that we use all available data, literally anything available in the cosmos that might be relevant to our depiction of the creator rather than limiting the scope of our inquiry to a specific subset of phenomena. Hence, the deity we posit would represent our best guess based on all available information, unlike the case of discovering specific laws. Second, because the creation was a singular event, Hume insists that we cannot use analogy, resemblance, and so forth, to make good inductive inferences beyond what we have already done in positing the deity to begin with. On account of these two unique factors, there is a special Inference Problem that will arise whenever we try to use our inferred notion of a creator in order to discover new facts about the world.

In order to better understand the Inference Problem, let us take a concrete example: inferring a creator deity who is also just. There are only two possibilities: either the totality of the available evidence of the experienced cosmos implies the existence of a just creator or it does not. If it does not, then we simply are not warranted in positing a just deity, and we therefore are not justified in assuming, for instance, that the deity’s justice will be revealed later, say in an afterlife. But if the evidence does imply a just creator deity (that is, the world is sufficiently just to allow the inference to a just creator), then Hume says we have no reason to think that a just afterlife is needed in order to supplement and correct an unjust world. In either case, says Hume, we are not justified in inferring further facts about the world based on our conception of the deity beyond what we have already experienced. Mutatis mutandis, this type of reasoning will apply to any conclusion drawn from natural theology. Our conception of the deity should be our best approximation based on the totality of available evidence. This means that, for Hume, there are only two possibilities: either any relevant data is already considered and included in inferring our conception of the creator to begin with, in which case we learn nothing new about the world; or the data is inconclusive and insufficient to support the inference to the conception of the deity, in which case we cannot reasonably make the inference at all. If the data is not already in hand, it cannot be recovered later by a permissible inference from the nature of the deity. If this is right, then the religious hypothesis of natural theology supplies no new facts about the world and is therefore explanatorily impotent.
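
The dilemma can be set out compactly. The regimentation is ours; let J be the claim that the total experienced evidence implies a just creator.

Either J or not-J.
If not-J: positing a just deity is unwarranted, so no inference to a compensating just afterlife is available.
If J: the experienced world already exhibits the relevant justice, so no corrective afterlife needs to be inferred.
Either way, nothing beyond the evidence already in hand is licensed.

The same constructive dilemma can be run for any attribute we might hope to draw out of an inferred deity, which is why Hume takes the religious hypothesis to be explanatorily idle.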

6. The Cosmological Argument

Hume couches his concerns about theological inference as emanating from problems with drawing an analogical design inference. Since this is not the only type of argument in natural theology, we must now consider Hume’s reasons for rejecting other arguments that support the existence of a creator deity. Hume never makes a clear distinction between what Immanuel Kant later dubbed ontological and cosmological arguments; instead, he lumps them together under the heading of arguments a priori. Note that this is not as strange as it might first appear, because although cosmological arguments are now uniformly thought of as a posteriori rather than a priori, this was not the case in Hume’s day. It took Hume’s own insights about the a posteriori nature of causation and of the Principle of Sufficient Reason to make us realize this. For Hume, what is common among such ontological and cosmological arguments is that they infer the existence of a necessary being. Hume seems to slip here, failing to distinguish between the logical necessity of the deity concluded by ontological arguments and the metaphysical necessity of the deity concluded by cosmological arguments. He therefore uniformly rejects all such arguments due to the incoherence of a necessary being, a rejection found in both the Dialogues and the first Enquiry.

In Part IX of the Dialogues, Demea presents his “argument a priori,” a cosmological argument based on considerations of necessity and contingency. The argument was intentionally similar to a version proffered by Samuel Clarke, but is also similar to arguments defended by both Leibniz and Aquinas. Before discussing the rejection of this argument, it is worth noting that it is not Philo who rejects Demea’s “argument a priori” but Cleanthes. Philo simply sits back and lets the assault occur without his help. This is telling because Cleanthes is a theist, though one who is, for Hume, ultimately misguided about the success of the design argument. The implication, then, is that for Hume, even the philosophical theist who erroneously believes that natural theology can arrive at an informative conception of a deity should still reject the cosmological argument as indefensible.

Cleanthes’ rejection of the argument a priori is ultimately fourfold. The first problem he suggests is a Category Mistake involved in trying to show that the existence of God is something that can be known a priori. For Hume and for Cleanthes, claims about existence are matters of fact, and matters of fact can never be demonstrated a priori. The important distinction between relations of ideas and matters of fact is that the denial of the former is inconceivable, whereas the denial of the latter is not. Hume maintains that we can always imagine a being not existing without contradiction; hence, all existential claims are matters of fact. Cleanthes finds this argument “entirely decisive” and is “willing to rest the whole controversy upon it” (D 9.5), and it is a point Philo affirms in Part II. Hume argues similarly in the first Enquiry, maintaining that, “The non-existence of any being, without exception, is as clear and distinct an idea as its existence” (EHU 12.28). Hence, the denial of any existential claim is conceivable, and every such claim must be a matter of fact.

A related objection is that, since, for Hume, we can always conceive of a being not existing, there can be nothing essential about its existence. It is therefore not the type of property that can be found in a thing’s essence. Hume’s Cleanthes goes so far as to imply that the appellation “necessary existence” actually has no “consistent” meaning and therefore cannot be used in a philosophically defensible argument.

Thirdly, there is a worry parallel to the one raised above in connection with the design inference. Even if the inference is correct and we must posit a causeless being, this does not imply that this being is the deity. The inference is only to a necessary being, and for Philo, it is at least as acceptable to posit the universe itself as necessary in this way rather than positing an extra entity above and beyond it. This is true whether we posit a necessary being in order to stop a designer regress, as above, or whether we posit it to explain the contingent beings in the universe.

Finally, Hume finds the inference itself dubious. A crucial premise of the argument a priori is that an infinite regress is impossible because it violates the Principle of Sufficient Reason. However, Cleanthes takes issue with this claim. Imagine an infinitely long chain in which each event in that chain is explained through the previous members of the series. Note that in this picture, every member of the series is explained, because for any given member, there is always a prior set of members that fully explains it; but if each member of the series has been explained, then the series itself has been explained. It is unnecessary and inappropriate to insist on a further explanation of the series as a whole. For these reasons, Hume concludes that, “The existence, therefore, of any being can only be proved by arguments from its cause or its effect” (EHU 12.29).
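
Cleanthes’ point about the chain can be put a bit more precisely. The regimentation is ours, not Hume’s: suppose the series stretches back infinitely, and that each member e_n is explained by the set of its predecessors {e_m : m < n}. Then every member of the series has an explanation, so no demand of the Principle of Sufficient Reason goes unmet member by member. To ask in addition, “What explains the series as a whole?” is, on this reading, to treat the series as a further thing over and above its members, and that is just what Cleanthes denies we must do.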

7. The Problem of Evil

In addition to his refutations of the arguments of natural theology, Hume gives positive reasons for rejecting a theistic deity with the Problem of Evil. Hume holds that the evidence of the Problem of Evil counts much more significantly against the theist’s case than the other objections that he raises against a designer, and it is in this area that Philo claims to “triumph” over Cleanthes. Hume’s discussion of the Problem takes place mainly in Parts X and XI of the Dialogues. The discussion is quite thorough, and includes presentations of both the Logical Problem of Evil and the Evidential Problem of Evil. Philo also considers and ultimately rejects several general approaches to solutions.

In Part X, Demea becomes Philo’s unwitting accomplice in generating the Problem of Evil. The two join together to expound an eloquent presentation of moral and natural evil, but with different motives. Demea presents evil as an obstacle that can only be surmounted with the assistance of God. Religion becomes the only escape from this brutish existence. Philo, however, raises the old problem of Epicurus, that the existence of evil is incompatible with a morally perfect and omnipotent deity. Hence, in Part X, Philo defends a version of the logical Problem. Although Philo ultimately believes that, “Nothing can shake the solidity of this reasoning, so short, so clear, so decisive”, he is “contented to retire still from this entrenchment” and, for the sake of argument, is willing to “allow, that pain or misery in man is compatible with infinite power and goodness in the deity” (D 10.34-35, Hume’s emphasis). Philo does not believe that a solution to the logical Problem of Evil is possible but, by granting this concession, he shifts the discussion to the evidential Problem in Part XI.

Hume generally presents the evidential Problem of Evil in two ways: in terms of prior probability and in terms of the likelihood of gratuitous evil. Taking them in order, Demea first hypothesizes a stranger to this world who is dropped into it and shown its miseries. Philo continues along these lines with a similar example in which someone is first shown a house full of imperfections, and is then assured that each flaw prevents a more disastrous structural flaw. For Hume, the lesson of both examples is the same. Just as the stranger to the world would be surprised to find that this world was created by a perfect being, the viewer of the house would be surprised to learn that its builder was considered a great or perfect architect. Philo asks, “Is the world considered in general…different from what a man…would, beforehand, expect from a very powerful, wise, and benevolent Deity?” (D 11.4, Hume’s emphasis). Since the world would be surprising rather than expected, we have reason to think that a perfect creator is unlikely, and that the phenomena do not support such an inference. Moreover, pointing out that each flaw prevents a more disastrous problem does not improve matters, according to Philo.
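
Philo’s appeal to prior expectation is sometimes regimented in modern probabilistic terms. This gloss is anachronistic, in the spirit of later probability-minded commentators rather than anything in Hume’s own text: let E be the total evidence of the world’s mixed goods and evils, H the hypothesis of a very powerful, wise, and benevolent creator, and H′ a rival hypothesis such as an indifferent first cause. Philo’s stranger illustrates that Pr(E | H) is low: this world is not what one would expect beforehand on H. If Pr(E | H′) is higher, then E favors H′ over H, since the likelihood ratio Pr(E | H) / Pr(E | H′) is less than one. On this reading, the observed distribution of good and evil disconfirms the hypothesis of a perfect creator rather than supporting it.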

Apart from these considerations from prior probability, Philo also argues for the likelihood of gratuitous evil. To this end, Philo presents four circumstances that account for most of the natural evil in the world. Briefly, these are a) the fact that pain is used as a motivation for action, b) that the world is conducted by general laws, c) that nature is frugal in giving powers, and d) that nature is “inaccurate,” that is, more or less than the optimum level of a given phenomenon, such as rain, can and does occur. As Philo presents these sources of evil during the discussion of the evidential Problem of Evil, his point must be interpreted accordingly. In presenting these sources, all Philo needs to show is that it is likely that at least one of these circumstances could have been modified so as to produce less suffering. For instance, regarding the third circumstance, it seems that, were humans more resistant to hypothermia, this would make for a slightly better world. In this way, Philo bolsters the likelihood of gratuitous evil by arguing that things could easily have been better than they are.

Having presented the Problem of Evil in these ways, Hume explicitly rejects some approaches to a solution while implicitly rejecting others. First, Demea appeals to Skeptical Theism, positing a deity that is moral in ways that we cannot fathom; Hume rebuffs this position in several ways. To begin with, Cleanthes denies any appeal to divine mystery, insisting that we must be empiricists rather than speculative theologians. Next, Hume’s Cleanthes insists that, if we make God too wholly other, then we ultimately abandon religion. Hence, in Part XI Cleanthes presents the theist as trapped in a dilemma: either the theist anthropomorphizes the morality of the deity and, in doing so, is forced to confront the Problem of Evil, or he abandons human analogy and thereby “abandons all religion, and retain[s] no conception of the great object of our adoration” (D 11.1). For Cleanthes, if we cannot fathom the greatness of God, then the deity cannot be an object of praise, nor can we use God to inform some notion of morality. But without these interactions, there is little left for religion to strive toward. We might add a third rejection of the skeptical theist approach: to rationally reject the Problem of Evil without providing a theodicy, we must have independent grounds for positing a good deity. However, Hume has been quite systematic in his attempts to remove these other grounds, rejecting the design and cosmological arguments earlier in the Dialogues, rejecting miracles (and therefore divine revelation) in the Enquiry, and rejecting any pragmatic justification in many works by drawing out the harms of religion. Hence, for Hume, an appeal to divine mystery cannot satisfactorily discharge the Problem of Evil.

Turning to other solutions, Hume does not consider specific theodicies in the Dialogues. Instead, he seems to take the arguments from prior probability and the four circumstances as counting against most or all of them. Going back to the house example, Hume does not seem to think that pointing out that the flaws serve a purpose by preventing more disastrous consequences is sufficient to exonerate the builder. A perfect being should at least be able to reduce the number of flaws or the amount of suffering from its current state. Furthermore, recall that, in focusing on the empirical and in rejecting revealed texts, Hume would not accept any retreat to doctrine-specific theodicies, such as the Fall Theodicy or the Satan Theodicy.

Given the amount of evil in the world, Philo ultimately holds that an indifferent deity best explains the universe. There is too much evil for a good deity, too much good for an evil deity, and too much regularity for multiple deities.

8. The Psychology of Religious Belief

Hume wrote the Dialogues roughly in tandem with another work, the Natural History. In its introduction, Hume posits that there are two types of inquiry to be made into religion: its foundations in reason and its origin in human nature. While the Dialogues investigates the former, the explicit task of the Natural History is to explore the latter. In the Natural History, he discharges the question of religion’s foundations in reason by gesturing at the design argument (and the interpretive puzzles discussed above regarding Hume’s views still apply) before focusing on his true task: how various passions give rise to vulgar or false religion.

According to Hume, all religion began as polytheism. This was due largely to an ignorance of nature and a tendency to assign agency to things. In barbarous times, we did not have the time or ability to contemplate nature as a whole, as uniform. On account of this, we did not understand natural causes generally. In the absence of such understanding, human nature is such that we tend to assign agency to effects, since that is the form of cause and effect that we are most familiar with. This has been well documented in children, who will, for instance, talk of a hammer wanting to pound nails. The tendency is especially strong with effects that seem to break regularity. Seeing two hundred pounds of meat seemingly moving in opposition to the laws of gravity is not a miracle, but just a person walking. Primitive humans focused on these breaks in apparent regularity rather than on the regularity itself. While focusing on the latter would lead us to something like a design argument, focusing on the former brings about polytheism. Irregularity can be beneficial, such as a particularly bountiful crop, or detrimental, such as a drought. Thus, on his account, as we exercise our propensity to assign agency to irregularities, a variety of effects gives rise to a variety of anthropomorphized agents. We posit deities that help us and deities that oppose us.

Eventually, Hume says, polytheism gives way to monotheism not through reason, but through fear. In our obsequious praising of these deities, motivated by fear rather than admiration, we dare not assign them limitations, and it is from this fawning praise that we arrive at a single, infinite deity who is perfect in every way, thus transforming us into monotheists. Were this monotheism grounded in reason, adherence to it would be stable. Since it is not, there is “flux and reflux,” an oscillation back and forth between anthropomorphized deities with human flaws and a perfect deity. This is because, as we get farther from anthropomorphism, we make our deity insensible to the point of mysticism. Indeed, as Hume’s Cleanthes points out, this is to destroy religion. Therefore, to maintain a relatable deity, we begin once more to anthropomorphize and, when this is taken too far, we once more arrive at vulgar anthropomorphic polytheism.

Hume insists that monotheism, while more reasonable than polytheism, is still generally practiced in the vulgar sense; that is, as a product of the passions rather than of reason. As he repeatedly insists, the corruption of the best things leads to the worst, and monotheism has two ugly forms which Hume calls “superstition” and “enthusiasm.” Discussed in both the Natural History and the essay “Of Superstition and Enthusiasm,” both of these corrupt forms of monotheism are grounded in inappropriate passions rather than in reason. If we believe that we have invisible enemies, agents who wish us harm, then we try to appease them with rituals, sacrifices, and so forth. This gives rise to priests who serve as intermediaries and petitioners for these invisible agents. This emphasis on fear and ritual is the hallmark of Hume’s “superstition,” of which the Catholicism of his day was his main example. Superstition arises from the combination of fear, melancholy, and ignorance.

Enthusiasm, on the other hand, comes from excessive adoration. In the throes of such obsequious praise, one feels a closeness to the deity, as if one were a divine favorite. This emphasis on perceived divine selection is the hallmark of Hume’s “enthusiasm,” a charge he leveled at many forms of the Protestantism of his day. Enthusiasm thereby arises from the combination of hope, pride, presumption, imagination, and ignorance.

In this way, Hume identifies four different forms of “false” or “vulgar” religion. The first is polytheism, which he sometimes calls “idolatry.” Then there are the vulgar monotheisms: superstition, enthusiasm, and mysticism. Though Hume does not explicitly call the last a vulgar religion, he does insist that it must be faith-based, and therefore does not have a proper grounding in reason. True religion, by contrast, supports the “principles of genuine theism,” and seems to consist mainly in assigning a deity as the source of nature’s regularity. Note that this entails that breaks in regularity, such as miracles, count against genuine theism rather than for it. In the Dialogues, Philo identifies the essence of true religion as the claim “that the cause or causes of order in the universe probably bear some remote analogy to human intelligence” (D 12.33). This deity is stripped of the traits that make the design analogy weak, and is further stripped of human passions as, for Philo, it would be absurd to think that the deity has human emotions, especially a need to be praised. Cleanthes, however, supplements his version of true religion by adding that the deity is “perfectly good” (D 12.24). Because of this added moral component, Cleanthes sees religion as giving morality and order, a position that both Philo and Hume, in the Enquiry Concerning the Principles of Morals, deny. Instead, the true religion described by both Hume and Philo is independent of morality. As Yandell (1990: 29) points out, it does not superimpose new duties and motives on the moral framework. True religion does not, therefore, affect morality, and does not lead to “pernicious consequences.” In fact, it does not seem to inform our actions at all. Because true religion cannot guide our actions, Philo holds that the dispute between theists and atheists is “merely verbal.”

9. The Harms of Religion

A historian by profession, Hume spent much effort in his writings examining religion in its less savory aspects. He deplored the Crusades, and saw Great Britain torn asunder on multiple occasions over the disputes between Catholicism and Protestantism. Based on these historical consequences, Hume saw enthusiasm as affecting society like a violent storm, doing massive damage quickly before petering out. Superstition, however, he saw as a more lingering corruption, involving the invasion of governments, and so forth. Hume argued that, because both belief systems are monotheistic, both must be intolerant by their very nature. They must reject all other deities and all other ways of appeasing those deities, unlike polytheism, which, having no fixed dogma, sits lighter on men’s minds. Generally, Hume held that religion, especially popular monotheism, does more harm than good, and he thereby develops a critique of religion based on its detrimental consequences.

Yandell (1990: 283) questions the methodology of such an attack. For him, it is not clear what religion’s socio-political consequences tell us about its truth. However, if we view Hume’s attack against religion as systematic, then consequence-based critiques fulfill a crucial role. Setting aside faith-based accounts, there seem to be three ways to justify one’s belief in religion: through revealed theology, through natural theology, or via pragmatic advantage. Hume denies revealed theology, as his argument against miracles, if successful, entails the unsustainability of most divine experiences and of revealed texts. The Dialogues are his magnum opus on natural theology, working to undermine the reasonability of religion and therefore the appeal to natural theology. If these Humean critiques are successful, then the only remaining path for justifying religious belief is from a practical standpoint, that we are somehow better off for having it or for believing it. Cleanthes argues this way in Part XII of the Dialogues, insisting that corrupt religion is better than no religion at all. However, if Hume is right that religion detracts from rather than contributes to morality, and that its consequences are overall negative, then Hume has closed off this avenue as well, leaving us nothing but faith, or perhaps human nature, on which to rest our beliefs.

10. References and Further Reading

Hume wrote all of his philosophical works in English, so there is no concern about the accuracy of an English translation. For the casual reader, any edition of his work should be sufficient. However, Oxford University Press has recently begun to produce the definitive Clarendon Edition of most of his works. For the serious scholar, these editions are indispensable, because they contain copious helpful notes about Hume’s changes across editions, and so forth. The general editor of the series is Tom L. Beauchamp.

a. Hume’s Works on Religion

  • Hume, David. A Treatise of Human Nature. Clarendon Press, Oxford, U.K., 2007, edited by David Fate Norton and Mary J. Norton. (T)
  • Hume, David. An Enquiry Concerning Human Understanding. Clarendon Press, Oxford, U.K., 2000, edited by Tom L. Beauchamp. (EHU)
  • Hume, David. An Enquiry Concerning the Principles of Morals. Reprinted in David Hume: Enquiries, edited by L. A. Selby-Bigge, third edition, Clarendon Press, Oxford, U.K., 2002. (EPM)
  • Hume, David. Dialogues Concerning Natural Religion. In David Hume Dialogues and Natural History of Religion. Oxford University Press, New York, New York, 1993. (D)
  • Hume, David. Essays: Moral, Political, and Literary. Edited by Eugene F. Miller. Liberty Fund Inc., Indianapolis, Indiana, 1987. (ES)
  • Hume, David. Natural History of Religion. Reprinted in A Dissertation on the Passions, The Natural History of Religion, The Clarendon Edition of the Works of David Hume, Oxford University Press, 2007. (NHR)
  • Hume, David. New Letters of David Hume. Edited by Raymond Klibansky and Ernest C. Mossner. Oxford University Press, London, England, 1954. (NL)
  • Hume, David. The History of England. Liberty Classics, the Liberty Fund, Indianapolis, Indiana, 1983. (In six volumes) (H1-6)
  • Hume, David. The Letters of David Hume. Edited by J. Y. T. Greig, Oxford University Press, London, England, 1932. (In two volumes) (L1-2)

b. Works in the History of Philosophy

  • Broad, C. D. “Hume’s Theory of the Credibility of Miracles”, Proceedings of the Aristotelian Society, New Series, Volume 17 (1916-1917), pages 77-94.
    • This is one of the earliest contemporary analyses of Hume’s essay on miracles. It raises objections that have become standard difficulties, such as the circularity of the Caricature Argument and the seeming incompatibility of Hume’s strong notion of the laws of nature with his previous insights about causation.
  • Butler, Ronald J. “Natural Belief and Enigma in Hume,” Archiv für Geschichte der Philosophie, 1960, pages 73-100.
    • Butler is the first scholar to argue that religious belief, for Hume, is natural or instinctual. This would mean that, though adherence to a deity is not a product of reason, it may nevertheless be supported as doxastically appropriate. The argument itself has been roundly criticized due to problematic entailments, such as there being no atheists, but the originality of the idea makes the piece worth reading.
  • Coleman, Dorothy. “Baconian Probability and Hume’s Theory of Testimony.” Hume Studies, Volume 27, Number 2, November 2001, pages 195-226.
    • Coleman is an extremely careful, accurate, and charitable reader of Hume on miracles. She excels at clearing up misconceptions. In this article, she refocuses Hume’s argument from an anachronistic Pascalian/Bayesian model to a Baconian one, and argues that the “straight rule” of Earman and others is irrelevant to Hume, who insists that probability is only invoked when there has been a contrariety of phenomena.
  • Coleman, Dorothy. “Hume, Miracles, and Lotteries”. Hume Studies. Volume 14, Number 2, November 1988, pages 328-346.
    • Coleman is an extremely careful, accurate, and charitable reader of Hume on miracles. She excels at clearing up misconceptions. In this article, she responds to criticisms of Hambourger and others that Hume’s probability calculus in support of the miracles argument commits him to absurdities.
  • Earman, John. Hume’s Abject Failure—The Argument Against Miracles. Oxford University Press, New York, New York, 2000.
    • In this extremely critical work, Earman argues that the miracles argument fails on multiple levels, especially with regard to the “straight rule of induction.” The work is highly technical, interpreting Hume’s argument using contemporary probability theory.
  • Fogelin, Robert J. A Defense of Hume on Miracles. Princeton University Press, Princeton New Jersey, 2003.
    • In this book, Fogelin takes on two tasks: reconstructing Hume’s argument of Part X, and defending it from the recent criticisms of Johnson and Earman. He provides a novel reading in which Part I sets epistemic standards of credulity while Part II shows that miracles fall short of this standard. The subsequent defense relies heavily on this reading, and largely stands or falls based on how persuasive the reader finds Fogelin’s interpretation.
  • Garrett, Don. Cognition and Commitment in Hume’s Philosophy. Oxford University Press. New York, New York, 1997.
    • This is a great introduction to some of the central issues of Hume’s work. Garrett surveys the various positions on each of ten contentious issues in Hume scholarship, including the miracles argument, before giving his own take.
  • Gaskin, J.C.A. Hume’s Philosophy of Religion—Second Edition. Palgrave-MacMillan, 1988.
    • This is perhaps the best work on Hume’s philosophy of religion to date on account of both its scope and careful analysis. This work is one of only a few to provide an in-depth treatment of the majority of Hume’s writings on religion rather than focusing on one work. Though points of disagreement were voiced above, this should not detract from the overall caliber of Gaskin’s analysis, which is overall fair, careful, and charitable. The second edition is recommended because, in addition to many small improvements, there are significant revisions involving Philo’s Reversal.
  • Geisler, Norman L. “Miracles and the Modern Mind”, in In Defense of Miracles: A Comprehensive Case for God’s Action in History, edited by Douglas Geivett and Gary R. Habermas, InterVarsity Press, Downers Grove, Illinois, 1997, pages 73-85.
    • In this article, Geisler raises an important worry that Hume cannot draw a principled distinction between the miraculous and the merely marvelous. If this is the case, Hume must reject the marvelous as well, but this would have the disastrous consequence of stagnating science.
  • Hambourger, Robert. “Belief in Miracles and Hume’s Essay.” Noûs, Volume 14, November 1980, pages 587-604.
    • In this essay, Hambourger lays out a problem known as the lottery paradox, in which he tries to show that a commitment to Humean probabilistic doxastic assent leads to counterintuitive consequences.
  • Holden, Thomas. Spectres of False Divinity. Oxford University Press, Oxford, U.K., 2010.
    • In this careful work, Holden argues that Hume goes beyond mere skepticism to “moral atheism,” the view that the deity cannot have moral attributes. He gives a valid argument supporting this and shows how Hume supports each premise, drawing on a wide variety of texts.
  • Huxley, Thomas Henry. Hume. Edited by John Morley, Dodo Press, U.K., 1879.
    • Huxley is an early commentator on Hume, and this work is the first to raise several worries with Hume’s miracles argument.
  • Johnson, David. Hume, Holism, and Miracles. Cornell University Press, Ithaca, New York, 1999.
    • This is another recent critique of Hume’s account of miracles. Johnson’s work is more accessible than Earman’s, and it is novel in the sense that it addresses several different historical and contemporary reconstructions of Hume’s argument.
  • Kemp Smith, Norman. (ed.) Dialogues Concerning Natural Religion. The Bobbs-Merrill Company, Inc., Indianapolis, Indiana, 1947.
    • In Kemp Smith’s edition of Hume’s Dialogues, he provides extensive interpretation and commentary, including his argument that Hume is represented entirely by Philo and that seeming evidence to the contrary merely serves to maintain stylistic “dramatic balance.”
  • Levine, Michael. Hume and the Problem of Miracles: A Solution. Kluwer Academic Publishers, Dordrecht, Netherlands, 1989.
    • Levine argues that Hume’s miracles argument cannot be read independently of his treatment of causation, and that the two are inconsistent. Nevertheless, a Humean argument can be made against belief in the miraculous.
  • Livingston, Donald W. Hume’s Philosophy of Common Life. University of Chicago Press, Chicago, Illinois, 1984.
    • This is one of the standard explications of Humean causal realism. It stresses Hume’s position that philosophy should conform to and explain common beliefs rather than conflict with them. It is included here because, in the course of his project, Livingston includes a helpful discussion of Humean laws of nature.
  • Paley, William. A View of the Evidences of Christianity, in The Works of William Paley, Edinburgh, 1830.
    • Paley is the first to attribute the Caricature Argument to Hume.
  • Pike, Nelson. Dialogues Concerning Natural Religion, Bobbs-Merrill Company Inc., Indianapolis, IN, 1970.
    • In Pike’s edition of Hume’s Dialogues, he provides extensive interpretation and commentary, as well as a text-based critique of Kemp Smith’s position.
  • Penelhum, Terence. “Natural Belief and Religious Belief in Hume’s Philosophy.” The Philosophical Quarterly, Volume 33, Number 131, 1983.
    • Penelhum previously offered a careful argument that some form of religious belief, for Hume, is natural. However, unlike Butler, he is not committed to the view that religious beliefs are irresistible and necessary for daily life. In this more recent work, he confronts some difficulties with the view and updates his position.
  • Swinburne, Richard. The Concept of Miracle. Macmillan, St. Martin’s Press, London, U.K., 1970.
    • Though Swinburne is generally critical of Hume’s position, he is a careful and astute reader. In this general defense of miracles, his reconstruction and critique of Hume is enlightening.
  • Tweyman, Stanley. “Scepticism and Belief in Hume’s Dialogues Concerning Natural Religion.” International Archives of the History of Ideas, Martinus Nijhoff Publishers, 1986.
    • Tweyman presents a holistic reading of the Dialogues, starting with a dogmatic Cleanthes who is slowly exposed to skeptical doubt, a doubt that must ultimately be corrected by the common life. Tweyman ultimately argues that belief in a designer is natural for Hume.
  • Wieand, Jeffery. “Pamphilus in Hume’s Dialogues”, The Journal of Religion, Volume 65, Number 1, January 1985, pages 33-45.
    • Wieand is one of the few recent scholars who argue against the Hume-as-Philo reading and for a Hume as Cleanthes/Pamphilus view. This interpretation focuses largely on the role of the narrator and Pamphilus’ discussion about the dialogue form.
  • Yandell, Keith E. Hume’s “Inexplicable Mystery”—His Views on Religion. Temple University Press, Philadelphia, Pennsylvania, 1990.
    • Apart from Gaskin, Yandell’s work is the only other major comprehensive survey of Hume on religion. The work is highly technical and highly critical, and is sometimes more critical than accurate. However, he at least provides the general form of some theistic responses to Hume and identifies a few important lapses on Hume’s part, such as a lack of response to religious experience.
  • Yoder, Timothy S. Hume on God. Continuum International Publishing, New York, New York, 2008.
    • Yoder’s text is an extended argument, defending Hume’s “amoral theism”. He makes important contributions in his treatment of false/vulgar religion, the background for English deism, and Hume’s use of irony.


Author Information

C. M. Lorkowski
Email: clorkows@kent.edu
Kent State University, Trumbull Campus
U. S. A.

Phenomenal Conservatism

Phenomenal Conservatism is a theory in epistemology that seeks, roughly, to ground justified beliefs in the way things “appear” or “seem” to the subject who holds a belief. The theory fits with an internalistic form of foundationalism—that is, the view that some beliefs are justified non-inferentially (not on the basis of other beliefs), and that the justification or lack of justification for a belief depends entirely upon the believer’s internal mental states. The intuitive idea is that it makes sense to assume that things are the way they seem, unless and until one has reasons for doubting this.

This idea has been invoked to explain, in particular, the justification for perceptual beliefs and the justification for moral beliefs. Some believe that it can be used to account for all epistemic justification. It has been claimed that the denial of Phenomenal Conservatism (PC) leaves one in a self-defeating position, that PC naturally emerges from paradigmatic internalist intuitions, and that PC provides the only simple and natural solution to the threat of philosophical skepticism. Critics have objected that appearances should not be trusted in the absence of positive, independent evidence that appearances are reliable; that the theory allows absurd beliefs to be justified for some subjects; that the theory allows irrational or unreliable cognitive states to provide justification for beliefs; and that the theory has implausible implications regarding when and to what degree inferences produce justification for beliefs.

Table of Contents

  1. Understanding Phenomenal Conservatism
    1. Species of Appearance
    2. Defeasibility
    3. Kinds of Justification
    4. Comparison to Doxastic Conservatism
  2. The Nature of Appearance
    1. The Belief that P
    2. The Disposition to Believe that P
    3. The Belief that One Has Evidence for P
    4. The Experience View
    5. Appearance versus Acquaintance
  3. Arguments for Phenomenal Conservatism
    1. Intuitive Internalist Motivation
    2. An Internal Coherence Argument
    3. The Self-Defeat Argument
    4. Avoiding Skepticism
    5. Simplicity
  4. Objections
    1. Crazy Appearances
    2. Metajustification
    3. Cognitive Penetration and Tainted Sources
    4. Inferential Justification
  5. Summary
  6. References and Further Reading

1. Understanding Phenomenal Conservatism

a. Species of Appearance

The following is a recent formulation of the central thesis of phenomenal conservatism:

PC If it seems to S that P, then, in the absence of defeaters, S thereby has at least some justification for believing that P (Huemer 2007, p. 30; compare Huemer 2001, p. 99).

The phrase “it seems to S that P” is commonly understood in a broad sense that includes perceptual, intellectual, memory, and introspective appearances. For instance, as I look at the squirrel sitting outside the window now, it seems to me that there is a squirrel there; this is an example of a perceptual appearance (more specifically, a visual appearance). When I think about the proposition that no completely blue object is simultaneously red, it seems to me that this proposition is true; this is an intellectual appearance (more specifically, an intuition). When I think about my most recent meal, I seem to remember eating a tomatillo cake; this is a mnemonic (memory) appearance. And when I think about my current mental state, it seems to me that I am slightly thirsty; this is an introspective appearance.

b. Defeasibility

Appearances sometimes fail to correspond to reality, as in the case of illusions, hallucinations, false memories, and mistaken intuitions. Most philosophers agree that logically, this could happen across the board – that is, the world as a whole could be radically different from the way it appears. These observations do not conflict with phenomenal conservatism. Phenomenal conservatives do not hold that appearances are an infallible source of information, or even that they are guaranteed to be generally reliable. Phenomenal conservatives simply hold that to assume things are the way they appear is a rational default position, which one should maintain unless and until grounds for doubt (“defeaters”) appear. This is the reason for the phrase “in the absence of defeaters” in the above formulation of PC (section 1a).

These defeaters may take two forms. First, there might be rebutting defeaters, that is, evidence that what appears to be the case is in fact false. For instance, one might see a stick that appears bent when half-submerged in water. But one might then feel the stick, and find that it feels straight. The straight feel of the stick would provide a rebutting defeater for the proposition that the stick is bent.

Second, there might be undercutting defeaters, that is, evidence that one’s appearance (whether it be true or false) is unreliable or otherwise defective as a source of information. For instance, suppose one learns that an object that appears red is in fact illuminated by red lights. The red lighting is not by itself evidence that the object isn’t also red; however, the red lighting means that the look of the object is not a reliable indicator of its true color. Hence, the information about the unusual lighting conditions provides an undercutting defeater for the proposition that the object is red.

c. Kinds of Justification

Epistemologists commonly draw a (misleadingly named) distinction between “propositional justification” and “doxastic justification”, where propositional justification is justification that one has for believing something (whether or not one in fact believes it) and doxastic justification is justification that an actual belief possesses. The distinction is commonly motivated by pointing out that a person might have good reasons to believe a proposition and yet not believe it for any of those reasons, but instead believe it for some bad reason. For instance, I might be in possession of powerful scientific evidence supporting the theory of evolution, but yet my belief in the theory of evolution might actually be based entirely upon trust in the testimony of my tarot card reader. In that case, I would be said to have “propositional justification” but not “doxastic justification” for the theory of evolution.

It is commonly held that to have doxastic justification for P, an individual must satisfy two conditions: first, the individual must have propositional justification for P; second, the individual must base a belief that P on that propositional justification (or whatever confers that propositional justification). If we accept this view, then the phenomenal conservative should hold (i) that the appearance that P gives one propositional justification, in the absence of defeaters, for believing that P, and (ii) that if one believes that P on the basis of such an undefeated appearance, one thereby has doxastic justification for P.

Phenomenal conservatism was originally advanced as an account of foundational, or noninferential, justification (Huemer 2001, chapter 5). That is, it was advanced to explain how a person may be justified in believing that P without basing the belief that P on any other beliefs. Some hold that a variation of phenomenal conservatism may also be used to account for inferential justification – that is, that even when a person believes that P on the basis of other beliefs, the belief that P is justified in virtue of appearances (especially the “inferential appearance” that in the light of certain premises, P must be or is likely to be true) (Huemer 2013b, pp. 338-41); this last suggestion, however, remains controversial even among those sympathetic to PC.

d. Comparison to Doxastic Conservatism

A related but distinct view, sometimes called “epistemic conservatism” but better labeled “doxastic conservatism”, holds that a person’s merely believing that P gives that person some justification for P, provided that the person has no grounds for doubting that belief (Swinburne 2001, p. 141). (Etymological note: the term “doxastic” derives from the Greek word for belief [doxa], while “phenomenal” derives from the Greek word for appearance [phainomenon].)

Doxastic conservatism is an unpopular view, as it seems to endorse circular reasoning, or something very close to it. A thought experiment due to Richard Foley (1983) illustrates the counterintuitiveness of doxastic conservatism: suppose that S has some evidence for P which is almost but not quite sufficient to justify P. Suppose that S forms the belief that P anyway. If doxastic conservatism is correct, it seems, then as soon as S formed this belief, it would immediately become justified, since in addition to the evidence S already had for P, S would now have his belief that P serving as a source of justification, which would push S over the threshold for justified belief.
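
The worry can be made vivid with invented numbers; the figures and the additive model here are ours, not Foley’s, and serve only to illustrate the structure of his counterexample. Suppose justified belief requires a degree of justification of at least 0.90, that S’s evidence for P confers 0.85, and that, per doxastic conservatism, merely holding the belief confers a further 0.10.

Evidence alone: 0.85 < 0.90, so S is not yet justified in believing P.
Evidence plus belief: 0.85 + 0.10 = 0.95 ≥ 0.90, so the belief counts as justified.

On these assumptions, the belief bootstraps itself over the threshold: the very act of believing, unjustified at the moment it occurs, instantly renders itself justified.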

The phenomenal conservative aims to avoid this sort of implausibility. PC does not endorse circular reasoning, since it does not hold that a belief (or any other mental state) may justify itself; it holds that an appearance may justify a belief. Provided that no appearance is a belief, this view avoids the most obviously objectionable form of circularity, and it avoids the Foley counterexample. Suppose that S has almost enough justification to believe that P, and then, in addition, S acquires an appearance that P. Assume also that S has no defeaters for a belief in P. In this case, it is not counterintuitive to hold that S would then be justified in believing that P.

2. The Nature of Appearance

Phenomenal conservatism ascribes justificatory significance to appearances. But what are appearances? Philosophers have taken a number of different views about the nature of appearances, and which view one takes may dramatically affect the plausibility of PC. In this section, we consider some views philosophers have taken about what it is for it to “seem to one that P.”

a. The Belief that P

Here is a very simple theory: to say that it seems to one that P is to report a tentative sort of belief that P (Chisholm [1957, ch. 4] suggested something in this neighborhood). This, however, is not how “seems” is understood by phenomenal conservatives when they state that if it seems to one that P and one lacks defeaters for P, then one has justification for P.

To motivate the distinction between its seeming to one that P and one’s believing that P, notice that in some cases, it seems to one that P even though one does not believe that P. For instance, when one experiences perceptual illusions, the illusions typically persist even when one learns that they are illusions. That is to say, things continue to appear a certain way even when one does not believe that things are as they appear, indeed, even when one knows that things are not as they appear. This shows that an appearance that P is not a belief that P.

b. The Disposition to Believe that P

Some thinkers suggest that an appearance that P might be identified with a mere inclination or disposition to believe that P (Sosa 1998, pp. 258-9; Swinburne 2001, pp. 141-2; Armstrong 1961). Typically, when it appears to one that P, one will be disposed to believe that P. However, one may be disposed to believe that P when it doesn’t seem to one that P. For instance, if one is inclined to believe that P merely because one wants P to be true, or because one thinks that a virtuous person would believe that P, this would not be a case in which it seems to one that P. Even in cases where it seems to one that P, its seeming to one that P is not to be identified with the disposition to believe that P, since one is disposed to believe that P because it seems to one that P, and not the other way around. Thus, its seeming to one that P is merely one possible ground for the disposition to believe that P.

c. The Belief that One Has Evidence for P

Some philosophers hold that its seeming to one that P is a matter of one’s believing, or being disposed to believe, that some mental state one has is evidence for P (Conee 2013; Tooley 2013). This would undermine the plausibility of PC, since it is not very plausible to think that one’s merely being disposed to believe (whether rightly or wrongly) that one has evidence for P actually gives one justification for believing P.

Fortunately, phenomenal conservatives can reasonably reject that sort of analysis, on grounds similar to those used to reject the idea that its seeming to one that P is just a matter of one’s being disposed to believe that P. Suppose that Jon is disposed to believe that he has evidence for the reality of life after death merely because Jon wants it to be true that he has evidence for life after death (a case of pure wishful thinking). This surely would not count as its seeming to Jon that there is life after death.

d. The Experience View

Most phenomenal conservatives hold that its seeming to one that P is a matter of one’s having a certain sort of experience, which has propositional content but is not analyzable in terms of belief (for discussion, see Tucker 2013, section 1). Sensory experiences, intellectual intuitions, (apparent) memories, and introspective states are either species of this broad type of experience, or else states that contain an appearance as a component.

Some philosophers have questioned this view of appearance, on the ground that intellectual intuitions, perceptual experiences, memories, and episodes of self-awareness are extremely different mental states that have nothing interesting in common (DePaul 2009, pp. 208-9).

In response, one can observe that intuitions, perceptual experiences, memories, and states of self-awareness are all mental states of a kind that naturally incline one to believe something (namely, the content of that very mental state, or, the thing that appears to one to be the case). And it is not merely that one is inclined to believe that proposition for some reason or other. We can distinguish many different reasons why one might be inclined to believe P: because one wants P to be true, because one thinks a good person would believe P, because one wants to fit in with the other people who believe P, because being a P-believer will annoy one’s parents . . . or because P just seems to one to be the case. When we reflect on these various ways of being disposed to believe P, we can see that the last one is interestingly different from all the others and forms a distinct (non-disjunctive) category. Admittedly, I have not just identified a new characteristic or set of characteristics that all and only appearances have in common; I have not defined “appearance”, and I do not believe it is possible to do so. What I have done, I hope, is simply to draw attention to the commonality among all appearances by contrasting appearances with various other things that tend to produce beliefs. When Jon believes [for all numbers x and y, x+y = y+x] because that proposition is intuitively obvious, and Mary believes [the cat is on the couch] because she seems to see the cat on the couch, these two situations are similar to each other in an interesting respect – which we see when we contrast both of those cases with cases such as that in which Sally thinks her son was wrongly convicted because Sally just cannot bear the thought that her son is a criminal (Huemer 2009, pp. 228-9).

e. Appearance versus Acquaintance

Appearances should be distinguished from another sort of non-doxastic mental state sometimes held to provide foundational justification for beliefs, namely, the state of acquaintance (Russell 1997, chs. 5, 9; Fumerton 1995, pp. 73-9). Acquaintance is a form of direct awareness of something. States of acquaintance differ from appearances in that the occurrence of an episode of acquaintance entails the existence of an object with which the subject is acquainted, whereas an appearance can occur without there being any object that appears. For example, if a person has a fully realistic hallucination of a pink rat, we can say that the person experiences an appearance of a pink rat, but we cannot say the person is acquainted with a pink rat, since there is no pink rat with which to be acquainted. In other words, an appearance is an internal mental representation, whereas acquaintance is a relation to some object.

3. Arguments for Phenomenal Conservatism

a. Intuitive Internalist Motivation

Richard Foley (1993) has advanced a plausible account of rationality, on which, roughly, it is rational for S to do A provided that, from S’s own point of view, doing A would seem to be a reasonably effective way of satisfying S’s goals. Foley goes on to suggest that epistemic rationality is rationality from the standpoint of the goal of now believing truths and avoiding falsehoods. Though Foley does not draw this consequence, his account of epistemic rationality lends support to PC, for if it seems to S that P is true and S lacks grounds for doubting P, then from S’s own point of view, believing P would naturally seem to be an effective way of furthering S’s goal of believing truths and avoiding falsehoods. Therefore, it seems, it would be epistemically rational for S to believe that P (Huemer 2001, pp. 103-4; compare McGrath 2013, section 1).

b. An Internal Coherence Argument

Internalism in epistemology is, roughly, the view that the justification or lack of justification of a belief is entirely a function of the internal mental states of the believer (for a fuller account, see Fumerton 1995, pp. 60-9). Externalism, by contrast, holds that a belief’s status as justified or unjustified sometimes depends upon factors outside the subject’s mind.

The following is one sort of argument for internalism and against externalism. Suppose that externalism is true, and that the justification of a belief depends upon some external factor, E. There could be two propositions, P and Q, that appear to one exactly alike in all epistemically relevant respects—for instance, P and Q appear equally true, equally justified, and equally supported by reliable belief-forming processes; however, it might be that P is justified and Q unjustified, because P but not Q possesses E. Since E is an external factor, this need have no impact whatsoever on how anything appears to the subject. If such a situation occurred, the externalist would presumably say that one ought to believe that P, while at the same time either denying Q or withholding judgment concerning Q.

But if one took this combination of attitudes, it seems that one could have no coherent understanding of what one was doing. Upon reflecting on one’s own state of mind, one would have to hold something like this: “P and Q seem to me equally correct, equally justified, and in every other respect equally worthy of belief. Nevertheless, while I believe P, I refuse to believe Q, for no apparent reason.” But this seems to be an irrational set of attitudes to hold. Therefore, we ought to reject the initial externalist assumption, namely, that the justificatory status of P and Q depends on E.
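
In outline, the coherence argument is a reductio; the numbering below is our summary of the reasoning just given.

(1) Suppose justification depends on an external factor E. (assumption for reductio)
(2) Then there can be propositions P and Q that are alike in every respect accessible to the subject, where P possesses E and Q lacks it.
(3) The subject would then be required to believe P while denying or withholding on Q, even though, by the subject’s own lights, nothing distinguishes the two.
(4) Such a combination of attitudes is irrational from the subject’s own perspective.
(5) So justification does not depend on E; it must instead supervene on how things appear to the subject.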

If one accepts this sort of motivation for internalism, then it is plausible to draw a further conclusion. Not only does the justificatory status of a belief depend upon the subject’s internal mental states; it depends, more specifically, on the subject’s appearances (that is, on how things seem to the subject). On this view, it is impossible for P and Q to seem the same to one in all relevant respects and yet for P to be justified and Q unjustified. This is best explained by something like PC (Huemer 2006).

c. The Self-Defeat Argument

One controversial argument claims that PC is the only theory of epistemic justification that is not self-defeating (Huemer 2007; Skene 2013). The first premise of this argument is that all relevant beliefs (all beliefs that are plausible candidates for being doxastically justified) are based on appearances. I think there is a table in front of me because it appears that way. I think three plus three is six because that seems true to me. And so on. There are cases of beliefs not based on how things seem, but these are not plausible candidates for justified beliefs to begin with. For instance, I might believe that there is life after death, not because this seems true but because I want it to be true (wishful thinking) – but this would not be a plausible candidate for a justified belief.

The second premise is that a belief is doxastically justified only if what it is based on is a source of propositional justification. Intuitively, my belief is justified only if I not only have justification for it but also believe it because of that justification.
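The two premises can be set out schematically (the regimentation is ours, not a formulation found in Huemer 2007):

1. All relevant beliefs (all plausible candidates for doxastic justification) are based on appearances.

2. A belief is doxastically justified only if what it is based on is a source of propositional justification.

Together, (1) and (2) entail that a relevant belief is doxastically justified only if appearances are a source of propositional justification.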

From here, one can infer that unless appearances are a source of propositional justification, no belief is justified, including the belief that appearances are not a source of propositional justification. Therefore, to deny that appearances are a source of propositional justification would be self-defeating. Huemer (2007) interprets this to mean that the mere fact that something appears to one to be the case must (in the absence of defeaters) suffice to confer justification. Some critics maintain, however, that one need only hold that some appearances generate justification, allowing that perhaps other appearances fail to generate justification even in the absence of defeaters (BonJour 2004, p. 359).

A related objection holds that there may be “background conditions” for a belief’s justification – conditions that enable an appearance to provide justification for a belief but which are not themselves part of the belief’s justification. Thus, PC might be false, not because appearances fail to constitute a source of justification, but because they only do so in the presence of these background conditions, which PC neglects to mention. And these background conditions need not themselves be causally related to one’s belief in order for one’s belief to be doxastically justified. (For this objection, see Markie 2013, section 2; for a reply, see Huemer 2013b, section 4.)

Other critics hold that the first premise of the self-defeat argument is mistaken, because it often happens that one justifiedly believes some conclusion on the basis of an inference from other (justified) beliefs, where the conclusion of the inference does not itself seem true; hence, one can be justified in believing P without basing that belief on a seeming that P (Conee 2013, pp. 64-5). In reply, the first premise of the self-defeat argument need not be read as holding that the belief that P (in relevant cases) is always based on an appearance that P. It might be held that the belief that P (in relevant cases) is always based either on the appearance that P or on some ultimate premises which are themselves believed because they seem correct.

d. Avoiding Skepticism

Skeptics in epistemology maintain that we don’t know nearly as much as we think we do. There are a variety of forms of skepticism. For instance, external world skeptics hold that no one knows any contingent propositions about the external world (the world outside one’s own mind). These skeptics argue that to know anything about the external world, one would need to be able to figure out what the external world is like solely on the basis of facts about one’s own experiences, but that in fact nothing can be legitimately inferred about non-experiential reality solely from one’s own experiences (Hume 1975, section XII, part 1). Most epistemologists consider this conclusion to be implausible on its face, even absurd, so they have sought ways of rebutting the skeptic’s arguments. However, rebutting skeptical arguments has proved very difficult, and there is no generally accepted refutation of external world skepticism.

Another form of skepticism is moral skepticism, the view that no one knows any substantive evaluative propositions. On this view, no one ever knows that any action is wrong, that any event is good, that any person is vicious or virtuous. Again, this idea seems implausible on its face, but philosophers have found it difficult to explain how, in general, someone can know what is right, wrong, good, or bad. Skeptical views may also be held in a variety of other areas – skeptics may challenge our knowledge of the past, of other people’s minds, or of all things not presently observed. As a rule, epistemologists seek to avoid skeptical conclusions, yet it is often difficult to do so plausibly.

Enter phenomenal conservatism. Once one accepts something in the neighborhood of PC, most if not all skeptical worries are easily resolved. External world skepticism is addressed by noting that, when we have perceptual experiences, there seem to us to be external objects of various sorts around us. In the absence of defeaters, this is good reason to think there are in fact such objects (Huemer 2001). Moral skepticism is dealt with in a similarly straightforward manner. When we think about certain kinds of situations, our ethical intuitions show us what is right, wrong, good, or bad. For instance, when we think about pushing a man in front of a moving train, the action seems wrong. In the absence of defeaters, this is good enough reason to think that pushing the man in front of the train would be wrong (Huemer 2005). Similar observations apply to most if not all forms of skepticism. Thus, the ability to avoid skepticism, long considered an elusive desideratum of epistemological theories, is among the great theoretical advantages of phenomenal conservatism.

e. Simplicity

If we accept phenomenal conservatism, we have a single, simple principle to account for the justification of multiple very different kinds of belief, including perceptual beliefs, moral beliefs, mathematical beliefs, memory beliefs, beliefs about one’s own mind, beliefs about other minds, and so on. One may even be able to unify inferential and non-inferential justification (Huemer 2013b, pp. 338-41). To the extent that simplicity and unity are theoretical virtues, then, we have grounds for embracing PC. There is probably no other (plausible) theory that can account for so many justified beliefs in anything like such a simple manner.

4. Objections

a. Crazy Appearances

Some critics have worried that phenomenal conservatism commits us to saying that all sorts of crazy propositions could be non-inferentially justified. Suppose that when I see a certain walnut tree, it just seems to me that the tree was planted on April 24, 1914 (this example is from Markie 2005, p. 357). This seeming comes completely out of the blue, unrelated to anything else about my experience – there is no date-of-planting sign on the tree, for example; I am just suffering from a brain malfunction. If PC is true, then as long as I have no reason to doubt my experience, I have some justification for believing that the tree was planted on that date.

More ominously, suppose that it just seems to me that a certain religion is true, and that I should kill anyone who does not subscribe to the one true religion. I have no evidence either for or against these propositions other than that they just seem true to me (this example is from Tooley 2013, section 5.1.2). If PC is true, then I would be justified (to some degree) in thinking that I should kill everyone who fails to subscribe to the “true” religion. And perhaps I would then be morally justified in actually trying to kill these “infidels” (as Littlejohn [2011] worries).

Phenomenal conservatives are likely to bravely embrace the possibility of justified beliefs in “crazy” (to us) propositions, while adding a few comments to reduce the shock of doing so. To begin with, any actual person with anything like normal background knowledge and experience would in fact have defeaters for the beliefs mentioned in these examples (people can’t normally tell when a tree was planted by looking at it; there are many conflicting religions; religious beliefs tend to be determined by one’s upbringing; and so on).

We could try to imagine cases in which the subjects had no such background information. This, however, would render the scenarios even stranger than they already are. And this is a problem for two reasons. First, it is very difficult to vividly imagine these scenarios. Markie’s walnut tree scenario is particularly hard to imagine – what is it like to have an experience of a tree’s seeming to have been planted on April 24, 1914? Is it even possible for a human being to have such an experience? The difficulty of vividly imagining a scenario should undermine our confidence in any reported intuitions about that scenario.

The second problem is that our intuitions about strange scenarios may be influenced by what we reasonably believe about superficially similar but more realistic scenarios. We are particularly unlikely to have reliable intuitions about a scenario S when (i) we never encounter or think about S in normal life, (ii) S is superficially similar to another scenario, S', which we encounter or think about quite a bit, and (iii) the correct judgment about S' is different from the correct judgment about S. For instance, in the actual world, people who think they should kill infidels are highly irrational in general and extremely unjustified in that belief in particular. It is not hard to see how this would incline us to say that the characters in Tooley’s and Littlejohn’s examples are also irrational. That is, even if PC were true, it seems likely that a fair number of people would report the intuition that the hypothetical religious fanatics are unjustified.

A further observation relevant to the religious example is that the practical consequences of a belief may impact the degree of epistemic justification that one needs in order to be justified in acting on the belief, such that a belief with extremely serious practical consequences may call for a higher degree of justification and a stronger effort at investigation than would be the case for a belief with less serious consequences. PC only speaks of one’s having some justification for believing P; it does not entail that this is a sufficient degree of justification for taking action based on P.

b. Metajustification

Some argue that its merely seeming to one that P cannot suffice (even in the absence of defeaters) to confer justification for believing P; in addition, one must have some reason for thinking that one’s appearances are reliable indicators of the truth, or that things that appear to one to be the case are likely to actually be the case (BonJour 2004, pp. 357-60; Steup 2013). Otherwise, one would have to regard it as at best an accident that one managed to get to the truth regarding whether P. We can refer to this alleged requirement on justified belief as the “metajustification requirement”. (When one has an alleged justification for P, a “metajustification” is a justification for thinking that one’s alleged justification for P actually renders P likely to be true [BonJour 1985, p. 9].)

While perhaps superficially plausible, the metajustification requirement threatens us with skepticism. To begin with, if we think that appearance-based justifications require metajustifications (to wit, evidence that appearances are reliable indicators of the truth), it is unclear why we should not impose the same requirement on all justifications of any kind. That is, where someone claims that belief in P is justified because of some state of affairs X, we could always demand a justification for thinking that X – whatever it is – is a reliable indicator of the truth of P. And suppose X' explains why we are justified in thinking that X is a reliable indicator of the truth of P. Then we’ll need a reason for thinking that X' is a reliable indicator of X’s being a reliable indicator of the truth of P. And so on, ad infinitum.
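The threatened regress can be displayed schematically, using the X and X' notation just introduced (the layout is ours):

Level 0: X justifies the belief that P.

Level 1: X' is needed to justify the claim that X is a reliable indicator of the truth of P.

Level 2: X'' is needed to justify the claim that X' is a reliable indicator of X's being a reliable indicator of the truth of P.

And so on: each level's justifier calls for a further justifier at the next level up, so the demand for metajustification can never be fully discharged.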

One can avoid this sort of infinite regress by rejecting any general metajustification requirement. The phenomenal conservative will most likely want to maintain that one need not have positive grounds for thinking one’s appearances to be reliable; one is simply entitled to rely upon them unless and until one acquires grounds for doubting that they are reliable.

c. Cognitive Penetration and Tainted Sources

Another class of objection to PC adverts to cases of appearances that are produced by emotions, desires, irrational beliefs, or other kinds of sources that would normally render a belief unjustified (Markie 2006, pp. 119-20; Lyons 2011; Siegel 2013; McGrath 2013). That is, where a belief produced by a particular source X would be unjustified, the objector contends that an appearance produced by X should not be counted as conferring justification either (even if the subject does not know that the appearance has this source).

Suppose, for instance, that Jill, for no good reason, thinks that Jack is angry (this example is from Siegel 2013). This is an unjustified belief. If Jill infers further conclusions from the belief that Jack is angry, these conclusions will also be unjustified. But now suppose that Jill’s belief that Jack is angry causes Jill to see Jack’s facial expression as one of anger. This “seeing as” is not a belief but a kind of experience – that is, Jack’s face just looks to Jill like an angry face. This is, however, a misinterpretation on Jill’s part, and an ordinary observer, without any preexisting beliefs about Jack’s emotional state, would not see Jack as looking angry. But Jill is not aware that her perception has been influenced by her belief in this way, nor has she any other defeaters for the proposition that Jack is angry. If PC is true, Jill will now have justification for believing that Jack is angry, arising directly from the mere appearance of Jack’s being angry. Some find this result counter-intuitive, since it allows an initially unjustified belief to indirectly generate justification for itself.

Phenomenal conservatives try to explain away this intuition. Skene (2013, section 5.1) suggests that, in the sort of example described above, the objectors may be confusing the evaluation of the belief with the evaluation of the person who holds it: the person should be adjudged irrational, but her belief rational. Tucker (2010, p. 540) suggests that the person possesses justification but lacks another requirement for knowledge and is epistemically blameworthy (compare Huemer 2013a, pp. 747-8). Huemer (2013b, pp. 343-5) argues that the subject has a justified belief in this sort of case by appealing to an analogy involving a subject who has a hallucination caused (unbeknownst to the subject) by the subject’s own prior action.

d. Inferential Justification

Suppose S bases a belief in some proposition P on (his belief in) some evidence E. Suppose that the inference from E to P is fallacious, such that E in fact provides no support at all for P (E neither entails P nor raises the probability of P). S, however, incorrectly perceives E as supporting P, and thus, S’s belief in E makes it seem to S that P must be true as well. (It does not independently seem to S that P is true; it just seems to S that P must be true given E.) Finally, assume that S has no reason for thinking that the inference is fallacious, even though it is, nor has S any other defeaters for P. It seems that such a scenario is possible. If so, one can raise the following objection to PC:

1. In the described scenario, S is not justified in believing P.

2. If PC is true, then in this scenario, S is justified in believing P.

3. So PC is false.

Many would accept premise (1), holding that an inferential belief is unjustified whenever the inference on which the belief is based is fallacious. (2) is true, since in the described scenario, it seems to S that P, while S has no defeaters for P. (For an objection along these lines, see Tooley 2013, p. 323.)

One possible response to this objection would be to restrict the principle of phenomenal conservatism to the case of non-inferential beliefs and to hold a different view (perhaps some variation on PC) of the conditions for inferential beliefs to be justified.

Another alternative is to maintain that in fact, fallacious inferences can result in justified belief. Of course, if a person has reason to believe that the inference on which he bases a given belief is fallacious, then this will constitute a defeater for that belief. It is consistent with phenomenal conservatism that the belief will be unjustified in this case. So the only cases that might pose a problem are those in which a subject makes an inference that is in fact fallacious but that seems perfectly good to him, and he has no reason to suspect that the inference is fallacious or otherwise defective. In such a case, one could argue that the subject rationally ought to accept the conclusion. If the subject refused to accept the conclusion, how could he rationally explain this refusal? He could not cite the fact that the inference is fallacious, nor could he point to any relevant defect in the inference, since by stipulation, as far as he can tell the inference is perfectly good. Given this, it would seem irrational for the subject not to accept the conclusion (Huemer 2013b, p. 339).

Here is another proposed condition on doxastic justification: if S believes P on the basis of E, then S is justified in believing P only if S is justified in believing E. This condition is very widely accepted. But again, PC seems to flout this requirement, since all that is needed is for S’s belief in E to cause it to seem to S that P (while S lacks defeaters for P), which might happen even if S’s belief in E is unjustified (McGrath 2013, section 5; Markie 2013, section 2).

A phenomenal conservative might try to avoid this sort of counterexample by claiming that whenever S believes P on the basis of E and E is unjustified, S has a defeater for P. This might be true because (i) per epistemological internalism, whenever E is unjustified, the subject has justification for believing that E is unjustified, (ii) whenever S’s belief that P is based on E, the subject has justification for believing that his belief that P is based on E, and (iii) the fact that one’s belief that P is based on an unjustified premise would be an undercutting defeater for the belief that P.

Alternatively, and perhaps more naturally, the phenomenal conservative might again restrict the scope of PC to non-inferential beliefs, while holding a different (but perhaps closely related) view about the justification of inferential beliefs (McGrath 2013, section 5; Tooley 2013, section 5.2.1). For instance, one might think that in the case of a non-inferential belief, justification requires only that the belief’s content seem true and that the subject lack defeaters for the belief; but that in the case of an inferential belief, justification requires that the premise be justifiedly believed, that the premise seem to support the conclusion, and that the subject lack defeaters for the conclusion (Huemer 2013b, p. 338).

5. Summary

Among the most central, fundamental questions of epistemology is that of what, in general, justifies a belief. Phenomenal conservatism is among the major theoretical answers to this question: at bottom, beliefs are justified by “appearances,” which are a special type of experience one reports when one says “it seems to me that P” or “it appears to me that P.” This position is widely viewed as possessing important theoretical virtues, including the ability to offer a very simple account of many kinds of justified belief while avoiding troublesome forms of philosophical skepticism. Some proponents lay claim to more controversial advantages for the theory, such as the unique ability to avoid self-defeat and to accommodate central internalist intuitions.

The theory remains controversial among epistemologists for a variety of reasons. Some harbor doubts about the reality of a special type of experience called an “appearance.” Others believe that an appearance cannot provide justification unless one first has independent evidence of the reliability of one’s appearances. Others cite alleged counterexamples in which appearances have irrational or otherwise unreliable sources. And others object that phenomenal conservatism seems to flout widely accepted necessary conditions for inferential justification.

6. References and Further Reading

  • Armstrong, David. 1961. Perception and the Physical World. London: Routledge & Kegan Paul.
  • BonJour, Laurence. 1985. The Structure of Empirical Knowledge. Cambridge: Harvard University Press.
  • BonJour, Laurence. 2004. “In Search of Direct Realism.” Philosophy and Phenomenological Research 69, 349-367.
    • Early objections to phenomenal conservatism.
  • Brogaard, Berit. 2013. “Phenomenal Seemings and Sensible Dogmatism.” In Chris Tucker (ed.), Seemings and Justification: New Essays on Dogmatism and Phenomenal Conservatism (pp. 270-289). Oxford: Oxford University Press.
    • Objections to phenomenal conservatism.
  • Chisholm, Roderick. 1957. Perceiving: A Philosophical Study. Ithaca: Cornell University Press.
    • Chapter 4 offers a widely cited discussion of three uses of “appears” and related terms.
  • Conee, Earl. 2013. “Seeming Evidence.” In Chris Tucker (ed.), Seemings and Justification: New Essays on Dogmatism and Phenomenal Conservatism (pp. 52-68). Oxford: Oxford University Press.
    • Objections to phenomenal conservatism.
  • Cullison, Andrew. 2010. “What Are Seemings?” Ratio 23, 260-274.
  • DePaul, Michael. 2009. “Phenomenal Conservatism and Self-Defeat.” Philosophy and Phenomenological Research 78, 205-212.
    • Objections to phenomenal conservatism, especially the self-defeat argument.
  • DePoe, John. 2011. “Defeating the Self-defeat Argument for Phenomenal Conservativism.” Philosophical Studies 152, 347-359.
    • Objections to phenomenal conservatism, especially the self-defeat argument.
  • Foley, Richard. 1983. “Epistemic Conservatism.” Philosophical Studies 43, 165-182.
    • Objections to doxastic conservatism.
  • Foley, Richard. 1993. Working without a Net. New York: Oxford University Press.
  • Fumerton, Richard. 1995. Metaepistemology and Skepticism. Lanham: Rowman & Littlefield.
  • Hanna, Nathan. 2011. “Against Phenomenal Conservatism.” Acta Analytica 26, 213-221.
    • Objections to phenomenal conservatism.
  • Huemer, Michael. 2001. Skepticism and the Veil of Perception. Lanham: Rowman & Littlefield.
    • Chapter 5 defends phenomenal conservatism and contains a version of the self-defeat argument. This is the original source of the term “phenomenal conservatism.”
  • Huemer, Michael. 2005. Ethical Intuitionism. New York: Palgrave Macmillan.
    • Chapter 5 uses phenomenal conservatism to explain moral knowledge.
  • Huemer, Michael. 2006. “Phenomenal Conservatism and the Internalist Intuition.” American Philosophical Quarterly 43, 147-158.
    • Defends phenomenal conservatism using internalist intuitions.
  • Huemer, Michael. 2007. “Compassionate Phenomenal Conservatism.” Philosophy and Phenomenological Research 74, 30-55.
    • Defends phenomenal conservatism using the self-defeat argument. Responds to BonJour 2004.
  • Huemer, Michael. 2009. “Apology of a Modest Intuitionist.” Philosophy and Phenomenological Research 78, 222-236.
    • Responds to DePaul 2009.
  • Huemer, Michael. 2013a. “Epistemological Asymmetries Between Belief and Experience.” Philosophical Studies 162, 741-748.
    • Responds to Siegel 2013.
  • Huemer, Michael. 2013b. “Phenomenal Conservatism Uber Alles.” In Chris Tucker (ed.), Seemings and Justification: New Essays on Dogmatism and Phenomenal Conservatism (pp. 328-350). Oxford: Oxford University Press.
    • Responds to several critiques of phenomenal conservatism found in the same volume.
  • Hume, David. 1975. “An Enquiry Concerning Human Understanding.” In L. A. Selby-Bigge (ed.), Enquiries Concerning Human Understanding and Concerning the Principles of Morals. Oxford: Clarendon.
  • Littlejohn, Clayton. 2011. “Defeating Phenomenal Conservatism.” Analytic Philosophy 52, 35-48.
    • Argues that PC may lead one to endorse terrorism and cannibalism.
  • Lycan, William. 2013. “Phenomenal Conservatism and the Principle of Credulity.” In Chris Tucker (ed.), Seemings and Justification: New Essays on Dogmatism and Phenomenal Conservatism (pp. 293-305). Oxford: Oxford University Press.
  • Lyons, Jack. 2011. “Circularity, Reliability, and the Cognitive Penetrability of Perception.” Philosophical Issues 21, 289-311.
  • Markie, Peter. 2005. “The Mystery of Direct Perceptual Justification.” Philosophical Studies 126, 347-373.
    • Objections to phenomenal conservatism.
  • Markie, Peter. 2006. “Epistemically Appropriate Perceptual Belief.” Noûs 40, 118-142.
    • Objections to phenomenal conservatism.
  • Markie, Peter. 2013. “Searching for True Dogmatism.” In Chris Tucker (ed.), Seemings and Justification: New Essays on Dogmatism and Phenomenal Conservatism (pp. 248-268). Oxford: Oxford University Press.
    • Objections to phenomenal conservatism.
  • McGrath, Matthew. 2013. “Phenomenal Conservatism and Cognitive Penetration: The ‘Bad Basis’ Counterexamples.” In Chris Tucker (ed.), Seemings and Justification: New Essays on Dogmatism and Phenomenal Conservatism (pp. 225-247). Oxford: Oxford University Press.
    • Uses the cognitive penetration counterexamples to motivate a modification of phenomenal conservatism.
  • Russell, Bertrand. 1997. The Problems of Philosophy. New York: Oxford University Press.
  • Siegel, Susanna. 2013. “The Epistemic Impact of the Etiology of Experience.” Philosophical Studies 162, 697-722.
    • Criticizes phenomenal conservatism and related views using the tainted source objection.
  • Skene, Matthew. 2013. “Seemings and the Possibility of Epistemic Justification.” Philosophical Studies 163, 539-559.
    • Defends the self-defeat argument for phenomenal conservatism and offers an account of why epistemic justification must derive from appearances.
  • Sosa, Ernest. 1998. “Minimal Intuition.” In Michael DePaul and William Ramsey (eds.), Rethinking Intuition (pp. 257-270). Lanham: Rowman & Littlefield.
  • Steup, Matthias. 2013. “Does Phenomenal Conservatism Solve Internalism’s Dilemma?” In Chris Tucker (ed.), Seemings and Justification: New Essays on Dogmatism and Phenomenal Conservatism (pp. 135-153). Oxford: Oxford University Press.
  • Swinburne, Richard. 2001. Epistemic Justification. Oxford: Oxford University Press.
  • Tolhurst, William. 1998. “Seemings.” American Philosophical Quarterly 35, 293-302.
    • Discusses the nature of seemings.
  • Tooley, Michael. 2013. “Michael Huemer and the Principle of Phenomenal Conservatism.” In Chris Tucker (ed.), Seemings and Justification: New Essays on Dogmatism and Phenomenal Conservatism (pp. 306-327). Oxford: Oxford University Press.
    • Objections to phenomenal conservatism.
  • Tucker, Chris. 2010. “Why Open-Minded People Should Endorse Dogmatism.” Philosophical Perspectives 24, 529-545.
    • Defends phenomenal conservatism, appealing to its explanatory power.
  • Tucker, Chris. 2013. “Seemings and Justification: An Introduction.” In Chris Tucker (ed.), Seemings and Justification: New Essays on Dogmatism and Phenomenal Conservatism (pp. 1-29). Oxford: Oxford University Press.


Author Information

Michael Huemer
Email: owl232@earthlink.net
University of Colorado
U. S. A.

Time

Time is what we use a clock to measure. Despite 2,500 years of investigation into the nature of time, many issues about it are unresolved. Here is a list, in no particular order, of the most important issues that are discussed in this article:

  • What time actually is;
  • Whether time exists when nothing is changing;
  • What kinds of time travel are possible;
  • How time is related to mind;
  • Why time has an arrow;
  • Whether the future and past are as real as the present;
  • How to correctly analyze the metaphor of time’s flow;
  • Whether contingent sentences about the future have truth values now;
  • Whether future time will be infinite;
  • Whether there was time before our Big Bang;
  • Whether tensed or tenseless concepts are semantically basic;
  • What the proper formalism or logic is for capturing the special role that time plays in reasoning;
  • What neural mechanisms account for our experience of time;
  • Which aspects of time are conventional; and
  • Whether there is a timeless substratum from which time emerges.

Consider this one issue upon which philosophers are deeply divided: What sort of ontological differences are there among the present, the past and the future? There are three competing theories. Presentists argue that necessarily only present objects and present experiences are real, and we conscious beings recognize this in the special vividness of our present experience compared to our memories of past experiences and our expectations of future experiences. So, the dinosaurs have slipped out of reality. However, according to the growing-past theory, the past and present are both real, but the future is not real because the future is indeterminate or merely potential. Dinosaurs are real, but our death is not. The third theory is that there are no objective ontological differences among present, past, and future because the differences are merely subjective. This third theory is called “eternalism.”

Table of Contents

  1. What Should a Philosophical Theory of Time Do?
  2. How Is Time Related to Mind?
  3. What Is Time?
    1. The Variety of Answers
    2. Time vs. “Time”
    3. Linear and Circular Time
    4. The Extent of Time
    5. Does Time Emerge from Something More Basic?
    6. Time and Conventionality
  4. What Does Science Require of Time?
  5. What Kinds of Time Travel are Possible?
  6. Does Time Require Change? (Relational vs. Substantival Theories)
  7. Does Time Flow?
    1. McTaggart's A-Series and B-Series
    2. Subjective Flow and Objective Flow
  8. What are the Differences among the Past, Present, and Future?
    1. Presentism, the Growing-Past, Eternalism, and the Block-Universe
    2. Is the Present, the Now, Objectively Real?
    3. Persist, Endure, Perdure, and Four-Dimensionalism
    4. Truth Values and Free Will
  9. Are There Essentially-Tensed Facts?
  10. What Gives Time Its Direction or Arrow?
    1. Time without an Arrow
    2. What Needs To Be Explained
    3. Explanations or Theories of the Arrow
    4. Multiple Arrows
    5. Reversing the Arrow
  11. What is Temporal Logic?
  12. Supplements
    1. Frequently Asked Questions
    2. What Science Requires of Time
    3. Special Relativity: Proper Times, Coordinate Systems, and Lorentz Transformations (by Andrew Holster)
  13. References and Further Reading

1. What Should a Philosophical Theory of Time Do?

Philosophers of time tend to divide into two broad camps on some of the key philosophical issues, although many philosophers do not fit into these pigeonholes. Members of  the A-camp say that McTaggart's A-series is the fundamental way to view time; events are always changing, the now is objectively real and so is time's flow; ontologically we should accept either presentism or the growing-past theory; predictions are not true or false at the time they are uttered; tenses are semantically basic; and the ontologically fundamental entities are 3-dimensional objects. Members of the B-camp say that McTaggart's B-series is the fundamental way to view time; events are never changing; the now is not objectively real and neither is time's flow; ontologically we should accept eternalism and the block-universe theory; predictions are true or false at the time they are uttered; tenses are not semantically basic; and the fundamental entities are 4-dimensional events or processes. This article provides an introduction to this controversy between the camps.

However, there are many other issues about time whose solutions do not fit into one or the other of the above two camps. (i) Does time exist only for beings who have minds? (ii) Can time exist if no event is happening anywhere? (iii) What sorts of time travel are possible? (iv) Why does time have an arrow? (v) Is the concept of time inconsistent?

A full theory of time should address this constellation of philosophical issues about time. Narrower theories of time will focus on resolving one or more members of this constellation, but the long-range goal is to knit together these theories into a full, systematic, and detailed theory of time. Philosophers also ask whether to adopt  a realist or anti-realist interpretation of a theory of time, but this article does not explore this subtle metaphysical question.

2. How Is Time Related to Mind?

Physical time is public time, the time that clocks are designed to measure. Biological time, by contrast, is indicated by an organism's circadian rhythm or body clock, which is normally regulated by the pattern of sunlight and darkness. Psychological time is different from both physical time and biological time. Psychological time is private time. It is also called phenomenological time, and it is perhaps best understood as awareness of physical time. Psychological time passes relatively swiftly for us while we are enjoying an activity, but it slows dramatically if we are waiting anxiously for the  pot of water to boil on the stove. The slowness is probably due to focusing our attention on short intervals of physical time. Meanwhile, the clock by the stove is measuring physical time and is not affected by any person’s awareness or by any organism's biological time.

When a physicist defines speed to be the rate of change of position with respect to time, the term “time” refers to physical time, not psychological time or biological time. Physical time is more basic or fundamental than psychological time for helping us understand our shared experiences in the world, and so it is more useful for doing physical science, but psychological time is vitally important for understanding many mental experiences.

Psychological time is faster for older people than for children, as you notice when your grandmother says, "Oh, it's my birthday again." That is, an older person's psychological time is faster relative to physical time. Psychological time is slower or faster depending upon where we are in the spectrum of conscious experience: awake normally, involved in a daydream,  sleeping normally, drugged with anesthetics, or in a coma. Some philosophers claim that psychological time is completely transcended in the mental state called nirvana because psychological time slows to a complete stop. However, there is general agreement among philosophers that, when we are awake normally, we experience time as being continuous; we do not experience it as stopping and starting.

A major philosophical problem is to explain the origin and character of our temporal experiences. Philosophers continue to investigate, but so far do not agree on, how our experience of temporal phenomena produces our consciousness of our experiencing temporal phenomena. Most philosophers (Husserl being a notable exception) say our ability to imagine other times is a necessary ingredient in our having any consciousness at all. Many philosophers also say people in a coma have a low level of consciousness, yet when a person awakes from a coma they can imagine other times but have no good sense of how long they've been in the coma.

We make use of our ability to imagine other times when we experience a difference between our present perceptions and our present memories of past perceptions.  Somehow the difference between the two gets interpreted by us as evidence that the world we are experiencing is changing through time, with some events succeeding other events. Locke said our train of ideas produces our idea that events succeed each other in time, but he offered no details on how this train does the producing.

Philosophers also want to know which aspects of time we have direct experience of, and which we have only indirect experience of. Is our direct experience only of the momentary present, as Aristotle, Thomas Reid, and Alexius Meinong believed, or do we have direct experience of what William James called a "specious present," a short stretch of physical time? James said, "The tiniest feeling that we can possibly have comes with an earlier and a later part and with a sense of their continuous precession." Anything with an earlier part and a later part cannot possibly be instantaneous in physical time. If a sequence of events occurs over a short enough duration of physical time, then we experience all the events as being simultaneous in psychological time. Among those accepting the notion of a specious present, there is continuing controversy about whether the individual specious presents can overlap each other and about how the individual specious presents combine to form our stream of consciousness.

The brain takes an active role in building a mental scenario of what is taking place beyond the brain. For example, try tapping your nose with one hand and your knee with your other hand at the same time. Even though it takes longer for the signal from your knee to reach your brain than the signal from your nose to reach your brain, you will have the experience of the two tappings being simultaneous—thanks to the brain's manipulation of the data. Neuroscientists suggest that your brain waits about 80 milliseconds for all the relevant input to come in before you experience a “now.” Craig Callender surveyed the psycho-physics literature on human experience of the present, and concluded that, if the duration in physical time between two experienced events is less than about a quarter of a second (250 milliseconds), then humans will say both events happened simultaneously, and this duration is slightly different for different people but is stable within the experience of any single person. Also, "our impression of subjective present-ness...can be manipulated in a variety of ways" such as by what other sights or sounds are present at nearby times. See (Callender 2003-4, p. 124) and (Callender 2008).

Within the field of cognitive science, researchers want to know what are the neural mechanisms that account for our experience of time—for our awareness of change, for our sense of time’s flow, for our ability to place events into the proper time order (temporal succession), and for our ability to notice, and often accurately estimate, durations (persistence). The most surprising experimental result about our experience of time is Benjamin Libet’s claim in the 1970s that his experiments show that the brain events involved in initiating our free choice occur about a third of a second before we are aware of our choice. Before Libet’s work, it was universally agreed that a person is aware of deciding to act freely, then later the body initiates the action. Libet's work has been used to challenge this universal claim about decisions. However, Libet's own experiments have been difficult to repeat because he drilled through the skull and inserted electrodes to shock the underlying brain tissue. See (Damasio 2002) for more discussion of Libet's experiments.

Neuroscientists and psychologists have investigated whether they can speed up our minds relative to a duration of physical time. If so, we might become mentally more productive, and get more high quality decision making done per fixed amount of physical time, and learn more per minute. Several avenues have been explored: using cocaine, amphetamines and other drugs; undergoing extreme experiences such as jumping backwards off a tall bridge with bungee cords attached to one's ankles; and trying different forms of meditation. So far, none of these avenues have led to success productivity-wise.

Any organism’s sense of time is subjective, but is the time that is sensed also subjective, a mind-dependent phenomenon? Throughout history, philosophers of time have disagreed on the answer. Without minds in the world, nothing in the world would be surprising or beautiful or interesting. Can we add that nothing would be in time? The majority answer is "no." The ability of the concept of time to help us make sense of our phenomenological evidence involving change, persistence, and succession of events is a sign that time may be objectively real. Consider succession, that is, order of events in time. We all agree that our memories of events occur after the events occur. If judgments of time were subjective in the way judgments of being interesting vs. not-interesting are subjective, then it would be too miraculous that everyone can so easily agree on the ordering of events in time. For example, first Einstein was born, then he went to school, then he died. Everybody agrees that it happened in this order: birth, school, death. No other order. The agreement on time order for so many events, both psychological events and physical events, is part of the reason that most philosophers and scientists believe physical time is objective and not dependent on being consciously experienced.

Another large part of the reason to believe time is objective is that our universe has so many different processes that bear consistent time relations, or frequency of occurrence relations, to each other. For example, the frequency of rotation of the Earth around its axis is a constant multiple of the frequency of oscillation of a fixed-length pendulum, which in turn is a constant multiple of the decay rate (the reciprocal of the half-life) of a specific radioactive uranium isotope, which in turn is a constant multiple of the frequency of a vibrating violin string; the relationship of these oscillators does not change as time goes by (at least not much and not for a long time, and when there is deviation we know how to predict it and compensate for it). The existence of these sorts of relationships makes our system of physical laws much simpler than it otherwise would be, and it makes us more confident that there is something objective we are referring to with the time-variable in those laws. The stability of these relationships over a long time makes it easy to create clocks. Time can be measured easily because we have access to long-term simple harmonic oscillators that have a regular period or “regular ticking.” This regularity shows up in completely different stable systems: rotations of the Earth, a swinging ball hanging from a string (a pendulum), a bouncing ball hanging from a coiled spring, revolutions of the Earth around the Sun, oscillating electric circuits, and vibrations of a quartz crystal. Many of these systems make good clocks. The existence of these possibilities for clocks strongly suggests that time is objective, and is not merely an aspect of consciousness.

The issue about objectivity vs. subjectivity is related to another issue: realism vs. idealism. Is time real or instead just a useful instrument or just a useful convention or perhaps an arbitrary convention? This issue will appear several times throughout this article, including in the later section on conventionality.

Aristotle raised this issue of the mind-dependence of time when he said, “Whether, if soul (mind) did not exist, time would exist or not, is a question that may fairly be asked; for if there cannot be someone to count there cannot be anything that can be counted…” (Physics, chapter 14). He does not answer his own question because, he says rather profoundly, it depends on whether time is the conscious numbering of movement or instead is just the capability of movements being numbered were consciousness to exist.

St. Augustine, adopting a subjective view of time, said time is nothing in reality but exists only in the mind’s apprehension of that reality. The 13th century philosophers Henry of Ghent and Giles of Rome said time exists in reality as a mind-independent continuum, but is distinguished into earlier and later parts only by the mind. In the 13th century, Duns Scotus clearly recognized both physical and psychological time.

At the end of the 18th century, Kant suggested a subtle relationship between time and mind–that our mind actually structures our perceptions so that we can know a priori that time is like a mathematical line. Time is, on this theory, a form of conscious experience, and our sense of time is a necessary condition of our having experiences such as sensations. In the 19th century, Ernst Mach claimed instead that our sense of time is a simple sensation, not an a priori form of sensation. This controversy took another turn when other philosophers argued that both Kant and Mach were incorrect because our sense of time is, instead, an intellectual construction (see Whitrow 1980, p. 64).

In the 20th century, the philosopher of science Bas van Fraassen described time, including physical time, by saying, “There would be no time were there no beings capable of reason” just as “there would be no food were there no organisms, and no teacups if there were no tea drinkers.”

The metaphysical controversy between idealism and realism turns on whether anything exists independently of the mind; for the idealist, nothing does. If this controversy is settled in favor of idealism, then physical time, too, would be mind-dependent.

It has been suggested by some philosophers that Einstein’s theory of relativity, when confirmed, showed us that physical time depends on the observer, and thus that physical time is subjective, or dependent on the mind. This error is probably caused by Einstein’s use of the term “observer.” Einstein’s theory implies that the duration of an event depends on the observer’s frame of reference or coordinate system, but what Einstein means by “observer’s frame of reference” is merely a perspective or coordinate framework from which measurements could be made. The “observer” need not have a mind. So, Einstein is not making a point about mind-dependence.

To mention one last issue about the relationship between mind and time, if all organisms were to die, there would be events after those deaths. The stars would continue to shine, for example, but would any of these events be in the future? This is a controversial question because advocates of McTaggart’s A-theory will answer “yes,” whereas advocates of McTaggart’s B-theory will answer “no” and say “whose future?”

For more on the consciousness of time and related issues, see the article “Phenomenology and Time-Consciousness.” For more on whether the present, as opposed to time itself, is subjective, see the section called "Is the Present, the Now, Objectively Real?"

3. What Is Time?

Physical time seems to be objective, whereas psychological time is subjective. Many philosophers of science argue that physical time is more fundamental even though psychological time is discovered first by each of us during our childhood, and even though psychological time was discovered first as we human beings evolved from our animal ancestors. The remainder of this article focuses more on physical time than psychological time.

Time is what we use a clock or calendar to measure. We can say time is composed of all the instants or all the times, but that word "times" is ambiguous and also means measurements of time. Think of our placing a coordinate system on our spacetime (this cannot be done successfully in all spacetimes) as our giving names to spacetime points. The measurements we make of time are numbers variously called times, dates, clock readings, and temporal coordinates; and these numbers are relative to time zones and reference frames and conventional agreements about how to define the second, the conventional unit for measuring time. It is because of what time is that we can succeed in assigning time numbers in this manner. Another feature of time is that we can place all events in a single reference frame into a linear sequence one after the other according to their times of occurrence; for any two instants, they are either simultaneous or else one happens before the other but not vice versa. A third feature is that we can succeed in coherently specifying with real numbers how long an event lasts; this is the duration between the event's beginning instant and its ending instant. These are three key features of time, but they do not quite tell us what time itself is.

In discussion about time, the terminology is often ambiguous. We have just mentioned that care is often not taken in distinguishing time from the measure of time. Here are some additional comments about terminology: A moment is said to be a short time, a short event, and to have a short duration or short interval ("length" of time). Comparing a moment to an instant, a moment is brief, but an instant is even briefer. An instant is usually thought to have either a zero duration or else a duration so short as not to be detectable.

a. The Variety of Answers

We cannot trip over a moment of time, nor enclose it in a box, so what exactly are moments? Are they created by humans in the way that, according to some constructivist philosophers, mathematical objects are created by humans, so that once created they have well-determined properties, some of which might be difficult for humans to discover? Or is time more like a Platonic idea? Or is time an emergent feature of changes, analogous to how a sound wave is an emergent feature of the molecules of a vibrating tuning fork, with no single molecule making a sound? When we know what time is, we can answer all these questions.

One answer to our question, “What is time?” is that time is whatever the time variable t is denoting in the best-confirmed and most fundamental theories of current science. “Time” is given an implicit definition this way. Nearly all philosophers would agree that we do learn much about physical time by looking at the behavior of the time variable in these theories; but they complain that the full nature of physical time can be revealed only with a philosophical theory of time that addresses the many philosophical issues that scientists do not concern themselves with.

Physicists often say time is a sequence of moments in a linear order. Presumably a moment is a durationless instant. Michael Dummett’s constructive model of time implies instead that time is a composition of intervals rather than of durationless instants. The model is constructive in the sense that it implies there do not exist any times which are not detectable in principle by a physical process.

One answer to the question "What is time?" is that it is a general feature of the actual changes in the universe, so that if all changes were reversed, time itself would reverse. This answer is called "relationism" (and sometimes "relationalism"). A competing answer is that time is more like a substance in that it exists independently of relationships among changes or events. These two competing answers to our question are explored in a later section.

A popular post-Einstein answer to "What is time?" is that time is a single dimension of spacetime.

Because time is intimately related to change, the answer to our question is likely to depend on our answer to the question, "What is change?" The most popular type of answer here is that change is an alteration in the properties of some enduring thing, for example, the alteration from green to brown of an enduring leaf. A different type of answer is that change is basically a sequence of states, such as a sequence containing a state in which the leaf is green and a state in which the leaf is brown. This issue won't be pursued here, and the former answer will be presumed at several places later in the article.

Before the creation of Einstein's special theory of relativity, it might have been said that time must provide these four things: (1) For any event, it specifies when it occurs. (2) For any event, it specifies its duration—how long it lasts. (3) For any event, it fixes what other events are simultaneous with it. (4) For any pair of events that are not simultaneous, it specifies which happens first. With the creation of the special theory of relativity in 1905, it was realized that these four specifications can get different answers in different frames of reference.

Bothered by the contradictions they claimed to find in our concept of time, Zeno, Plato, Spinoza, Hegel, and McTaggart answer the question, “What is time?” by replying that it is nothing because it does not exist (Le Poidevin and MacBeath 1993, p. 23). In a similar vein, the early 20th century English philosopher F. H. Bradley argued, “Time, like space, has most evidently proved not to be real, but a contradictory appearance….The problem of change defies solution.” In the mid-twentieth century, Gödel argued for the unreality of time because Einstein's equations allow for physically possible worlds in which events precede themselves. In the twenty-first century some physicists such as Julian Barbour say that in order to reconcile general relativity with quantum mechanics either time does not exist or else it is not fundamental in nature; see (Callender 2010) for a discussion of this. However, most philosophers agree that time does exist. They just cannot agree on what it is.

Let’s briefly explore other answers that have been given throughout history to our question, “What is time?” Aristotle claimed that “time is the measure of change” (Physics, chapter 12). He never said space is a measure of anything. Aristotle emphasized “that time is not change [itself]” because a change “may be faster or slower, but not time…” (Physics, chapter 10). For example, a specific change such as the descent of a leaf can be faster or slower, but time itself can not be faster or slower. In developing his views about time, Aristotle advocated what is now referred to as the relational theory when he said, “there is no time apart from change….” (Physics, chapter 11). In addition, Aristotle said time is not discrete or atomistic but “is continuous…. In respect of size there is no minimum; for every line is divided ad infinitum. Hence it is so with time” (Physics, chapter 11).

René Descartes had a very different answer to “What is time?” He argued that a material body has the property of spatial extension but no inherent capacity for temporal endurance, and that God by his continual action sustains (or re-creates) the body at each successive instant. Time is a kind of sustenance or re-creation ("Third Meditation" in Meditations on First Philosophy).

In the 17th century, the English physicist Isaac Barrow rejected Aristotle’s linkage between time and change. Barrow said time is something which exists independently of motion or change and which existed even before God created the matter in the universe. Barrow’s student, Isaac Newton, agreed with this substantival theory of time. Newton argued very specifically that time and space are an infinitely large container for all events, and that the container exists with or without the events. He added that space and time are not material substances, but are like substances in not being dependent on anything except God.

Gottfried Leibniz objected. He argued that time is not an entity existing independently of actual events. He insisted that Newton had underemphasized the fact that time necessarily involves an ordering of any pair of non-simultaneous events. This is why time “needs” events, so to speak. Leibniz added that this overall order is time. He accepted a relational theory of time and rejected a substantival theory.

In the 18th century, Immanuel Kant said time and space are forms that the mind projects upon the external things-in-themselves. He spoke of our mind structuring our perceptions so that space always has a Euclidean geometry, and time has the structure of the mathematical line. Kant’s idea that time is a form of apprehending phenomena is probably best taken as suggesting that we have no direct perception of time but only the ability to experience things and events in time. Some historians distinguish perceptual space from physical space and say that Kant was right about perceptual space. It is difficult, though, to get a clear concept of perceptual space. If physical space and perceptual space are the same thing, then Kant is claiming we know a priori that physical space is Euclidean. With the discovery of non-Euclidean geometries in the 1820s, and with increased doubt about the reliability of Kant’s method of transcendental proof, the view that truths about space and time are a priori truths began to lose favor.

The above discussion does not exhaust all the claims about what time is. And there is no sharp line separating a definition of time, a theory of time, and an explanation of time.

b. Time vs. “Time”

Whatever time is, it is not “time.” “Time” is the most common noun in all documents on the Internet's web pages; time is not. Nevertheless, it might help us understand time if we improved our understanding of the sense of the word “time.” Should the proper answer to the question “What is time?” produce a definition of the word as a means of capturing its sense? No. At least not if the definition must be some analysis that provides a simple paraphrase in all its occurrences. There are just too many varied occurrences of the word: time out, behind the times, in the nick of time, and so forth.

But how about narrowing the goal to a definition of the word “time” in its main sense, the sense that most interests philosophers and physicists? That is, explore the usage of the word “time” in its principal sense as a means of learning what time is. Well, this project would require some consideration of the grammar of the word “time.” Most philosophers today would agree with A. N. Prior, who remarked that “there are genuine metaphysical problems, but I think you have to talk about grammar at least a little bit in order to solve most of them.” However, do we learn enough about what time is when we learn about the grammatical intricacies of the word? John Austin made this point in “A Plea for Excuses” when he said that, if we are using the analytic method, the method of analysis of language, in order to sharpen our perception of the phenomena, then “it is plainly preferable to investigate a field where ordinary language is rich and subtle, as it is in the pressingly practical matter of Excuses, but certainly is not in the matter, say, of Time.” Ordinary-language philosophers have studied time talk, what Wittgenstein called the “language game” of discourse about time. Wittgenstein’s expectation is that by drawing attention to ordinary ways of speaking we will be able to dissolve rather than answer our philosophical questions. But most philosophers of time are unsatisfied with this approach; they want the questions answered, not dissolved, although they are happy to have help from the ordinary-language philosopher in clearing up misconceptions that may be produced by the way we use the word in our ordinary, non-technical discourse.

c. Linear and Circular Time

Is time more like a straight line or instead more like a circle? If your personal time were circular, then eventually you would be reborn. With circular time, the future is also in the past, and every event occurs before itself. If your time is like this, then the question arises as to whether you would be born an infinite number of times or only once. The argument that you would be born only once appeals to Leibniz’s Principle of the Identity of Indiscernibles: each supposedly repeating state of the world would occur just once because any such state would be indiscernible from its recurrence and therefore, by the principle, identical with it. The way to support the idea of eternal recurrence or repeated occurrence seems to be to presuppose a linear ordering in some "hyper" time of all the cycles so that each cycle is discernible from its predecessor because it occurs at a different hyper time.

Throughout history (and long before Einstein made a distinction between proper time and coordinate time), a variety of answers were given to the question of whether time is like a line or, instead, closed like a circle. The concept of linear time first appeared in the writings of the Hebrews and the Zoroastrian Iranians. The Roman writer Seneca also advocated linear time. Plato and most other Greeks and Romans believed time to be motion and believed cosmic motion was cyclical, but this was not envisioned as requiring any detailed endless repetition such as the multiple rebirths of Socrates. However, the Pythagoreans and some Stoic philosophers such as Chrysippus did adopt this drastic position. The idea was picked up again by Nietzsche in 1882. Scholars do not agree on whether Nietzsche meant his idea of circular time to be taken literally or merely as a moral lesson about how you should live your life if you knew that you would live it over and over.

Islamic and Christian theologians adopted the ancient idea that time is linear plus the Jewish-Zoroastrian idea that the universe was created at a definite moment in the past. Augustine emphasized that human experience is a one-way journey from Genesis to Judgment, regardless of any recurring patterns or cycles in nature. In the Medieval period, Thomas Aquinas agreed. Nevertheless, it was not until 1602 that the concept of linear time was more clearly formulated—by the English philosopher Francis Bacon. In 1687, Newton advocated linear time when he represented time mathematically by using a continuous straight line. The concept of linear time was promoted by Barrow, Newton, Leibniz, Locke and Kant. Kant argued that it is a matter of necessity. In 19th century Europe, the idea of linear time became dominant in both science and philosophy. However, in the twentieth century, Gödel and several others discovered solutions to the equations of Einstein’s general theory of relativity that allowed closed loops of proper time (closed time-like curves). Each event in the loop lies in its own causal history. These causal loops or closed curves in spacetime allow you to go forward continuously in time until you arrive back into your past. The idea is that time is not ordered globally, but only locally, that is, for short durations. As far as we can tell today, our universe does not exemplify any of these solutions to Einstein’s equations.

There are many mathematically possible topologies for time. Time could be linear or closed (circular). Linear time might have a beginning or have no beginning; it might have an ending or no ending. There could be two disconnected time streams, in two parallel worlds; perhaps one would be linear and the other circular. There could be branching time, in which time is like the letter "Y", and there could be a fusion time in which two different time streams are separate for some durations but merge into one for others. Time might be two-dimensional instead of one-dimensional. For all these topologies, there could be discrete time or, instead, continuous time. That is, the micro-structure of time's instants might be analogous to a sequence of integers or, instead, analogous to a continuum of real numbers. If time were discrete or quantized, physicists' favorite candidate for the lower limit on a possible duration is the Planck time of about 10⁻⁴³ seconds.

d. The Extent of Time

In ancient Greece, Plato and Aristotle agreed that the past is eternal. Aristotle claimed that time had no beginning because, for any time, we always imagine an earlier time. The Medieval philosopher Thomas Aquinas objected to Aristotle's position, saying that, although the world could have existed infinitely into the past, in fact it did not, and our imagination cannot always be trusted to tell us how things are. Instead, the past is finite because time began with God’s creation of Earth a finite time ago. In the late 17th century, Newton declared that time is infinite in both the past and future. Then, in the 18th century, Kant argued that this is not an empirical matter but rather a matter of necessity.

It is still an open question in physics whether past time is finite or infinite, but it is generally agreed that future time is infinite.

In the most well accepted version of the Big Bang Theory in the field of astrophysics, about 13.8 billion years ago our universe had a nearly infinitesimal size and a nearly infinite gravitational field. Nearly all physicists believe the extent of past time is at least 13.8 billion years. Many physicists believe that past time is infinite, while many others believe instead that time began 13.8 billion years ago; the issue remains unsettled. There are solutions to Einstein's equations of relativity in which spacetime is infinite and other solutions in which spacetime is finite. In the Big Bang theory based on the Russian physicist Alexander Friedmann’s solution to Einstein’s equations of general relativity, if we follow time backwards from the present, there was a time when the universe began with zero volume, infinite density and infinite temperature. The universe has been expanding and cooling ever since. Nearly all physicists believe that Friedmann’s solution cannot be trusted for the earliest times, when the diameter of the universe is so small that quantum theory must be taken into account.

In the more popular version of the Big Bang theory, the Big Bang theory with inflation, the universe once was an extremely tiny bit of explosively inflating material. About 10⁻³⁶ seconds later, this inflationary material underwent an accelerating expansion that lasted for 10⁻³⁰ seconds, during which the universe expanded by a factor of 10⁷⁸. Once this brief period of inflation ended, the volume of the universe was the size of an orange, and the energy causing the inflation was transformed into a dense gas of expanding hot radiation. This expansion has never stopped. But with expansion came cooling, and this allowed individual material particles to condense and eventually, much later, to clump into stars and galaxies. The mutual gravitational force of the universe’s matter and energy decelerated the expansion, but seven billion years after our Big Bang, the universe’s dark energy became especially influential and started to accelerate the expansion again, although not at the explosive rate of the initial inflation. This more recent inflation of the universe will continue forever at an exponentially accelerating rate, turning space into an almost perfect vacuum as the remaining matter-energy becomes more and more diluted.

The Big Bang Theory with or without inflation is challenged by other theories such as a cyclic theory in which every trillion years the expansion changes to contraction until the universe becomes infinitesimal, at which time there is a bounce or new Big Bang. The cycles of Bang and Crunch continue forever, and they might or might not have existed forever. For the details, see (Steinhardt 2012). A promising but as yet untested theory called "eternal inflation" implies that our particular Big Bang is one among many other Big Bangs that occurred within a background spacetime that is actually infinite in space and in past time and future time.

Consider this challenging argument from (Newton-Smith 1980, p. 111) that claims time cannot have had a finite past: “As we have reasons for supposing that macroscopic events have causal origins, we have reason to suppose that some prior state of the universe led to the product of [the Big Bang]. So the prospects for ever being warranted in positing a beginning of time are dim.” The usual response to Newton-Smith here is two-fold. First, our Big Bang is a microscopic event, not a macroscopic event. Second, if a confirmed cosmological theory implies there is a first event, we can say this event is an exception to the metaphysical assumption that every event has a prior cause.

When we discuss whether time was infinite in the past or will be in the future, we are presuming an ordinary scale of time, one for which it is easy to find periodic processes to use in building clocks. However, if we alter this scale of time t by using a logarithmic scale, we can turn the finite into the infinite. With a scale change from time t to log t, a finite event lasting from year 0 to year 1 becomes an infinite event lasting from −∞ to 0, because log 0 = −∞ and log 1 = 0.
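
The arithmetic of this scale change is easy to verify. Below is a minimal sketch in Python, purely an illustration of the remapping just described rather than anything demanded by the physics:

    import math

    # A few moments inside the finite interval (0, 1], measured in years.
    moments = [1.0, 0.1, 0.001, 1e-9, 1e-30]

    for t in moments:
        print(f"t = {t:g} -> log t = {math.log10(t):g}")

    # As t approaches year 0, log t decreases without bound:
    # t = 1     -> log t = 0
    # t = 1e-30 -> log t = -30

Nothing about the events themselves changes under the relabeling; what the example shows is that judgments of finitude or infinitude are sensitive to the choice of temporal scale.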

e. Does Time Emerge from Something More Basic?

Is time a fundamental feature of nature, or does it emerge from more basic timeless features, in analogy to the way the smoothness of water flow emerges from the complicated behavior of the underlying molecules, none of which is properly called "smooth"? That is, is time ontologically basic (fundamental), or does it depend on something even more basic? We might rephrase this question more technically by asking whether facts about time supervene on more basic facts. Facts about sound supervene on, or are a product of, facts about changes in the molecules of the air, so molecular change is more basic than sound. Minkowski argued in 1908 that we should believe spacetime is more basic than time, and this argument is generally well accepted. However, is this spacetime itself basic? Some physicists argue that spacetime is the product of some more basic micro-substrate at the level of the Planck length, although there is no agreed-upon theory of what the substrate is. Other physicists say space is not basic, but time is. In 2004, after winning the Nobel Prize in physics, David Gross expressed this viewpoint:

Everyone in string theory is convinced…that spacetime is doomed. But we don’t know what it’s replaced by. We have an enormous amount of evidence that space is doomed. We even have examples, mathematically well-defined examples, where space is an emergent concept…. But in my opinion the tough problem that has not yet been faced up to at all is, “How do we imagine a dynamical theory of physics in which time is emergent?” …All the examples we have do not have an emergent time. They have emergent space but not time. It is very hard for me to imagine a formulation of physics without time as a primary concept because physics is typically thought of as predicting the future given the past. We have unitary time evolution. How could we have a theory of physics where we start with something in which time is never mentioned?

The discussion in this section about whether time is ontologically basic has no implications for whether the word “time” is semantically basic or whether the idea of time is basic to concept formation.

f. Time and Conventionality

It is an arbitrary convention that our civilization designs clocks to count up to higher numbers rather than down to lower numbers as time goes on. It is just a matter of convenience that we agree to the convention of re-setting our clock by one hour as we cross a time-zone boundary. It is an arbitrary convention that there are twenty-four hours in a day instead of ten, that there are sixty seconds in a minute rather than twelve, that a second lasts as long as it does, and that the origin of our coordinate system for time is associated with the birth of Jesus on some calendars but with the emigration of Mohammed from Mecca to Medina on other calendars.

According to relativity theory, if two events couldn't have had a causal effect on each other, then we analysts are free to choose a reference frame in which one of the events happens first, or instead the other event happens first, or instead the two events are simultaneous. But once a frame is chosen, this fixes the time order of any pair of events. This point is discussed further in the next section.

In 1905, the French mathematician and physicist Henri Poincaré argued that time is not a feature of reality to be discovered, but rather is something we've invented for our convenience. Because, he said, possible empirical tests cannot determine very much about time, he recommended adopting the convention of whatever concept of time makes for the simplest laws of physics. Opposing this conventionalist picture, other philosophers of science have recommended an objectivist picture on which time is an objective feature of reality rather than a human invention.

Can our standard clock be inaccurate? Yes, say the objectivists about the standard clock. No, say the conventionalists, who say that the standard clock is accurate by convention; if it acts strangely, then all clocks must act strangely in order to stay in synchrony with the standard clock that tells everyone the correct time. A closely related question is whether, when we change our standard clock from the Earth's rotation to an atomic clock, or from one kind of atomic clock to another, we are merely adopting constitutive conventions for our convenience, or instead are making a choice that is more correct in some objective sense.

Consider how we use a clock to measure how long an event lasts, its duration. We always use the following method: Take the time of the instant at which the event ends, and subtract the time of the instant when the event starts. To find how long an event lasts that starts at 3:00 and ends at 5:00, we subtract and get the answer of two hours. Is the use of this method merely a convention, or in some objective sense is it the only way that a clock should be used? The method of subtracting the start time from the end time is called the "metric" of time. Is there an objective metric, or is time "metrically amorphous," to use a phrase from Adolf Grünbaum, because there are alternatively acceptable metrics, such as subtracting the square roots of those times, or perhaps using the square root of their difference and calling this the "duration"?
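
The contrast between the standard metric and a Grünbaum-style alternative can be made concrete with a short sketch; the function names below are hypothetical, chosen only for this illustration:

    import math

    def standard_duration(start, end):
        # The conventional metric: subtract the start time from the end time.
        return end - start

    def nonstandard_duration(start, end):
        # One alternative mentioned above: subtract the square roots
        # of the two clock readings instead.
        return math.sqrt(end) - math.sqrt(start)

    # An event that starts at 3:00 and ends at 5:00 (readings in hours):
    print(standard_duration(3.0, 5.0))     # 2.0
    print(nonstandard_duration(3.0, 5.0))  # about 0.504

Both assignments are internally consistent; the philosophical question is whether any fact about time itself makes the first assignment the uniquely correct one.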

There is an ongoing dispute about the extent to which there is an element of conventionality in Einstein’s notion of two separated events happening at the same time. Einstein said that to define simultaneity in a single reference frame you must adopt a convention about how fast light travels going one way as opposed to coming back (or going any other direction). He recommended adopting the convention that light travels the same speed in all directions (in a vacuum free of the influence of gravity). He claimed it must be a convention because there is no way to measure whether the speed is really the same in opposite directions since any measurement of the two speeds between two locations requires first having synchronized clocks at those two locations, yet the synchronization process will presuppose whether the speed is the same in both directions. The philosophers B. Ellis and P. Bowman in 1967 and D. Malament in 1977 gave different reasons why Einstein is mistaken. For an introduction to this dispute, see the Frequently Asked Questions. For more discussion, see (Callender and Hoefer 2002).
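
One common way to state the issue precisely is Reichenbach's ε-formulation of clock synchronization, a standard gloss on the debate rather than Einstein's own wording. A light signal leaves clock A at reading t1, reflects off distant clock B, and returns to A at reading t3; the convention assigns B the reading t1 + ε(t3 − t1) for the reflection event, for some chosen ε strictly between 0 and 1. Einstein's choice is ε = 1/2, which makes the outbound and return speeds of light equal. A minimal sketch:

    def reflection_reading(t1, t3, epsilon=0.5):
        # Reichenbach synchronization: assign the distant clock the reading
        # t1 + epsilon*(t3 - t1) for the reflection event. Einstein's
        # convention is epsilon = 0.5, giving equal one-way speeds of light.
        assert 0.0 < epsilon < 1.0
        return t1 + epsilon * (t3 - t1)

    # A signal leaves clock A at t = 0 s and returns at t = 2 s.
    print(reflection_reading(0.0, 2.0))       # 1.0  (Einstein's convention)
    print(reflection_reading(0.0, 2.0, 0.9))  # 1.8  (a nonstandard choice)

Every admissible ε preserves the round-trip facts; the dispute involving Einstein, Ellis and Bowman, and Malament concerns whether anything other than convenience singles out ε = 1/2.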

4. What Does Science Require of Time?

Physics, including astronomy, is the only science that explicitly studies time, although all sciences use the concept. Yet different physical theories place different demands on this concept. So, let's discuss time from the perspective of current science.

Physical theories treat time as being another dimension, analogous to a spatial dimension, and they describe an event as being located at temporal coordinate t, where t is a real number. Each specific temporal coordinate is called a "time." An instantaneous event is a moment and is located at just one time, or one temporal coordinate, say t1. It is said to last for an "instant." If the event is also a so-called "point event," then it is located at a single spatial coordinate, say <x1, y1, z1>. Locations constitute space, and times constitute time.

The fundamental laws of science do not pick out a present moment or present time. This fact is often surprising to a student who takes a science class and notices all sorts of talk about the present. Scientists frequently do apply some law of science while assigning, say, t0 to be the name of the present moment, then calculate this or that. This insertion of the fact that t0 is the present is an initial condition of the situation to which the law is being applied, and is not part of the law itself. The laws themselves treat all moments equally.

Science does not require that its theories have symmetry under time-translation, but this is a goal that physicists do pursue for their basic (fundamental) theories. If a theory has symmetry under time-translation, then its laws do not change as time goes by. The law of gravitation in the 21st century is the same law that held one thousand centuries ago.

Physics also requires almost all the basic laws of science to be time symmetric. This means that a basic law must not distinguish between the backward and forward directions of time.

In physics we need to speak of one event happening pi seconds after another, and of one event happening the square root of three seconds after another. In ordinary discourse outside of science we would never need this kind of precision. The need for this precision has led to requiring time to be a linear continuum, very much like a segment of the real number line. So, one requirement that relativity, quantum mechanics and the Big Bang theory place on any duration is that it be a continuum. This implies that time is not quantized, even in quantum mechanics. In a world with time being a continuum, we cannot speak of some event being caused by the state of the world at the immediately preceding instant, because there is no immediately preceding instant, just as there is no real number immediately preceding pi: between any candidate predecessor and pi there is always another real number.

Einstein's theory of relativity has had the biggest impact on our understanding of time. But Einstein was not the first physicist to appreciate the relativity of motion. Galileo and Newton would have said speed is relative to reference frame. Einstein would agree but would add that durations and occurrence times are also relative. For example, any observer fixed to a moving railroad car in which you are seated will say your speed is zero, whereas an observer fixed to the train station will say you have a positive speed. But as Galileo and Newton understood relativity, both observers will agree about the time you had lunch on the train. Einstein would say they are making a mistake about your lunchtime; they should disagree about when you had lunch. For Newton, the speed of anything, including light, would be different in the two frames that move relative to each other, but Einstein said Maxwell’s equations require the speed of light to be invariant. This implies that the Galilean equations of motion are incorrect. Einstein figured out how to change the equations; the consequence is the Lorentz transformations, in which two observers in relative motion will have to disagree also about the durations and occurrence times of events. What is happening here is that Einstein is requiring a mixing of space and time; Minkowski said it follows that there is a spacetime which divides into its space and time differently for different observers.
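
A short numerical sketch may help; the values are invented for illustration. For frames moving along a shared x-axis, the Lorentz transformation reads t' = γ(t − vx/c²) and x' = γ(x − vt), with γ = 1/√(1 − v²/c²), and it is the vx/c² term that forces observers in relative motion to disagree about when an event occurs:

    import math

    C = 299_792_458.0  # speed of light in meters per second

    def lorentz(t, x, v):
        # Coordinates of the event (t, x) in a frame moving at velocity v
        # along the x-axis: t' = gamma*(t - v*x/c^2), x' = gamma*(x - v*t).
        gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
        return gamma * (t - v * x / C**2), gamma * (x - v * t)

    # An event at t = 100 s, x = 3.0e10 m in the station frame, described
    # from a frame moving at 60% of light speed:
    t_prime, x_prime = lorentz(100.0, 3.0e10, 0.6 * C)
    print(t_prime, x_prime)  # roughly 50 s and 1.5e10 m

At everyday speeds the ratio v/c is so tiny that the disagreement about times is unmeasurable, which is why Galileo and Newton never noticed it.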

One consequence of this is that relativity's spacetime is more fundamental than either space or time alone. Spacetime is commonly said to be four-dimensional, but because time is not space it is more accurate to think of spacetime as being (3 + 1)-dimensional. Time is a distinguished, linear subspace of four-dimensional spacetime.

Time is relative in the sense that the duration of an event depends on the reference frame used in measuring the duration. Specifying that an event lasted three minutes without giving even an implicit indication of the reference frame is like asking someone to stand over there and not giving any indication of where “there” is. One implication of this is that it becomes more difficult to defend McTaggart's A-theory which says that properties of events such as "happened twenty-three minutes ago" and "is happening now" are basic properties of events and are not properties relative to chosen reference frames.

Another profound idea from relativity theory is that accurate clocks do not tick the same for everyone everywhere. Each object has its own proper time, and so the correct time shown by a clock depends on its history (in particular, its history of speed and gravitational influence). Relative to clocks that are stationary in the reference frame, clocks in motion run slower, as do clocks in stronger gravitational fields. In general, two synchronized clocks do not stay synchronized if they move relative to each other or undergo different gravitational forces. Clocks in cars driving by your apartment building run slower than your apartment’s clock.

Suppose there are two twins. One stays on Earth while the other twin zooms away in a spaceship and returns ten years later according to the spaceship’s clock. That same arrival event could be twenty years later according to an Earth-based clock, provided the spaceship went fast enough. The Earth twin would now be ten years older than the spaceship twin. So, one could say that the Earth twin lived two seconds for every one second of the spaceship twin.
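
The speed needed for this two-for-one ratio follows from the standard time-dilation factor γ = 1/√(1 − v²/c²) of special relativity; the short sketch below just evaluates that formula:

    import math

    def gamma(v_over_c):
        # Time-dilation factor for a speed given as a fraction of c.
        return 1.0 / math.sqrt(1.0 - v_over_c**2)

    # For the Earth twin to age two seconds per spaceship second, gamma
    # must equal 2, which requires v = sqrt(3)/2 of light speed:
    v = math.sqrt(3) / 2
    print(v)              # about 0.866
    print(gamma(v))       # 2.0

    # Ten years of spaceship time then correspond to twenty Earth years:
    print(10 * gamma(v))  # 20.0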

According to relativity theory, the order of events in time is only a partial order, because for any event e there is some event f such that, independently of any choice of reference frame, e occurs neither before f, nor after f, nor simultaneously with f. These pairs of events are said to be in each other’s “absolute elsewhere,” which is another way of saying that neither could causally affect the other, because even a light signal could not reach from one event to the other. Adding a coordinate system or reference frame to spacetime will force the events in all these pairs to have an order and so force the set of all events to be totally ordered in time, but what is interesting philosophically is that there is a leeway in the choice of the frame. For any two specific events e and f that could never causally affect each other, the analyst may choose a frame in which e occurs first, or choose another frame in which f occurs first, or instead choose another frame in which they are simultaneous. Any choice of frame will be correct. Such is the surprising nature of time according to relativity theory.
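
This leeway can be exhibited numerically, again with invented values. Take two events whose separations in some frame are Δt = 1 second and Δx = 9 × 10⁸ meters; since light covers only about 3 × 10⁸ meters per second, neither event can causally affect the other. The transformed time separation is Δt' = γ(Δt − vΔx/c²), and its sign depends on the frame velocity v:

    import math

    C = 299_792_458.0  # speed of light in meters per second

    def delta_t_prime(dt, dx, v):
        # Time separation of a pair of events in a frame moving at v.
        gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
        return gamma * (dt - v * dx / C**2)

    dt, dx = 1.0, 9.0e8  # spacelike separation: dx exceeds c*dt

    print(delta_t_prime(dt, dx, 0.1 * C))         # positive: e occurs first
    print(delta_t_prime(dt, dx, 0.5 * C))         # negative: f occurs first
    print(delta_t_prime(dt, dx, C**2 * dt / dx))  # zero (up to rounding): simultaneous

For timelike-separated pairs, by contrast, no frame moving slower than light can reverse the order, so causal order is frame-invariant.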

General relativity places other requirements on events that are not required in special relativity. Unlike in Newton's physics and the physics of special relativity, in general relativity the spacetime is not a passive container for events; it is dynamic in the sense that any change in the amount and distribution of matter-energy will change the curvature of spacetime itself. Gravity is a manifestation of the warping of spacetime. In special relativity, its Minkowski spacetime has no curvature. In general relativity a spacetime with no mass or energy might or might not have curvature, so the geometry of spacetime is not always determined by the behavior of matter and energy.

In 1650, Bishop James Ussher declared that the beginning of time occurred on October 23, 4004 B.C.E. Today's science disagrees. According to one interpretation of the Big Bang theory of cosmology, the universe began 13.8 billion years ago as spacetime started to expand from an infinitesimal volume; and the expansion continues today, with the volume of space now doubling in size about every ten billion years. The amount of future time is a potential infinity (in Aristotle's sense of the term) as opposed to an actual infinity. For more discussion of all these compressed remarks, see What Science Requires of Time.

5. What Kinds of Time Travel are Possible?

Most philosophers and scientists believe time travel is physically possible. To define the term, we can say that in time travel, the traveler’s journey as judged by the traveler's correct clock takes a different amount of time than the journey does as judged by the correct clocks of those who do not take the journey. The physical possibility of travel to the future is well accepted, but travel to the past is more controversial, and time travel that changes the future or the past is generally considered to be impossible.

According to relativity theory, there are two ways to travel into the future using time dilation—either by moving at high speed or by taking advantage of the presence of an intense gravitational field. If you move at extremely high speed, you can travel into the future to the year 2300 on Earth (as measured by Earth-based clocks or by clocks elsewhere that are not moving relative to Earth) while your personal clock measures that only ten years have elapsed. You can participate in that future, not just view it. But you cannot get back to the twenty-first century on Earth by reversing your velocity. It's not that you suddenly jump into the Earth's future of the year 2300; you have continually been traveling forward in both your personal time and the world's external time, and you could have been continuously observed from Earth. But as judged by the world's external time you do have a much longer lifetime than your biological twin whom you left back on Earth long ago. (See the discussion of the twin paradox for the solution to the famous paradox involving time dilation.)

In addition to time dilation due to high speed, there is time dilation due to being in the presence of a gravitational field; this is called gravitational time dilation and is closely related to the gravitational red shift. Because of Earth's gravity, a person who lives in a ground-floor apartment ages more slowly than a twin who lives in the top-floor apartment of the same building. This kind of time travel is more noticeable if the younger twin lives near a black hole, where the gravity is much stronger than on Earth.
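
The size of the apartment-building effect can be estimated from the weak-field approximation, in which a clock at height h above another runs fast by roughly the fraction gh/c²; the height below is an assumed figure used only for illustration:

    # Weak-field estimate of gravitational time dilation between floors.
    g = 9.8            # gravitational acceleration, meters per second squared
    h = 30.0           # assumed height difference between apartments, meters
    c = 299_792_458.0  # speed of light, meters per second

    fraction = g * h / c**2
    seconds_per_year = 365.25 * 24 * 3600

    print(fraction)                     # about 3.3e-15
    print(fraction * seconds_per_year)  # about 1e-7 seconds per year

So the ground-floor twin falls behind by only about a tenth of a microsecond per year, which is why a black hole, with its vastly stronger field, makes for the more vivid example.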

You may have heard the remark that you have no time to take a spaceship ride across the galaxy since it is 100,000 light years across. So, even if you were to travel at just under the speed of light, it would take you over 100,000 years. Who has that kind of time? This remark contains a misunderstanding about time dilation. This is 100,000 years as judged by clocks that are stationary relative to Earth, not as judged by your clock. If you were in the spaceship that accelerated quickly to just under the speed of light, then you and your clock might age hardly at all as you traveled across the galaxy. In fact, with a very fast spaceship, you have plenty of time to go anywhere in the universe you wish to go.
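
The claim about crossing the galaxy can be checked with the same dilation factor; the speeds below are illustrative. Earth-frame clocks record roughly the distance divided by the speed, while the traveler's clock records that figure divided by γ:

    import math

    def traveler_years(distance_ly, v_over_c):
        # Earth-frame travel time in years (distance in light-years),
        # divided by the dilation factor gamma, gives the traveler's time.
        earth_years = distance_ly / v_over_c
        gamma = 1.0 / math.sqrt(1.0 - v_over_c**2)
        return earth_years / gamma

    # Crossing the galaxy, roughly 100,000 light-years across:
    print(traveler_years(100_000, 0.99))          # about 14,200 years
    print(traveler_years(100_000, 0.9999999999))  # about 1.4 years

The closer the speed gets to that of light, the smaller the traveler's elapsed time, with no lower bound.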

How about travel to the past, the more interesting kind of time travel? This is not allowed by either Newton's physics or Einstein's special relativity, but is allowed by general relativity. In 1949, Kurt Gödel discovered a solution to Einstein’s field equations that allows continuous, closed future-directed timelike curves. To say this more simply, Gödel discovered that in some possible worlds that obey the theory of general relativity, you can continually travel forward in your personal time but eventually arrive into your own past. In this unusual non-Minkowski spacetime, the universe as a whole is the time machine; no one needs to build a device in order to travel this way.

The situation required for travel to the past is much more exotic than merely having a fast spaceship, but scientists do know how you could get back to Hitler’s office in Berlin in a manner consistent with the laws of science. Unfortunately, you cannot do anything that hasn’t already been done, or else there would be a contradiction. In fact, if you did go back, then you would already have been back there. So, you can participate in a Hitler assassination attempt, but you cannot change its outcome. For the same reason, you cannot kill your childhood self no matter how hard you try. Also, when you travel to the past, you do not suddenly fade out of the present and into some past time, although this is how time travel is so often portrayed in films.

There are several well known philosophical arguments against past-directed time travel. None are generally considered to be decisive. Here are the arguments:

  1. Time travel is impossible because if it were possible we should have seen many time travelers by now, but nobody has encountered any time travelers.
  2. If there were time travel, then when time travelers go back and attempt to change history they must always botch their attempts to change anything, and it will appear to anyone watching them at the time as if nature is conspiring against them. Since observers have never witnessed this apparent conspiracy of nature, there is no time travel.
  3. If there were travel to the past along a closed timelike curve, then these events would occur before themselves and after themselves, but this violates our definition of the word “before.”
  4. Travel to the past is impossible because it allows the gaining of information for free. For example, buy a copy of Darwin's book The Origin of Species, which was published in 1859. In the 21st century, enter a time machine with it, go back to 1855 and give the book to Darwin himself. He could use your copy in order to write his manuscript, which he sends off to the publisher. If so, who first came up with the knowledge about evolution? Because this scenario contradicts what we know about where knowledge comes from, past-directed time travel isn't really possible.
  5. Suppose you enter a time machine and bring along several male and female squirrels of one species. You take these back to the time of the dinosaurs. The squirrels begin breeding, the dinosaurs die out, and the species of squirrel survives into modern times. Since this scenario allows a species to come into existence without its going through the process of Darwinian evolution, time travel is impossible.
  6. In 1972, John Earman described a rocket ship that carries a time machine capable of firing a probe (perhaps a smaller rocket) into its recent past. The ship is programmed to fire the probe at a certain time unless a safety switch is on at that time. Suppose the safety switch is programmed to be turned on if and only if the “return” or “impending arrival” of the probe is (or has been) detected by a sensing device on the ship. Does the probe get launched? At first glance it seems to be launched if and only if it is not launched. Is this like designing a gun that shoots if and only if it does not shoot? Not quite. The argument of this paradox depends on the assumptions that the rocket ship does work as intended—that people are able to build the computer program, the probe, the safety switch, and an effective sensing device. Earman himself says all these premises are acceptable and so the only weak point in the reasoning to the paradoxical conclusion is the assumption that travel to the past is physically possible.

These six complaints are a mixture of arguments that past-directed time travel is not logically possible, that it is not physically possible, that it is not technologically possible with current technology, and that it is unlikely, given today's empirical evidence.

For more discussion of time travel, see the encyclopedia article “Time Travel.”

6. Does Time Require Change? (Relational vs. Substantival Theories)

By "time requires change," we mean that for time to exist something must change its properties over time. We do not mean change its properties over space, as in changing color from top to bottom. There are two main philosophical theories about whether time requires change: relational theories and substantival theories.

In a relational theory of time, time is defined in terms of relationships among objects, in particular their changes. Substantival theories are theories that imply time is substance-like in that it exists independently of changes; it exists independently of all the spacetime relations exhibited by physical processes. This theory allows "empty time" in which nothing changes. On the other hand, relational theories do not allow this. They imply that at every time something is happening—such as an electron moving through space or a tree leaf changing its color. In short, no change implies no time. Some substantival theories describe spacetime as being like a container for events. The container exists with or without events in it. Relational theories imply there is no container without contents. But the substance that substantivalists have in mind is more like a medium pervading all of spacetime and less like an external container. The vast majority of relationists present their relational theories in terms of actually instantiated relations and not merely possible relations.

Everyone agrees time cannot be measured without there being changes, because we measure time by observing changes in some property or other, but the present issue is whether time exists without changes. On this issue, we need to be clear about what sense of change and what sense of property we are intending. For the relational theory, the term "property" is intended to exclude what Nelson Goodman called grue-like properties. Let us define an object to be grue if it is green before the beginning of the year 1888 but is blue thereafter. Then the world’s chlorophyll undergoes a change from grue to non-grue in 1888. We’d naturally react to this by saying that change in chlorophyll's grue property is not a “real change” in the world’s chlorophyll.

Does Queen Anne’s death change when I forget about it? Yes, but the debate here is whether the event’s intrinsic properties can change, not merely its non-intrinsic properties such as its relationships to us. This special intrinsic change is called by many names: secondary change, second-order change, McTaggartian change, and McTaggart change. Second-order change is the kind of change that A-theorists say occurs when Queen Anne's death recedes ever farther into the past. The objection from the B-theorists here is that this is not a "real, objective, intrinsic change" in her death. First-order change is ordinary change, the kind that occurs when a leaf changes from green to brown, or a person changes from sitting to standing.

Einstein's general theory of relativity does imply it is possible for spacetime to exist while empty of events. This empty time is permissible according to the substantival theory but not allowed by the relational theory. Yet Einstein considered himself to be a relationist.

Substantival theories are sometimes called "absolute theories." Unfortunately, the term "absolute theory" is used in two other ways. A second sense of "to be absolute" is to be immutable, or changeless. A third sense is to be independent of observer or reference frame. Although Einstein’s theory implies there is no absolute time in the sense of being independent of reference frame, it is an open question whether relativity theory undermines absolute time in the sense of substantival time; Einstein believed it did, but many philosophers of science do not.

The first advocate of a relational theory of time was Aristotle. He said, “neither does time exist without change.” (Physics, book IV, chapter 11, page 218b) However, the battle lines were most clearly drawn in the early 18th century when Leibniz argued for the relational position against Newton, who had adopted a substantival theory of time. Leibniz’s principal argument against Newton is a reductio ad absurdum. Suppose Newton’s space and time were to exist. But one could then imagine a universe just like ours except with everything shifted five kilometers east and five minutes earlier. However, there would be no reason why this shifted universe does not exist and ours does. Now we have arrived at a contradiction because, if there is no reason for there to be our universe rather than the shifted universe, then we have violated Leibniz’s Principle of Sufficient Reason: that there is an understandable reason for everything being the way it is. So, by reductio ad absurdum, Newton’s substantival space and time do not exist. In short, the trouble with Newton’s theory is that it leads to too many unnecessary possibilities.

Newton offered this two-part response: (1) Leibniz is correct to accept the Principle of Sufficient Reason regarding the rational intelligibility of the universe, but there do not have to be knowable reasons for humans; God might have had His own sufficient reason for creating the universe at a given place and time even though mere mortals cannot comprehend His reasons. (2) The bucket thought-experiment shows that acceleration relative to absolute space is detectable; thus absolute space is real, and if absolute space is real, so is absolute time. Here's how to detect absolute space. Suppose we tie a bucket’s handle to a rope hanging down from a tree branch. Partially fill the bucket with water, and let it come to equilibrium. Notice that there is no relative motion between the bucket and the water, and in this case the water surface is flat. Now spin the bucket, and keep doing this until the water and the bucket have the same angular velocity. In this second case there is again no relative motion between the bucket and the water, but now the water surface is concave. So spinning makes a difference, but how can a relational theory explain the difference in the shape of the surface? It cannot, says Newton. When the bucket and water are spinning, what are they spinning relative to? Because we can disregard the rest of the environment including the tree and rope, says Newton, the only explanation of the difference in surface shape between the non-spinning case and the spinning case is that when it is not spinning there is no motion relative to space, but when it is spinning there is motion relative to a third thing, space itself, and space itself is acting upon the water surface to make it concave. Alternatively expressed, the key idea is that the presence of centrifugal force is a sign of rotation relative to absolute space. Leibniz had no rebuttal. So, for over two centuries after this argument was created, Newton’s absolute theory of space and time was generally accepted by European scientists and philosophers.

About fifty years later, Kant entered the arena on the side of Newton. In a space containing only a single glove, said Kant, Leibniz could not account for its being a right-handed glove versus a left-handed glove because all the internal relationships would be the same in either case. However, we all know that there is a real difference between a right and a left glove, so this difference can only be due to the glove’s relationship to space itself. But if there is a “space itself,” then the absolute or substantival theory is better than the relational theory.

Newton’s theory of time was dominant in the 18th and 19th centuries, even though during those centuries Huygens, Berkeley, and Mach had entered the arena on the side of Leibniz. Mach argued that it must be the remaining matter in the universe, such as the "fixed" stars, which causes the water surface in the bucket to be concave, and that without these stars or other matter, a spinning bucket would have a flat surface. In the 20th century, Hans Reichenbach and the early Einstein declared the special theory of relativity to be a victory for the relational theory, in large part because a Newtonian absolute space would be undetectable. Special relativity, they also said, ruled out a space-filling ether, the leading candidate for substantival space, so the substantival theory was incorrect. And the response to Newton’s bucket argument is to note Newton’s error in not considering the environment. Einstein agreed with Mach that, if you hold the bucket still but spin the background stars in the environment, then the water will creep up the side of the bucket and form a concave surface—so the bucket thought experiment does not require absolute space.

Although it was initially believed by Einstein and Reichenbach that relativity theory supported Mach regarding the bucket experiment and the absence of absolute space, this belief is controversial. Many philosophers argue that Reichenbach and the early Einstein overstated the amount of metaphysics that can be extracted from the physics. A theory can be "substantival" in the sense of positing something independent of reference frame, or in the sense of positing something independent of events; isn't only the first sense ruled out when we reject a space-filling ether? The critics admit that general relativity does show that the curvature of spacetime is affected by the distribution of matter, so today it is no longer plausible for a substantivalist to assert that the "container" is independent of the behavior of the matter it contains. But, so they argue, general relativity does not rule out a more sophisticated substantival theory in which spacetime exists even if it is empty and in which two empty universes could differ in the curvature of their spacetime. For this reason, by the end of the 20th century, substantival theories had gained some ground.

In 1969, Sydney Shoemaker presented an argument attempting to establish the understandability of time existing without change, as Newton’s absolutism requires. Divide all space into three disjoint regions, called region 3, region 4, and region 5. In region 3, change ceases every third year for one year. People in regions 4 and 5 can verify this and then convince the people in region 3 of it after they come back to life at the end of their frozen year. Similarly, change ceases in region 4 every fourth year for a year; and change ceases in region 5 every fifth year. Every sixty years, that is, every 3 x 4 x 5 years, all three regions freeze simultaneously for a year. In year sixty-one, everyone comes back to life, time having marched on for a year with no change. Note that even if Shoemaker’s scenario successfully shows that the notion of empty time is understandable, it does not show that empty time actually exists. If we accept that empty time occasionally exists, then someone who claims the tick of the clock lasts one second could be challenged by a skeptic who says perhaps empty time periods