Philosophy of Technology

Like many domain-specific subfields of philosophy, such as philosophy of physics or philosophy of biology, philosophy of technology is a comparatively young field of investigation. It is generally thought to have emerged as a recognizable philosophical specialization in the second half of the 19th century, its origins often being located with the publication of Ernst Kapp’s book, Grundlinien einer Philosophie der Technik (Kapp, 1877). Philosophy of technology continues to be a field in the making and as such is characterized by the coexistence of a number of different approaches to (or, perhaps, styles of) doing philosophy. This poses a problem for anyone aiming to give a brief but concise overview of the field, because “philosophy of technology” does not name a clearly delimited academic domain of investigation characterized by a general agreement among investigators on what the central topics, questions and aims are, and who the principal authors and positions are. Instead, “philosophy of technology” denotes a considerable variety of philosophical endeavors that all in some way reflect on technology.

There is, then, an ongoing discussion among philosophers, scholars in science and technology studies, as well as engineers about what philosophy of technology is, what it is not, and what it could and should be. These questions will form the background against which the present article presents the field. Section 1 begins by sketching a brief history of philosophical reflection on technology from Greek Antiquity to the rise of contemporary philosophy of technology in the mid-19th to mid-20th century. This is followed by a discussion of the present state of affairs in the field (Section 2). In Section 3, the main approaches to philosophy of technology and the principal kinds of questions which philosophers of technology address are mapped out. Section 4 concludes by presenting two examples of current central discussions in the field.

Table of Contents

  1. A Brief History of Thinking about Technology
    1. Greek Antiquity: Plato and Aristotle
    2. From the Middle Ages to the Nineteenth Century: Francis Bacon
    3. The Twentieth Century: Martin Heidegger
  2. Philosophy of Technology: The State of the Field in the Early Twenty-First Century
  3. How Philosophy of Technology Can Be Done: The Principal Kinds of Questions That Philosophers of Technology Ask
  4. Two Exemplary Discussions
    1. What Is (the Nature of) Technology?
    2. Questions Regarding Biotechnology
  5. References and Further Reading

1. A Brief History of Thinking about Technology

The origin of philosophy of technology can be placed in the second half of the 19th century. But this does not mean that philosophers before the mid-19th century did not address questions that would today be thought of as belonging in the domain of philosophy of technology. This section sketches the history of thinking about technology, focusing on a few key figures: Plato, Aristotle, Francis Bacon and Martin Heidegger.

a. Greek Antiquity: Plato and Aristotle

Philosophers in Greek antiquity already addressed questions related to the making of things. The terms “technique” and “technology” have their roots in the ancient Greek notion of “techne” (art, or craft-knowledge), that is, the body of knowledge associated with a particular practice of making (cf. Parry, 2008). Originally the term referred to a carpenter’s craft-knowledge about how to make objects from wood (Fischer, 2004: 11; Zoglauer, 2002: 11), but later it was extended to include all sorts of craftsmanship, such as the ship’s captain’s techne of piloting a ship, the musician’s techne of playing a particular kind of instrument, the farmer’s techne of working the land, the statesman’s techne of governing a state or polis, or the physician’s techne of healing patients (Nye, 2006: 7; Parry, 2008).

In classical Greek philosophy, reflection on the art of making involved both reflection on human action and metaphysical speculation about what the world was like. In the Timaeus, for example, Plato unfolded a cosmology in which the natural world was understood as having been made by a divine Demiurge, a creator who made the various things in the world by giving form to formless matter in accordance with the eternal Ideas. In this picture, the Demiurge’s work is similar to that of a craftsman who makes artifacts in accordance with design plans. (Indeed, the Greek word “Demiourgos” originally meant “public worker” in the sense of a skilled craftsman.) Conversely, according to Plato (Laws, Book X) what craftsmen do when making artifacts is to imitate nature’s craftsmanship – a view that was widely endorsed in ancient Greek philosophy and continued to play an important role in later stages of thinking about technology. On Plato’s view, then, natural objects and man-made objects come into being in similar ways, both being made by an agent according to pre-determined plans.

In Aristotle’s works this connection between human action and the state of affairs in the world is also found. For Aristotle, however, this connection did not consist in a metaphysical similarity in the ways in which natural and man-made objects come into being. Instead of drawing a metaphysical similarity between the two domains of objects, Aristotle pointed to a fundamental metaphysical difference between them, while at the same time making epistemological connections between different modes of knowing on the one hand and different domains of the world about which knowledge can be achieved on the other. In the Physics (Book II, Chapter 1), Aristotle made a fundamental distinction between the domains of physis (the domain of natural things) and poiesis (the domain of non-natural things). The fundamental distinction between the two domains consisted in the kinds of principles of existence underlying the entities that existed in each domain. The natural realm for Aristotle consisted of things that have the principles by which they come into being, remain in existence and “move” (in the senses of movement in space, of performing actions and of change) within themselves. A plant, for instance, comes into being and remains in existence by means of growth, metabolism and photosynthesis, processes that operate by themselves without the interference of an external agent. The realm of poiesis, in contrast, encompasses things of which the principles of existence and movement are external to them and can be attributed to an external agent – a wooden bed, for example, exists as a consequence of a carpenter’s action of making it and an owner’s action of maintaining it.

Here it needs to be kept in mind that on Aristotle’s worldview every entity by its nature was inclined to strive toward its proper place in the world. For example, unsupported material objects move downward, because that is the natural location for material objects. The movement of a falling stone could thus be interpreted as a consequence of the stone’s internal principles of existence, rather than as a result of the operation of a gravitational force external to the stone. On Aristotle’s worldview, contrary to our present-day worldview, it thus made perfect sense to think of all natural objects as being subject to their own internal principles of existence and in this respect being fundamentally distinct from artifacts that are subject to externally operating principles of existence (to be found in the agents that make and maintain them).

In the Nicomachean Ethics (Book VI, Chapters 3-7), Aristotle distinguished between five modes of knowing, or of achieving truth, that human beings are capable of. He began with two distinctions that apply to the human soul. First, the human soul possesses a rational part and a part that does not operate rationally. The non-rational part is shared with other animals (it encompasses the appetites, instincts, etc.), whereas the rational part is what makes us human – it is what makes man the animal rationale. The rational part of the soul in turn can be subdivided further into a scientific part and a deductive or ratiocinative part. The scientific part can achieve knowledge of those entities of which the principles of existence could not have been different from what they are; these are the entities in the natural domain of which the principles of existence are internal to them and thus could not have been different. The deductive or ratiocinative part can achieve knowledge of those entities of which the principles of existence could have been different; the external principles of existence of artifacts and other things in the non-natural domain could have been different in that, for example, the silversmith who made a particular silver bowl could have had a different purpose in mind than the purpose for which the bowl was actually made. The five modes of knowledge that humans are capable of – often denoted as virtues of thought – are faculties of the rational part of the soul and in part map onto the scientific part / deductive part dichotomy. They are what we today would call science or scientific knowledge (episteme), art or craft knowledge (techne), prudence or practical knowledge (phronesis), intellect or intuitive apprehension (nous) and wisdom (sophia). While episteme applies to the natural domain, techne and phronesis apply to the non-natural domain, phronesis applying to actions in general life and techne to the crafts.
Nous and sophia, however, do not map onto these two domains: while nous yields knowledge of unproven (and not provable) first principles and hence forms the foundation of all knowledge, sophia is a state of perfection that can be reached with respect to knowledge in general, including techne.

Both Plato and Aristotle thus distinguished between techne and episteme as pertaining to different domains of the world, but also drew connections between the two. The reconstruction of the actual views of Plato and Aristotle, however, remains a matter of interpretation (see Parry, 2008). For example, while many authors interpret Aristotle as endorsing the widespread view of technology as consisting in the imitation of nature (for example, Zoglauer, 2002: 12), Schummer (2001) recently argued that for Aristotle this was not a characterization of technology or an explication of the nature of technology, but merely a description of how technological activities often (but not necessarily) take place. And indeed, it seems that in Aristotle’s account of the making of things the idea of man imitating nature is much less central than it is for Plato, when he draws a metaphysical similarity between the Demiurge’s work and the work of craftsmen.

b. From the Middle Ages to the Nineteenth Century: Francis Bacon

In the Middle Ages, the ancient dichotomy between the natural and artificial realms and the conception of craftsmanship as the imitation of nature continued to play a central role in understanding the world. On the one hand, the conception of craftsmanship as the imitation of nature became thought of as applying not only to what we would now call “technology” (that is, the mechanical arts), but also to art. Both were thought of as the same sort of endeavor. On the other hand, however, some authors began to consider craftsmanship as being more than merely the imitation of nature’s works, holding that in their craftsmanship humans were also capable of improving upon nature’s designs. This conception of technology led to an elevated appreciation of technical craftsmanship which, as the mere imitation of nature, used to be thought of as inferior to the higher arts in the Scholastic canon that was taught at medieval colleges. The philosopher and theologian Hugh of St. Victor (1096-1141), for example, in his Didascalicon compared the seven mechanical arts (weaving, instrument and armament making, nautical art and commerce, hunting, agriculture, healing, dramatic art) with the seven liberal arts (the trivium of grammar, rhetoric, and dialectic logic, and the quadrivium of astronomy, geometry, arithmetic, and music) and incorporated the mechanical arts together with the liberal arts into the corpus of knowledge that was to be taught (Whitney, 1990: 82ff.; Zoglauer, 2002: 13-16).

While the Middle Ages thus can be characterized by an elevated appreciation of the mechanical arts, with the transition into the Renaissance thinking about technology gained new momentum due to the many technical advances that were being made. A key figure at the end of the Renaissance is Francis Bacon (1561-1626), who was both an influential natural philosopher and an important English statesman (among other things, Bacon held the offices of Lord Keeper of the Great Seal and later Lord Chancellor). In his Novum Organum (1620), Bacon proposed a new, experiment-based method for the investigation of nature and emphasized the intrinsic connectedness of the investigation of nature and the construction of technical “works”. In his New Atlantis (written in 1623 and published posthumously in 1627), he presented a vision of a society in which natural philosophy and technology occupied a central position. In this context it should be noted that before the advent of science in its modern form the investigation of nature was conceived of as a philosophical project, that is, natural philosophy. Accordingly, Bacon did not distinguish between science and technology, as we do today, but saw technology as an integral part of natural philosophy and treated the carrying out of experiments and the construction of technological “works” on an equal footing. On his view, technical “works” were of the utmost practical importance for the improvement of the living conditions of people, but even more so as indications of the truth or falsity of our theories about the fundamental principles and causes in nature (see Novum Organum, Book I, aphorism 124).

New Atlantis is the fictional report of a traveler who arrives at an as yet unknown island state called Bensalem and informs the reader about the structure of its society. Rather than constituting a utopian vision of an ideal society, Bensalem’s society was modeled on the English society of Bacon’s own times that had become increasingly industrialized and in which the need for technical innovations, new instruments and devices to help with the production of goods and the improvement of human life was clearly felt (compare Kogan-Bernstein, 1959). The utopian vision in New Atlantis only pertained to the organization of the practice of natural philosophy. Accordingly, Bacon spent much of New Atlantis describing the most important institution in the society of Bensalem, Salomon’s House, an institution devoted entirely to inquiry and technological innovation.

Bacon provided a long list of the various areas of knowledge, techniques, instruments and devices that Salomon’s House possesses, as well as descriptions of the way in which the House is organized and the different functions that its members fulfill. In his account of Salomon’s House Bacon’s unbridled optimism about technology can be seen: Salomon’s House appears to be in the possession of every possible (and impossible) technology that one could think of, including several that were only realized much later (such as flying machines and submarines) and some that are impossible to realize. (Salomon’s House even possesses several working perpetuum mobile machines, that is, machines that once they have been started up will remain in motion forever and are able to do work without consuming energy. Contemporary thermodynamics shows that such machines are impossible.) Repeatedly it is stated that Salomon’s House works for the benefit of Bensalem’s people and society: the members of the House, for example, regularly travel through the country to inform the people about new inventions, to warn them about upcoming catastrophic events, such as earthquakes and droughts the occurrence of which Salomon’s House has been able to forecast, and to advise them about how they could prepare themselves for these events.

While Bacon is often associated with the slogan “knowledge is power”, what he meant is not what the slogan is often taken to mean today (where “power” is usually understood as political power or power within society), but rather that knowledge of natural causes gives us power over nature that can be used for the benefit of mankind. This can be seen, for instance, from the way Bacon described the Bensalemians’ reasons for founding Salomon’s House: “The end of our foundation is the knowledge of causes, and secret motions of things; and the enlarging of the bounds of human empire to the effecting of all things possible.” Here, inquiry into “the knowledge of causes, and secret motions of things” and technological innovation by producing what is possible (“enlarging of the bounds of human empire to the effecting of all things possible”) are explicitly mentioned as the two principal goals of the most important institution in society. (It should also be noted that Bacon himself never formulated the slogan “knowledge is power”. Rather, in the section “Plan of the Work” in the Instauratio Magna he speaks of the twin aims of knowledge – Bacon’s term is “scientia” – and power – “potentia” – as coinciding in the devising of new works, because one can only have power over nature when one knows and follows nature’s causes. The connection between knowledge and power here is the same as in the description of the purpose of Salomon’s House.)

The improvement of life by means of natural philosophy and technology is a theme which pervades much of Bacon’s works, including the New Atlantis and his unfinished opus magnum, the Instauratio Magna. Bacon saw the Instauratio Magna, the “Great Renewal of the Sciences”, as the culmination of his life work on natural philosophy. It was to encompass six parts, presenting an overview and critical assessment of the knowledge about nature available at the time, a presentation of Bacon’s new method for investigating nature, a mapping of the blank spots in the corpus of available knowledge and numerous examples of how natural philosophy would progress when using Bacon’s new method. It was clear to Bacon that his work could only be the beginning of a new natural philosophy, to be pursued by later generations of natural philosophers, and that he would himself not be able to finish the project he started in the Instauratio. In fact, even the writing of the Instauratio proved a much too ambitious project for one man: Bacon only finished the second part, the Novum Organum, in which he presented his new method for the investigation of nature.

With respect to this new method, Bacon argued against the medieval tradition of building on the Aristotelian/Scholastic canon and other written sources as the sources of knowledge, proposing a view of knowledge gained from systematic empirical discovery instead. For Bacon, craftsmanship and technology played a threefold role in this context. First, knowledge was to be gained by means of observation and experimentation, so inquiry in natural philosophy heavily relied on the construction of instruments, devices and other works of craftsmanship to make empirical investigations possible. Second, as discussed above, natural philosophy should not be limited to the study of nature for knowledge’s sake but should also always inquire how newly gained knowledge could be used in practice to extend man’s power over nature to the benefit of society and its inhabitants (Kogan-Bernstein, 1959; Fischer, 1996: 284-287). And third, technological “works” served as the empirical foundations of knowledge about nature in that a successful “work” could count as an indication of the truth of the involved theories about the fundamental principles and causes in nature (see above).

While in many locations in his writings Bacon suggests that the “pure” investigation of nature and the construction of new “works” are of equal importance, he ultimately prioritized technology. From the description that Bacon gives of how Salomon’s House is organized, for example, it is clear that the members of Salomon’s House also practice “pure” investigation of nature without much regard for its practical use. The “pure” investigation of nature seems to have its own place within the House and to be able to operate autonomously. Still, as a whole, the institution of Salomon’s House is decidedly practice-oriented, such that the relative freedom of inquiry in the end manifests itself within the confines of an environment in which practical applicability is what counts. Bacon draws the same picture in the Instauratio Magna, where he explicitly acknowledges the value of “pure” investigation while at the same time emphasizing that the true aims of natural philosophy (“scientiae veros fines” – see towards the end of the Preface of the Instauratio Magna) concern its benefits and usefulness for human life.

c. The Twentieth Century: Martin Heidegger

Notwithstanding the fact that philosophers have been reflecting on technology-related matters ever since the beginning of Western philosophy, those pre-19th century philosophers who looked at aspects of technology did not do so with the aim of understanding technology as such. Rather, they examined technology in the context of more general philosophical projects aimed at clarifying traditional philosophical issues other than technology (Fischer, 1996: 309). It is probably safe to say that before the mid to late 19th century no philosopher considered himself as being a specialized philosopher of technology, or even as a general philosopher with an explicit concern for understanding the phenomenon of technology as such, and that no full-fledged philosophies of technology had yet been elaborated.

No doubt one reason for this is that before the mid to late 19th century technology had not yet become the tremendously powerful and ubiquitously manifest phenomenon that it would later become. The same holds with respect to science, for that matter: it is only after the investigation of nature stopped being thought of as a branch of philosophy – natural philosophy – and the contemporary notion of science emerged that philosophy of science as a field of investigation could emerge. (Note that the term “scientist”, as the name for a particular profession, was coined in the first half of the 19th century by the polymath and philosopher William Whewell – see Snyder, 2009.) Thus, by the end of the 19th century natural science in its present form had emerged from natural philosophy and technology had manifested itself as a phenomenon distinct from science. Accordingly, “until the twentieth century the phenomenon of technology remained a background phenomenon” (Ihde, 1991: 26) and the philosophy of technology “is primarily a twentieth-century development” (Ihde, 2009: 55).

While one reason for the emergence of the philosophy of technology in the 20th century is the rapid development of technology at the time, according to the German philosopher Martin Heidegger an important additional reason should be pointed out. According to Heidegger, not only did technology in the 20th century develop more rapidly than in previous times and by consequence become a more visible factor in everyday life, but the nature of technology itself also underwent a profound change at the same time. The argument is found in a famous lecture that Heidegger delivered in 1953, titled The Question Concerning Technology (Heidegger, 1962), in which he inquired into the nature of technology. Note that although Heidegger actually talked about “Technik” (and his inquiry was into “das Wesen der Technik”; Heidegger, 1962: 5), the question he addressed is about technology. In German, “Technologie” (technology) is often used to denote modern “high-tech” technologies (such as biotechnology, nanotechnology, etc.), while “Technik” is used to denote both the older mechanical crafts and the modern established fields of engineering. (“Elektrotechnik”, for example, is electrical engineering.) As will be discussed in Section 2, philosophy of technology as an academic field arose in Germany in the form of philosophical reflection on “Technik”, not “Technologie”. While the difference between the two terms remains important in contemporary German philosophy of technology (see Section 4.a below), both “Technologie” and “Technik” are commonly translated as “technology” and what in German is called “Technikphilosophie” in English goes by the name of “philosophy of technology”.

On Heidegger’s view, one aspect of the nature of both older and contemporary technology is that technology is instrumental: technological objects (tools, windmills, machines, etc.) are means by which we can achieve particular ends. However, Heidegger argued, it is often overlooked that technology is more than just the devising of instruments for particular practical purposes. Technology, he argued, is also a way of knowing, a way of uncovering the hidden natures of things. In his often idiosyncratic terminology, he wrote that “Technology is a way of uncovering” (“Technik ist eine Weise des Entbergens”; Heidegger, 1962: 13), where “Entbergen” means “to uncover” in the sense of uncovering a hidden truth. (For example, Heidegger (1962: 11-12) connects his term “Entbergen” with the Greek term “aletheia”, the Latin “veritas” and the German “Wahrheit”.) Heidegger thus adopted a view of the nature of technology close to Aristotle’s position, who conceived of techne as one of five modes of knowing, as well as to Francis Bacon’s view, who considered technical works as indications of the truth or falsity of our theories about the fundamental principles and causes in nature.

The difference between older and contemporary technology, Heidegger went on to argue, consists in how this uncovering of truth takes place. According to Heidegger, older technology consisted in “Hervorbringen” (Heidegger, 1962: 11). Heidegger here plays with the dual meaning of the term: the German “Hervorbringen” means both “to make” (the making or production of things, material objects, sound effects, etc.) and “to bring to the fore”. Thus the German term can be used to characterize both the “making” aspect of technology and its aspect of being a way of knowing. While contemporary technology retains the “making” aspect of older technology, Heidegger argued that as a way of knowing it no longer can be understood as Hervorbringen (Heidegger, 1962: 14). In contrast to older technology, contemporary technology as a way of knowing consists in the challenging (“Herausfordern” in German) of both nature (by man) and man (by technology). The difference is that while older technologies had to submit to the standards set by nature (e.g., the work that an old windmill can do depends on how strongly the wind blows), contemporary technologies can themselves set the standards (for example, in modern river dams a steady supply of energy can be guaranteed by actively regulating the water flow). Contemporary technology can thus be used to challenge nature: “Heidegger understands technology as a particular manner of approaching reality, a dominating and controlling one in which reality can only appear as raw material to be manipulated” (Verbeek, 2005: 10). In addition, on Heidegger’s view contemporary technology challenges man to challenge nature in the sense that we are constantly being challenged to realize some of the hitherto unrealized potential offered by nature – that is, to devise new technologies that force nature in novel ways and in so doing uncover new truths about nature.

Thus, in the 20th century, according to Heidegger, technology as a way of knowing assumed a new nature. Older technology can be thought of as imitating nature, where the process of imitation is inseparably connected to the uncovering of the hidden nature of the natural entities that are being imitated. Contemporary technology, in contrast, places nature in the position of a supplier of resources and in this way places man in an epistemic position with respect to nature that differs from the epistemic relation of imitating nature. When we imitate nature, we examine entities and phenomena that already exist. But products of contemporary technology, such as the Hoover dam or a nuclear power plant, are not like already existing natural objects. On Heidegger’s view, they force nature to deliver energy (or another kind of resource) whenever we ask for it and therefore cannot be understood as objects made by man in a mode of imitating nature – nature, after all, cannot produce things that force herself to deliver resources in the way that man-made things can. This means that there is a fundamental divide between older and contemporary technology, making the rise of philosophy of technology in the late 19th century and in the 20th century an event that occurred in parallel to a profound change in the nature of technology itself.

2. Philosophy of Technology: The State of the Field in the Early Twenty-First Century

In accordance with the preceding historical sketch, the history of philosophy of technology – as the history of philosophical thinking about issues concerned with the making of things, the use of techne, the challenging of nature and so forth – can be (very) roughly divided into three major periods.

The first period runs from Greek antiquity through the Middle Ages. In this period techne was conceived of as one among several kinds of human knowledge, namely the craft-knowledge that features in the domain of man-made objects and phenomena. Accordingly, philosophical attention to technology was part of the philosophical examination of human knowledge more generally. The second period runs roughly from the Renaissance through the Industrial Revolution and is characterized by an elevated appreciation for technology as an increasingly manifest but not yet all-pervasive phenomenon. Here we see a general interest in technology not only as a domain of knowledge but also as a domain of construction, that is, of the making of artifacts with a view to the improvement of human life (for instance, in Francis Bacon’s vision of natural philosophy). However, there is no particular philosophical interest yet in technology per se other than the issues that earlier philosophers had also considered. The third period is the contemporary period (from the mid 19th century to the present) in which technology had become such a ubiquitous and important factor in human lives and societies that it began to manifest itself as a subject sui generis of philosophical reflection. Of course, this is only a very rough periodization and different ways of periodizing the history of philosophy of technology can be found in the literature – e.g., Wartofsky (1979), Feenberg (2003: 2-3) or Franssen and others (2009: Sec. 1). Moreover, this periodization applies only to Western philosophy. To be sure, there is much to be said about technology and thinking about technology in technologically advanced ancient civilizations in China, Persia, Egypt, etc., but this cannot be done within the confines of the present article.
Still, the periodization proposed above is a useful first-order subdivision of the history of thinking about technology as it highlights important changes in how technology was and is understood.

The first monograph on philosophy of technology appeared in Germany in the second half of the 19th century in the form of Ernst Kapp’s book, Grundlinien einer Philosophie der Technik (“Foundations of a Philosophy of Engineering”) (Kapp, 1877). This book is commonly seen as the origin of the field (Rapp, 1981: 4; Ferré, 1988: 10; Fischer, 1996: 309; Zoglauer, 2002: 9; De Vries, 2005: 68; Ropohl, 2009: 13), because the term “philosophy of technology” (or rather, “philosophy of technics”) was first introduced there. Kapp used it to denote the philosophical inquiry into the effects of the use of technology on human society. (Mitcham (1994: 20), however, mentions the Scottish chemical engineer Andrew Ure as a precursor to Kapp in this context. Apparently in 1835 Ure coined the phrase “philosophy of manufactures” in a treatise on philosophical issues concerning technology.) For several decades after the publication of Kapp’s work not much philosophical work focusing on technology appeared in print and the field didn’t really get going until well into the 20th century. Again, the main publications appeared in Germany (for example, Dessauer, 1927; Jaspers, 1931; Diesel, 1939).

It should be noted that if philosophy of technology as an academic field indeed started here, the field’s origins lie outside professionalized philosophy. Jaspers was a philosopher, but neither Kapp nor most of the other early authors on the topic were professional philosophers. Kapp, for example, had earned a doctorate in classical philology and spent much of his life as a schoolteacher of geography and history and as an independent writer and untenured university lecturer (a German “Privatdozent”). Dessauer was an engineer (and an advocate of an unconditionally optimistic view of technology), Ure a chemical engineer and Diesel (son of the inventor of the Diesel engine, Rudolf Diesel) an independent writer.

In his book, Kapp argued that technological artifacts should be thought of as man-made imitations and improvements of human organs (see Brey, 2000; De Vries, 2005). The underlying idea is that human beings have limited capacities: we have limited visual powers, limited muscular strength, limited resources for storing information, etc. These limitations have led human beings to attempt to improve their natural capacities by means of artifacts such as cranes, lenses, etc. On Kapp’s view, such improvements should not so much be thought of as extensions or supplements of natural human organs, but rather as their replacements (Brey, 2000: 62). Because technological artifacts are supposed to serve as replacements of natural organs, they must on Kapp’s view be devised as imitations of these organs – after all, they are intended to perform the same function – or at least as being modeled on natural organs: “since the organ whose utility and power is to be increased is the standard, the appropriate form of a tool can only be derived from that organ” (Kapp, quoted and translated by Brey, 2000: 62). This way of understanding technology, which echoes the view of technology as the imitation of nature by men that was already found with Plato and Aristotle, was dominant throughout the Middle Ages and continued to be endorsed later.

The period after World War II saw a sharp increase in the amount of published reflections on technology that, for obvious reasons given the role of technology in both World Wars, often expressed a deeply critical and pessimistic view of the influence of technology on human societies, human values and the human life-world in general. Because of this increase in the amount of reflection on technology after World War II, some authors locate the emergence of the field in that period rather than in the late 19th century (for example Ihde, 1993: 14-15, 32-33; Dusek, 2006: 1-2; Kroes and others, 2008: 1). Ihde (1993: 32) points to an additional reason to locate the beginning of the field in the period following World War II: historians of technology rate World War II as the technologically most innovative period in human history until then, as during that war many new technologies were introduced that continued to drive technological innovation as well as the associated reflection on such innovation for several decades to follow. Thus, from this perspective it was World War II and the period following it in which technology reached the level of prominence that it still holds in the early 21st century and, accordingly, became a focal topic for philosophy. It became “a force too important to overlook”, as Ihde (1993: 32) writes.

A still different picture is obtained if one takes the existence of specialized professional societies, dedicated academic journals, topic-specific textbooks as well as a specific name identifying the field as typical signs that a particular field of investigation has become established as a branch of academia. (Note that in his influential The Structure of Scientific Revolutions, historian and philosopher of science Thomas Kuhn mentions these as signs of the establishment of a new paradigm, albeit not a new field or discipline – see Kuhn, 1970: 19.) By these indications, the process of establishing philosophy of technology as an academic field only began in the late 1970s and early 1980s – as Ihde (1993: 45) writes, “from the 1970s on, philosophy of technology began to take its place alongside the other ‘philosophies of …’” – and continued into the early 21st century.

As Mitcham (1994: 33) remarks, the term “philosophy of technology” was not widely used outside Germany until the 1980s (where the German term is “Technikphilosophie” or “Philosophie der Technik” rather than “philosophy of technology”). In 1976, the Society for the Philosophy of Technology was founded as the first professional society in the field. In the 1980s introductory textbooks on philosophy of technology began to appear. One of the very first (Ferré, 1988) appeared in the famous Prentice Hall Foundations of Philosophy series that included several hallmark introductory texts in philosophy (such as Carl Hempel’s Philosophy of Natural Science, David Hull’s Philosophy of Biological Science, William Frankena’s Ethics and Wesley Salmon’s Logic). In recent years numerous introductory texts have become available, including Ihde (1993), Mitcham (1994), Pitt (2000), Bucciarelli (2003), Fischer (2004), De Vries (2005), Dusek (2006), Irrgang (2008) and Nordmann (2008). Anthologies of classic texts in the field and encyclopedias of philosophy of technology have only very recently begun to appear (e.g., Scharff & Dusek, 2003; Kaplan, 2004; Meijers, 2009; Olsen, Pedersen & Hendricks, 2009; Olsen, Selinger, & Riis, 2009). However, there were few academic journals in the early 21st century dedicated specifically to philosophy of technology and covering the entire range of themes in the field.

“Philosophy of technology” denotes a considerable variety of philosophical endeavors. There is an ongoing discussion among philosophers of technology and scholars in related fields (e.g., science and technology studies, and engineering) on how philosophy of technology should be conceived of. One would expect to find a clear answer to this question in the available introductory texts, along with a general agreement on the central themes and questions of the field, as well as on who its most important authors are and what its fundamental positions, theories, theses and approaches are. In the case of philosophy of technology, however, comparing recent textbooks reveals a striking lack of consensus about what kind of endeavor philosophy of technology is. According to some authors, the sole commonality of the various endeavors called “philosophy of technology” is that they all in some way or other reflect on technology (cf. Rapp, 1981: 19-22; 1989: ix; Ihde, 1993: 97-98; Nordmann, 2008: 10).

For example, Nordmann characterized philosophy of technology as follows: “Not only is it a field of work without a tradition, it is foremost a field without its own guiding questions. In the end, philosophy of technology is the whole of philosophy done over again from the start – only this time with consideration for technology” (2008: 10; Reydon’s translation). Nordmann (2008: 14) added that the job of philosophy of technology is not to deal philosophically with a particular subject domain called “technology” (or “Technik” in German). Rather, its job is to deal with all the traditional questions of philosophy, relating them to technology. Such a characterization of the field, however, seems impracticably broad because it causes the name “philosophy of technology” to lose much of its meaning. On Nordmann’s broad characterization it seems meaningless to talk of “philosophy of technology”, as there is no clearly recognizable subfield of philosophy for the name to refer to. All of philosophy would be philosophy of technology, as long as some attention is paid to technology.

A similar, albeit apparently somewhat stricter, characterization of the field was given by Ferré (1988: ix, 9), who suggested that philosophy of technology is “simply philosophy dealing with a special area of interest”, namely technology. According to Ferré, the various “philosophies of” (of science, of biology, of physics, of language, of technology, etc.) should be conceived of as philosophy in the broad sense, with all its traditional questions and methods, but now “turned with a special interest toward discovering how those fundamental questions and methods relate to a particular segment of human concern” (Ferré, 1988: 9). This raises the question of what this “particular segment of human concern” called “technology” is. First, however, the kinds of questions that philosophers of technology ask with respect to technology must be explicated.

3. How Philosophy of Technology Can Be Done: The Principal Kinds of Questions That Philosophers of Technology Ask

Philosopher of technology Don Ihde defines philosophy of technology as philosophy that examines the phenomenon of technology per se, rather than merely considering technology in the context of reflections aimed at philosophical issues other than technology. (Note the opposition to Nordmann’s view, mentioned above.) That is, philosophy of technology “must make technology a foreground phenomenon and be able to reflectively analyze it in such a way as to illuminate features of the phenomenon of technology itself” (Ihde, 1993: 38; original emphasis).

However, there are a number of different ways in which one can approach the project of illuminating characteristic features of the phenomenon of technology. While different authors have presented different views of what philosophy of technology is about, there is no generally agreed upon taxonomy of the various approaches to (or traditions in, or styles of doing) philosophy of technology. In this section, a number of approaches that have been distinguished in the recent literature are discussed with the aim of providing an overview of the various kinds of questions that philosophers ask with respect to technology.

In an early review of the state of the field, philosopher of science Marx W. Wartofsky distinguished four main approaches to philosophy of technology (Wartofsky, 1979: 177-178). First, there is the holistic approach that sees technology as one of the phenomena generally found in human societies (on a par with phenomena such as art, war, politics, etc.) and attempts to characterize the nature of this phenomenon. The philosophical question in focus here is: What is technology? Second, Wartofsky distinguished the particularistic approach that addresses specific philosophical questions that arise with respect to particular episodes in the history of technology. Relevant questions are: Why did a particular technology gain or lose prominence in a particular period? Why did the general attitude towards technology change at a particular time? And so forth. Third is the developmental approach that aims at explaining the general process of technological change and as such has a historical focus too. And fourth, there is the social-critical approach that conceives of technology as a social/cultural phenomenon, that is, a product of social conventions, ideologies, etc. In this approach, technology is seen as a product of human actions that should be critically assessed (rather than characterized, as in the holistic approach). Besides critical reflection on technology, a central question here is how technology has come to be what it is today and which social factors have been important in shaping it. The four approaches as distinguished by Wartofsky clearly are not mutually exclusive: the different approaches address similar and related questions, and the difference between them is largely a matter of emphasis.

A similar taxonomy of approaches is found with Friedrich Rapp, an early proponent of analytic philosophy of technology (see also below). For Rapp, the principal dichotomy is between holistic and particularistic approaches, that is, approaches that conceive of technology as a single phenomenon the nature of which philosophers should clarify vs. approaches that see “technology” as an umbrella term for a number of distinct historical and social phenomena that are related to one another in complex ways and accordingly should each be examined in relation to the other relevant phenomena (Rapp, 1989: xi-xii). Rapp’s own philosophy of technology stands in the latter line of work. Within this dichotomy, Rapp (1981: 4-19) distinguished four main approaches, each reflecting on a different aspect of technology: on the practice of invention and engineering, on technology as a cultural phenomenon, on the social impact of technology, and on the impact of technology on the physical/biological system of planet Earth. While it is not entirely clear how Rapp conceives of the relation between these four approaches and his holistic/particularistic dichotomy, it seems that holism and particularism can generally be understood as modes of doing philosophy that can be realized within each of the four approaches.

Gernot Böhme (2008: 23-32) also distinguished between four main paradigms of contemporary philosophy of technology: the ontological paradigm, the anthropological paradigm, the historical-philosophical paradigm and the epistemological paradigm. The ontological paradigm, according to Böhme, inquires into the nature of artifacts and other technical entities. It basically consists in a philosophy of technology that parallels philosophy of nature, but focuses on the Aristotelian domain of poiesis instead of the domain of physis (see Section 1.a above). The anthropological paradigm asks one of the traditional questions of philosophy – What is man? – and approaches this question by way of an examination of technology as a product of human action. The historical-philosophical paradigm examines the various manifestations of technology throughout human history and aims to clarify what characterizes the nature of technology in different periods. In this respect, it is closely related to the anthropological paradigm and individual philosophers can work in both paradigms simultaneously. Böhme (2008: 26), for example, lists Ernst Kapp as a representative of both the anthropological and historical-philosophical paradigms. Finally, the epistemological paradigm inquires into technology as a form of knowledge in the sense in which Aristotle did (see Section 1.a above). Böhme (2008: 23) observed that despite the factual existence of philosophy of technology as an academic field, as yet there is no paradigm that dominates the field.

Carl Mitcham (1994) made a fundamental distinction between two principal subdomains of philosophy of technology, which he called “engineering philosophy of technology” and “humanities philosophy of technology”. Engineering philosophy of technology is the philosophical project aimed at understanding the phenomenon of technology as instantiated in the practices of engineers and others working in technological professions. It analyzes “technology from within, and [is] oriented toward an understanding of the technological way of being-in-the-world” (Mitcham, 1994: 39). As representatives of engineering philosophy of technology Mitcham lists, among others, Ernst Kapp and Friedrich Dessauer. Humanities philosophy of technology, on the other hand, consists of more general philosophical projects in which technology per se is not the principal subject of concern. Rather, technology is taken as a case study that might lead to new insights into a variety of philosophical questions by examining how technology affects human life.

The above discussion shows how different philosophers have quite different views of how the field of philosophy of technology is structured and what kinds of questions are in focus in the field. Still, on the basis of the preceding discussion a taxonomy can be constructed of three principal ways of conceiving of philosophy of technology:

(1) philosophy of technology as the systematic clarification of the nature of technology as an element and product of human culture (Wartofsky’s holistic and developmental approaches; Rapp’s cultural approach; Böhme’s ontological, anthropological and historical paradigms; and Mitcham’s engineering approach);

(2) philosophy of technology as the systematic reflection on the consequences of technology for human life (Wartofsky’s particularistic and social/critical approaches; Rapp’s social impact and physical impact approaches; and Mitcham’s humanities approach);

(3) philosophy of technology as the systematic investigation of the practices of engineering, invention, designing and making of things (Wartofsky’s particularistic approach; Rapp’s invention approach; Böhme’s epistemological paradigm; and Mitcham’s engineering approach).

All three approaches are represented in present-day thinking about technology and are illustrated below.

(1) The systematic clarification of the nature of technology. Perhaps most philosophy of technology has been done – and continues to be done – in the form of reflection on the nature of technology as a cultural phenomenon. As clarifying the nature of things is a traditional philosophical endeavor, many prominent representatives of this approach are philosophers who do not consider themselves philosophers of technology in the first place. Rather, they are general philosophers who look at technology as one among the many products of human culture, such as the German philosophers Karl Jaspers (e.g., his book Die geistige Situation der Zeit; Jaspers, 1931), Oswald Spengler (Der Mensch und die Technik; Spengler, 1931), Ernst Cassirer (e.g., his Symbol, Technik, Sprache; Cassirer, 1985), Martin Heidegger (Heidegger, 1962; discussed above), Jürgen Habermas (for example with his Technik und Wissenschaft als “Ideologie”; Habermas, 1968) and Bernhard Irrgang (2008). The Spanish philosopher José Ortega y Gasset is also often counted among the prominent representatives of this line of work.

(2) Systematic reflection on the consequences of technology for human life. Related to the conception of technology as a human cultural product is the approach to philosophy of technology that reflects on and criticizes the social and environmental impact of technology. As an examination of how technology affects society, this approach lies at the intersection of philosophy and sociology, rather than lying squarely within philosophy itself. Prominent representatives thus include the German philosopher/sociologists of the Frankfurt School (Herbert Marcuse, Theodor W. Adorno and Max Horkheimer), Jürgen Habermas, the French sociologist Jacques Ellul (1954), or the American political theorist Langdon Winner (1977).

A central question in contemporary versions of this approach is whether technology controls us or we are able to control technology (Feenberg, 2003: 6; Dusek, 2006: 84-111; Nye, 2006: Chapter 2). Langdon Winner, for example, thought of technology as an autonomously developing phenomenon fundamentally out of human control. As Dusek (2006: 84) points out, this issue is in fact a constellation of two separate questions: Are the societies that we live in, and we ourselves in our everyday lives, determined by technology? And are we able to control or guide the development of technology and the application of technological inventions, or does technology have a life of its own? As it might be that while our lives are not determined by technology we still are not able to control the development and application of technology, these are separate, albeit intimately related issues. The challenge for philosophy of technology, then, is to assess the effects of technology on our societies and our lives, to explore possibilities for us to exert influence on the current applications and future development of technology, and to devise concepts and institutions that might enable democratic control over the role of technology in our lives and societies.

(3) The systematic investigation of the practices of engineering, invention, designing and making of things. The third principal approach to philosophy of technology examines concrete technological practices, such as invention, design and engineering. Early representatives of this approach include Ernst Kapp (1877), Friedrich Dessauer (1927; 1956) and Eugen Diesel (1939). The practical orientation of this approach, as well as its comparative distance from traditional issues in philosophy, is reflected in the fact that none of these three early philosophers of technology were professional philosophers (see Section 2).

A guiding idea in this approach to philosophy of technology is that the design process constitutes the core of technology (Franssen and others, 2009: Sec. 2.3), such that studying the design process is crucial to any project that attempts to understand technology. Thus, philosophers working in this approach often examine design practices, both in the strict context of engineering and in wider contexts such as architecture and industrial design (for example, Vermaas and others, 2008). In focus are epistemological and methodological questions, such as: What kinds of knowledge do engineers have? (for example, Vincenti, 1990; Pitt, 2000; Bucciarelli, 2003; Auyang, 2009; Houkes, 2009). Is there a kind of knowledge that is specific to engineering? What is the nature of the engineering process and the design process? (for example, Vermaas and others, 2008). What is design? (for example, Houkes, 2008). Is there a specific design/engineering methodology? How do reasoning and decision processes in engineering function? How do engineers deal with uncertainty, failure and error margins? (for example, Bucciarelli, 2003: Chapter 3). Is there any such thing as a technological explanation? If so, what is the structure of technological explanations? (for example, Pitt, 2000: Chapter 4; Pitt, 2009). What is the relation between science and technology and in what way are design processes similar to and different from investigative processes in natural science? (for example, Bunge, 1966).

This approach to philosophy of technology is closely related to philosophy of science, where also much attention is given to methodology and epistemology. This can be seen from the fact that central questions from philosophy of science parallel some of the aforementioned questions: What is scientific knowledge? Is there a specific scientific method, or perhaps a clearly delimited set of such methods? How does scientific reasoning work? What is the structure of scientific explanations? Etc. However, there still seems to be comparatively little attention to such questions among philosophers of technology. Philosopher of technology Joseph Pitt, for example, observed that notwithstanding the parallel with respect to questions that can be asked about technology “there is a startling lack of symmetry with respect to the kinds of questions that have been asked about science and the kinds of questions that have been asked about technology” (2000: 26; emphasis added). According to Pitt, philosophers of technology have largely ignored epistemological and methodological questions about technology and have instead focused overly on issues related to technology and society. But, Pitt pointed out, social criticism “can come only after we have a deeper understanding of the epistemological dimension of technology” (Pitt, 2000: 27) and “policy decisions require prior assessment of the knowledge claims, which require good theories of what knowledge is and how to assess it” (ibid.). Thus, philosophers of technology should orient themselves anew with respect to the questions they ask.

But there are more parallels between the philosophies of technology and science. An important endeavor in philosophy of science that is also seen as central in philosophy of technology is conceptual analysis. In the case of philosophy of technology, this involves both concepts related to technology and engineering in general (concepts such as “technology”, “technics”, “technique”, “machine”, “mechanism”, “artifact”, “artifact kind”, “information”, “system”, “efficiency”, “risk”, etc.; see also Wartofsky, 1979: 179) and concepts that are specific to the various engineering disciplines. In addition, in both philosophy of science and philosophy of technology a renewed interest in metaphysical issues can currently be seen. For example, while philosophers of science inquire into the nature of the natural kinds that the sciences study, philosophers of technology are developing a parallel interest in the metaphysics of artifacts and kinds of artifacts (e.g., Houkes & Vermaas, 2004; Margolis & Laurence, 2007; Franssen, 2008). And lastly, philosophers of technology and philosophers of particular special sciences are increasingly beginning to cooperate on questions that are of crucial interest to both fields; a recent example is Krohs & Kroes (2009) on the notion of function in biology and technology.

A difference between the states of affairs in philosophy of science and in philosophy of technology, however, lies in the relative dominance of continental and analytic approaches. While there is some continental philosophy of science (e.g., Gutting, 2005), it constitutes a small minority in the field in comparison to analytic philosophy of science. In contrast, continental-style philosophy of technology is a domain of considerable size, while analytic-style philosophy of technology is small in comparison. Analytic philosophy of technology has existed since the 1960s and only began the process of becoming the dominant form of philosophy of technology in the early 21st century (Franssen and others, 2009: Sec. 1.3.). Kroes and others (2008: 2) even speak of a “recent analytic turn in the philosophy of technology”. Overviews of analytic philosophy of technology can be found in Mitcham (1994: Part 2), Franssen (2009) and Franssen and others (2009: Sec. 2).

4. Two Exemplary Discussions

After having mapped out three principal ways in which one can conceive of philosophy of technology, two discussions from contemporary philosophy of technology will be presented to illustrate what philosophers of technology do. The first example will demonstrate philosophy of technology as the systematic clarification of the nature of technology. The second example shows philosophy of technology as systematic reflection on the consequences of technology for human life, and is concerned with biotechnology. (Illustrations of philosophy of technology as the systematic investigation of the practices of engineering, invention, designing and making of things will not be presented. Examples of this approach to philosophy of technology can be found in Vermaas and others (2008) or Franssen and others (2009).)

a. What Is (the Nature of) Technology?

The question, What is technology? or What is the nature of technology?, is both a central question that philosophers of technology aim to answer and a question the answer to which determines the subject matter of philosophy of technology. One can think of philosophy of technology as the philosophical examination of technology, in the same way as the philosophy of science is the philosophical examination of science and the philosophy of biology the philosophical study of a particular subdomain of science. However, in this respect the philosophy of technology finds itself in a situation similar to that of the philosophy of science.

Central questions in the philosophy of science have long been what science is, what characterizes science and what distinguishes science from non-science (the demarcation problem). These questions have recently somewhat moved out of focus, however, due to the lack of acceptable answers. Philosophers of science have not been able to satisfactorily explicate the nature of science (for a recent suggestion, see Hoyningen-Huene, 2008) or to specify any clear-cut criterion by which science could be demarcated from non-science or pseudo-science. As philosopher of science Paul Hoyningen-Huene (2008: 168) wrote: “fact is that at the beginning of the 21st century there is no consensus among philosophers or historians or scientists about the nature of science.”

The nature of technology, however, is even less clear than the nature of science. As philosopher of science Marx Wartofsky put it, “‘Technology’ is unfortunately too vague a term to define a domain; or else, so broad in its scope that what it does define includes too much. For example, one may talk about technology as including all artifacts, that is, all things made by human beings. Since we ‘make’ language, literature, art, social organizations, beliefs, laws and theories as well as tools and machines, and their products, such an approach covers too much” (Wartofsky, 1979: 176). More clarity on this issue can be achieved by looking at the history of the term (for example, Nye, 2006: Chapter 1; Misa, 2009; Mitcham & Schatzberg, 2009) as well as at recent suggestions to define it.

Jacob Bigelow, an early author on technology, conceived of it as a specific domain of knowledge: technology was “an account [...] of the principles, processes, and nomenclatures of the more conspicuous arts” (Bigelow, 1829, quoted in Misa, 2009: 9; Mitcham & Schatzberg, 2009: 37). In a similar manner, Günter Ropohl (1990: 112; 2009: 31) defined “technology” as the “science of technics” (“Wissenschaft von der Technik”, where “Technik” denotes the domain of crafts and other areas of manufacturing, making, etc.). The important aspect of Bigelow’s and Ropohl’s definitions is that “technology” does not denote a domain of human activity (such as making or designing) or a domain of objects (technological innovations, such as solar panels), but a domain of knowledge. In this respect, their usage of the term is continuous with the meaning of the Greek “techne” (Section 1.a).

A review of a number of definitions of “technology” (Li-Hua, 2009) shows that there is not much overlap between the various definitions that can be found in the literature. Many definitions conceive of technology in Bigelow’s and Ropohl’s sense as a particular body of knowledge (thus making the philosophy of technology a branch of epistemology), but do not agree on what kind of knowledge it is supposed to be. On some definitions it is seen as firm-specific knowledge about design and production processes, while others conceive of it as knowledge about natural phenomena and laws of nature that can be used to satisfy human needs and solve human problems (a view which closely resembles Francis Bacon’s).

Philosopher of science Mario Bunge presented a view of the nature of technology along the latter lines (Bunge, 1966). According to Bunge, technology should be understood as constituting a particular subdomain of the sciences, namely “applied science”, as he called it. Note that Bunge’s thesis is not that technology is applied science in the sense of the application of scientific theories, models, etc. for practical purposes. Although a view of technology as being “just the totality of means for applying science” (Scharff, 2009: 160) remains present among the general public, most engineers and philosophers of technology agree that technology cannot be conceived of as the application of science in this sense. Bunge’s view is that technology is the subdomain of science characterized by a particular aim, namely application. According to Bunge, natural science and applied science stand side by side as two distinct modes of doing science: while natural science is scientific investigation aimed at the production of reliable knowledge about the world, technology is scientific investigation aimed at application. Both are full-blown domains of science, in which investigations are carried out and knowledge is produced (knowledge about the world and how it can be applied to concrete problems, respectively). The difference between the two domains lies in the nature of the knowledge that is produced and the aims that are in focus. Bunge’s statement that “technology is applied science” should thus be read as “technology is science for the purpose of application” and not as “technology is the application of science”.

Other definitions reflect still different conceptions of technology. In the definition accepted by the United Nations Conference on Trade and Development (UNCTAD), technology not only includes specific knowledge, but also machinery, production systems and skilled human labor force. Li-Hua (2009) follows the UNCTAD definition by proposing a four-element definition of “technology” as encompassing technique (that is, a specific technique for making a particular product), specific knowledge (required for making that product; he calls this technology in the strict sense), the organization of production and the end product itself. Friedrich Rapp, in contrast, defined “technology” even more broadly as a domain of human activity: “in simplest terms, technology is the reshaping of the physical world for human purposes” (Rapp, 1989: xxiii).

Thus, attempts to define “technology” in such a way that the definition expresses the nature of technology, or at least some of its principal characteristics, have not led to any generally accepted view of what technology is. In this context, historian of science and technology Thomas J. Misa observed that historians of technology have so far resisted defining “technology” in the same way as “no scholarly historian of art would feel the least temptation to define ‘art’, as if that complex expression of human creativity could be pinned down by a few well-chosen words” (Misa, 2009: 8). The suggestion clearly is that technology is far too complex and too diverse a domain to be defined, or for there to be any meaningful talk of the nature of technology. Nordmann (2008: 14) went even further by arguing that the term “technology” not only cannot be defined, but also should not be defined. According to Nordmann, we should accept that technology is too diverse a domain to be caught in a compact definition. Accordingly, instead of conceiving of “technology” as the name of a particular fixed collection of phenomena that can be investigated, Nordmann held that “technology” is best understood as what Grunwald & Julliard (2005) called a “reflective concept”. According to the latter authors, “technology” (or rather, “Technik” – see Section 1.c) should simply be taken to mean whatever we mean when we use the term. While this clearly cannot be an adequate definition of the term, it can still serve as a basis for reflections on technology in that it gives us at least some sense of what it is that we are reflecting on. Using “technology” in this extremely loose manner allows us to connect reflections on very different issues and phenomena as being about – in the broadest sense – the same thing. In this way, “technology” can serve as the core concept of the field of philosophy of technology.

Philosophy of technology faces the challenge of clarifying the nature of a particular domain of phenomena without being able to determine the boundaries of that domain. Perhaps the best way out of this situation is to approach the question on a case-by-case basis, where the various cases are connected by the fact that they all involve technology in the broadest possible sense of the term. Rather than asking what technology is, and how the nature of technology is to be characterized, it might be better to examine the natures of particular instances of technology and in so doing achieve more clarity about a number of local phenomena. In the end, the results from various case studies might to some extent converge – or they might not.

b. Questions Regarding Biotechnology

The question of how to define “technology” is not merely an academic issue. Consider the case of biotechnology, the technological domain that features most prominently in systematic reflections on the consequences of technology for human life. When thinking about what the application of biotechnologies might mean for our lives, it is important to define what we mean by “biotechnology” such that the subject matter under consideration is delimited in a way that is useful for the discussion.

On one definition, given in 1984 by the United States Office of Technology Assessment, biotechnology comprises “[a]ny technique using organisms and their components to make products, modify plants and animals to carry desired traits, or develop micro-organisms for specific uses” (Office of Technology Assessment, 1984; Van den Beld, 2009: 1302). On such a conception of biotechnology, however, traditional farming, breeding and production of foodstuffs, as well as modern large-scale agriculture and industrialized food production would all count as biotechnology. The domain of biotechnology would thus encompass an extremely heterogeneous collection of practices and techniques of which many would not be particularly interesting subjects for philosophical or ethical reflection (although all of them affect human life: consider, for example, the enormous effect that the development of traditional farming had with respect to the rise of human societies). Accordingly, many definitions are much narrower and focus on “new” or “modern” biotechnologies, that is, technologies that involve the manipulation of genetic material. These are, after all, the technologies that are widely perceived by the general public as ethically problematic and thus as constituting the proper subject matter of philosophical reflection on biotechnology. Thus, the authors of a 2007 report on the possible consequences, opportunities and challenges of biotechnology for Europe make a distinction between traditional and modern biotechnology, writing about modern biotechnology that it “can be defined as the use of cellular, molecular and genetic processes in production of goods and services. Its beginnings date back to the early 1970s when recombinant DNA technology was first developed” (quoted in Van den Beld, 2009: 1302).

Such narrow definitions, however, tend to cover too little. As Van den Beld (2009: 1306) pointed out in this context, “There are no definitions that are simply correct or incorrect, only definitions that are more or less pragmatically adequate in view of the aims one pursues.” When it comes to systematic reflection on how the use of technologies affects human life, the question thus is whether any particular area of technology can meaningfully be singled out as constituting “biotechnology”. As it turns out, the spectrum of technological applications in the biological domain is simply too diverse.

In overviews of the technologies that are commonly discussed under the name of “biotechnology”, a common distinction is made between “white biotechnology” (biotechnology in industrial contexts), “green biotechnology” (biotechnology involving plants) and “red biotechnology” (biotechnology involving humans and non-human animals, in particular in medical and biomedical contexts). White biotechnology involves, among other things, the use of enzymes in detergents or the production of cheeses; the use of micro-organisms for the production of medicinal substances; the production of biofuels and bioplastics and so forth. Green biotechnology typically involves genetic technology and is also often called “green genetic technology”. It mainly encompasses the genetic modification of cultivated crops. Philosophical/ethical issues discussed under this label include the risk of outcrossing between genetically modified types of plants and the wild types; the use of genetically modified crops in the production of foodstuffs, either directly or indirectly as food for animals intended for human consumption (for example, soy beans, corn, potatoes and tomatoes); the labeling of foodstuffs produced on the basis of genetically modified organisms; issues related to the patenting of genetically modified crops and so forth.

Not surprisingly, red biotechnology is the most hotly discussed area of biotechnology as red biotechnologies directly involve human beings and non-human animals, both of which are categories that feature prominently throughout ethical discussions. Red biotechnology involves such things as the transplantation of human organs and tissues, and xenotransplantation (the transplantation of non-human animal organs and tissues to humans); the use of cloning techniques for reproductive and therapeutic purposes; the use of embryos for stem cell research; artificial reproduction, in vitro fertilization, the genetic testing of embryos and pre-implantation diagnostics and so forth. In addition, an increasingly discussed area of red biotechnology is constituted by human enhancement technologies. These encompass such diverse technologies as the use of psycho-pharmaceutical substances for the improvement of one’s mental capacities, the genetic modification of human embryos to prevent possible genetic diseases and so forth.

Other areas of biotechnology include synthetic biology, which involves the creation of synthetic genetic systems, synthetic metabolic systems and attempts to create synthetic life forms from scratch. Synthetic biology does not fit into the distinction between white, green and red biotechnology and receives attention from philosophers not only because projects in synthetic biology may raise ethical questions (for example, Douglas & Savulescu, 2010) but also because of questions from epistemology and philosophy of science (for example, O’Malley, 2009; Van den Beld, 2009: 1314-1316).

Given this diversity of technologies covered by the label of “biotechnology”, philosophical reflection on biotechnology as such, and on its possible consequences for human life, will not be a very fruitful enterprise, as there will not be much to say about biotechnology in general. Instead, philosophical reflection on biotechnology will need to be conducted locally rather than globally, taking the form of close examination of particular technologies in particular contexts. Philosophers concerned with biotechnology reflect on such specific issues as the genetic modification of plants for agricultural purposes, or the use of psycho-pharmaceutical substances for the improvement of the mental capacities of healthy subjects – not on biotechnology as such. In the same way as “technology” can be thought of as a “reflective concept” (Grunwald & Julliard, 2005) that brings together a variety of phenomena under a common denominator for the purpose of enabling philosophical work, so “biotechnology” too can be understood as a “reflective concept” that is useful for locating particular considerations within the wide domain of philosophical reflection.

This is, however, not to say that nothing can be said about biotechnology on more general levels. Bioethicist Bernard Rollin, for example, considered genetic engineering in general and addressed the question of whether genetic engineering could be considered intrinsically wrong – that is, wrong in any and all contexts and hence independently of the particular context of application under consideration (Rollin, 2006: 129-154). According to Rollin, the alleged intrinsic wrongness of genetic engineering constituted one of three aspects of wrongness that members of the general public often associate with genetic engineering. These three aspects, which Rollin illustrated as three aspects of the Frankenstein myth (see Rollin, 2006: 135), are: the intrinsic wrongness of a particular practice, its possible dangerous consequences and the possibility of its causing harm to sentient beings. While the latter two aspects of wrongness might be avoided by means of appropriate measures, the intrinsic wrongness of a particular practice (in cases where it obtains) is unavoidable. Thus, if it could be argued that genetic engineering is intrinsically wrong – that is, something that we simply ought not to do, irrespective of whatever positive or negative consequences are to be expected – this would constitute a strong argument against large domains of white, green and red biotechnology. On the basis of an assessment of the motivations that people have for judging genetic engineering to be intrinsically wrong, however, Rollin concluded that such an argument could not be made: in the various cases in which people concluded that genetic engineering was intrinsically wrong, the premises of their arguments were not well-founded.

But in this case, too, the need for local rather than global analyses can be seen. Assessing the tenability of the value judgment that genetic engineering is intrinsically wrong requires examining concrete arguments and motivations on a local level. This, I want to suggest by way of conclusion, is a general characteristic of the philosophy of technology: the relevant philosophical analyses will have to take place on the more local levels, examining particular technologies in particular contexts, rather than on more global levels, at which large domains of technology such as biotechnology or even the technological domain as a whole are in focus. Philosophy of technology, then, is a matter of piecemeal engineering, in much the same way as William Wimsatt has suggested that philosophy of science should be done (Wimsatt, 2007).

5. References and Further Reading

  • Auyang, S.Y. (2009): “Knowledge in science and engineering”, Synthese 168: 319-331.
  • Brey, P. (2000): “Theories of technology as extension of human faculties”, in: Mitcham, C. (Ed.): Metaphysics, Epistemology, and Technology (Research in Philosophy and Technology, Vol. 19), Amsterdam: JAI, pp. 59-78.
  • Böhme, G. (2008): Invasive Technologie: Technikphilosophie und Technikkritik, Kusterdingen: Die Graue Edition.
  • Bucciarelli, L.L. (1994): Designing Engineers, Cambridge (MA): MIT Press.
  • Bucciarelli, L.L. (2003): Engineering Philosophy, Delft: Delft University Press.
  • Bunge, M. (1966): “Technology as applied science”, Technology and Culture 7: 329-347.
  • Cassirer, E. (1985): Symbol, Technik, Sprache: Aufsätze aus den Jahren 1927-1933 (edited by E.W. Orth & J. M. Krois), Hamburg: Meiner.
  • De Vries, M.J. (2005): Teaching About Technology: An Introduction to the Philosophy of Technology for Non-Philosophers, Dordrecht: Springer.
  • Dessauer, F. (1927): Philosophie der Technik: Das Problem der Realisierung, Bonn: Friedrich Cohen.
  • Dessauer, F. (1956): Der Streit um die Technik, Frankfurt am Main: Verlag Josef Knecht.
  • Diesel, E. (1939): Das Phänomen der Technik: Zeugnisse, Deutung und Wirklichkeit, Leipzig: Reclam & Berlin: VDI-Verlag.
  • Douglas, T. & Savulescu, J. (2010): “Synthetic biology and the ethics of knowledge”, Journal of Medical Ethics 36: 687-693.
  • Dusek, V. (2006): Philosophy of Technology: An Introduction, Malden (MA): Blackwell.
  • Ellul, J. (1954): La Technique ou l’Enjeu du Siècle, Paris: Armand Colin.
  • Feenberg, A. (2003): “What is philosophy of technology?”, lecture at the University of Tokyo (Komaba campus), June 2003.
  • Ferré, F. (1988): Philosophy of Technology, Englewood Cliffs (NJ): Prentice Hall; unchanged reprint (1995): Philosophy of Technology, Athens (GA) & London, University of Georgia Press.
  • Fischer, P. (1996): “Zur Genealogie der Technikphilosophie”, in: Fischer, P. (Ed.): Technikphilosophie, Leipzig: Reclam, pp. 255-335.
  • Fischer, P. (2004): Philosophie der Technik, München: Wilhelm Fink (UTB).
  • Franssen, M.P.M. (2008): “Design, use, and the physical and intentional aspects of technical artifacts”, in: Vermaas, P.E., Kroes, P., Light, A. & Moore, S.A. (Eds): Philosophy and Design: From Engineering to Architecture, Dordrecht: Springer, pp. 21-35.
  • Franssen, M.P.M. (2009): “Analytic philosophy of technology”, in: J.K.B. Olsen, S.A. Pedersen & V.F. Hendricks (Eds): A Companion to the Philosophy of Technology, Chichester: Wiley-Blackwell, pp. 184-188.
  • Franssen, M.P.M., Lokhorst, G.-J. & Van de Poel, I. (2009): “Philosophy of technology”, in: Zalta, E. (Ed.): Stanford Encyclopedia of Philosophy (Fall 2009 Edition).
  • Grunwald, A. & Julliard, Y. (2005): “Technik als Reflexionsbegriff: Zur semantischen Struktur des Redens über Technik”, Philosophia Naturalis 42: 127-157.
  • Gutting, G. (Ed.) (2005): Continental Philosophy of Science, Malden (MA): Blackwell.
  • Habermas, J. (1968): Technik und Wissenschaft als “Ideologie”, Frankfurt am Main: Suhrkamp.
  • Heidegger, M. (1962): Die Technik und die Kehre, Pfullingen: Neske.
  • Houkes, W. (2008): “Designing is the construction of use plans”, in: Vermaas, P.E., Kroes, P., Light, A. & Moore, S.A. (Eds): Philosophy and Design: From Engineering to Architecture, Dordrecht: Springer, pp. 37-49.
  • Houkes, W. (2009): “The nature of technological knowledge”, in: Meijers, A.W.M. (Ed.): Philosophy of Technology and Engineering Sciences (Handbook of the Philosophy of Science, Volume 9), Amsterdam: North Holland, pp. 310-350.
  • Houkes, W. & Vermaas, P.E. (2004): “Actions versus functions: A plea for an alternative metaphysics of artefacts”, The Monist 87: 52-71.
  • Hoyningen-Huene, P. (2008): “Systematicity: The nature of science”, Philosophia 36: 167-180.
  • Ihde, D. (1993): Philosophy of Technology: An Introduction, New York: Paragon House.
  • Ihde, D. (2009): “Technology and science”, in: Olsen, J.K.B., Pedersen, S.A. & Hendricks, V.F. (Eds): A Companion to the Philosophy of Technology, Chichester: Wiley-Blackwell, pp. 51-60.
  • Irrgang, B. (2008): Philosophie der Technik, Darmstadt: Wissenschaftliche Buchgesellschaft.
  • Jaspers, K. (1931): Die geistige Situation der Zeit, Berlin & Leipzig: Walter de Gruyter & Co.
  • Kaplan, D.M. (Ed.) (2004): Readings in the Philosophy of Technology, Lanham (Md.): Rowman & Littlefield.
  • Kapp, E. (1877): Grundlinien einer Philosophie der Technik: Zur Entstehungsgeschichte der Cultur aus neuen Gesichtspunkten, Braunschweig: G. Westermann.
  • Kogan-Bernstein, F.A. (1959): “Einleitung”, in: Kogan-Bernstein, F.A. (Ed): Francis Bacon: Neu-Atlantis, Berlin: Akademie-Verlag, pp. 1-46.
  • Kroes, P.E., Light, A., Moore, S.A. & Vermaas, P.E. (2008): “Design in engineering and architecture: Towards an integrated philosophical understanding”, in: Vermaas, P.E., Kroes, P., Light, A. & Moore, S.A. (Eds): Philosophy and Design: From Engineering to Architecture, Dordrecht: Springer, pp. 1-17.
  • Krohs, U. & Kroes, P. (Eds) (2009): Functions in Biological and Artificial Worlds: Comparative Philosophical Perspectives, Cambridge (MA): MIT Press.
  • Kuhn, T.S. (1970): The Structure of Scientific Revolutions (Second Edition, Enlarged), Chicago: University of Chicago Press.
  • Li-Hua, R. (2009): “Definitions of technology”, in: J.K.B. Olsen, S.A. Pedersen & V.F. Hendricks (Eds): A Companion to the Philosophy of Technology, Chichester: Wiley-Blackwell, pp. 18-22.
  • Margolis, E. & Laurence, S. (Eds) (2007): Creations of the Mind: Theories of Artifacts and Their Representation, Oxford: Oxford University Press.
  • Meijers, A.W.M. (Ed.) (2009): Philosophy of Technology and Engineering Sciences (Handbook of the Philosophy of Science, Volume 9), Amsterdam: North Holland.
  • Misa, T.J. (2009): “History of technology”, in: J.K.B. Olsen, S.A. Pedersen & V.F. Hendricks (Eds): A Companion to the Philosophy of Technology, Chichester: Wiley-Blackwell, pp. 7-17.
  • Mitcham, C. (1994): Thinking Through Technology: The Path Between Engineering and Philosophy, Chicago & London: University of Chicago Press.
  • Mitcham, C. & Schatzberg, E. (2009): “Defining technology and the engineering sciences”, in: Meijers, A.W.M. (Ed.): Philosophy of Technology and Engineering Sciences (Handbook of the Philosophy of Science, Volume 9), Amsterdam: North Holland, pp. 27-63.
  • Nordmann, A. (2008): Technikphilosophie: Zur Einführung, Hamburg: Junius.
  • Nye, D.E. (2006): Technology Matters: Questions to Live With, Cambridge (MA): MIT Press.
  • O’Malley, M.A. (2009): “Making knowledge in synthetic biology: Design meets kludge”, Biological Theory 4: 378-389.
  • Parry, R. (2008): “Episteme and techne”, in: Zalta, E. (Ed.): Stanford Encyclopedia of Philosophy (Fall 2008 Edition).
  • Pitt, J.C. (2000): Thinking About Technology: Foundations of the Philosophy of Technology, New York & London: Seven Bridges Press.
  • Pitt, J.C. (2009): “Technological explanation”, in: Meijers, A.W.M. (Ed.): Philosophy of Technology and Engineering Sciences (Handbook of the Philosophy of Science, Volume 9), Amsterdam: North Holland, pp. 861-879.
  • Olsen, J.K.B., Pedersen, S.A. & Hendricks, V.F. (Eds) (2009): A Companion to the Philosophy of Technology, Chichester: Wiley-Blackwell.
  • Olsen, J.K.B., Selinger, E. & Riis, S. (Eds) (2009): New Waves in Philosophy of Technology, Houndmills: Palgrave Macmillan.
  • Office of Technology Assessment (1984): Commercial Biotechnology: An International Analysis, Washington (DC): U.S. Government Printing Office.
  • Rapp, F. (1981): Analytical Philosophy of Technology (Boston Studies in the Philosophy of Science, Vol. 63), Dordrecht: D. Reidel.
  • Rapp, F. (1989): “Introduction: General perspectives on the complexity of philosophy of technology”, in: Durbin, P.T. (Ed.): Philosophy of Technology: Practical, Historical and Other Dimensions, Dordrecht: Kluwer, pp. ix-xxiv.
  • Rollin, B.E. (2006): Science and Ethics, Cambridge: Cambridge University Press.
  • Ropohl, G. (1990): “Technisches Problemlösen und soziales Umfeld”, in: Rapp, F. (Ed.): Technik und Philosophie, Düsseldorf: VDI, pp. 111-167.
  • Ropohl, G. (2009): Allgemeine Technologie: Eine Systemtheorie der Technik (3., überarbeitete Auflage), Karlsruhe: Universitätsverlag Karlsruhe.
  • Scharff, R.C. (2009): “Technology as ‘applied science’”, in: J.K.B. Olsen, S.A. Pedersen & V.F. Hendricks (Eds): A Companion to the Philosophy of Technology, Chichester: Wiley-Blackwell, pp. 160-164.
  • Scharff, R.C. & Dusek, V. (Eds.) (2003): Philosophy of Technology: The Technological Condition – An Anthology, Malden (MA): Blackwell.
  • Schummer, J. (2001): “Aristotle on technology and nature”, Philosophia Naturalis 38: 105-120.
  • Snyder, L.J. (2009): “William Whewell”, in: Zalta, E. (Ed.): Stanford Encyclopedia of Philosophy (Winter 2009 Edition).
  • Spengler, O. (1931): Der Mensch und die Technik: Beitrag zu einer Philosophie des Lebens, München: C.H. Beck.
  • Van den Beld, H. (2009): “Philosophy of biotechnology”, in: Meijers, A.W.M. (Ed.): Philosophy of Technology and Engineering Sciences (Handbook of the Philosophy of Science, Volume 9), Amsterdam: North Holland, pp. 1302-1340.
  • Verbeek, P.-P. (2005): What Things Do: Philosophical Reflections on Technology, Agency, and Design, University Park (PA): Pennsylvania State University Press.
  • Vermaas, P.E., Kroes, P., Light, A. & Moore, S.A. (Eds) (2008): Philosophy and Design: From Engineering to Architecture, Dordrecht: Springer.
  • Vincenti, W.G. (1990): What Engineers Know and How They Know It: Analytical Studies from Aeronautical History, Baltimore (MD): Johns Hopkins University Press.
  • Wartofsky, M.W. (1979): “Philosophy of technology”, in: Asquith, P.D. & Kyburg, H.E. (eds): Current Research in Philosophy of Science, East Lansing (MI): Philosophy of Science Association, pp. 171-184.
  • Whitney, E. (1990): Paradise Restored: The Mechanical Arts From Antiquity Through the Thirteenth Century (Transactions of the American Philosophical Society, Vol. 80), Philadelphia: The American Philosophical Society.
  • Wimsatt, W.C. (2007): Re-engineering Philosophy for Limited Beings: Piecewise Approximations to Reality, Cambridge (MA): Cambridge University Press.
  • Winner, L. (1977): Autonomous Technology: Technics-out-of-control as a Theme in Political Thought, Cambridge (MA): MIT Press.
  • Zoglauer, T. (2002): “Einleitung”, in: Zoglauer, T. (Ed.): Technikphilosophie, Freiburg & München: Karl Alber.


Author Information

Thomas A.C. Reydon
Leibniz University of Hannover

John McTaggart Ellis McTaggart (1866—1925)

J. M. E. McTaggart was a British idealist, best known for his argument for the unreality of time and for his system of metaphysics advocating personal idealism. By the early twentieth century, the philosophical movement known as British Idealism was waning, while the ‘new realism’ (later dubbed ‘analytic philosophy’) was gaining momentum. Although McTaggart’s commitment to idealism never faltered, he enjoyed an unusually close relationship with several of the new realists. McTaggart spent almost his entire career at Trinity College, Cambridge, and there he taught Bertrand Russell, G. E. Moore and C. D. Broad. McTaggart influenced all of these figures to some degree, and all of them speak particularly highly of his careful and clear philosophical method.

McTaggart studied Hegel from the very beginning of his philosophical career and produced a large body of Hegel scholarship, including the mammoth Studies in Hegelian Cosmology (1901). Towards the end of his career he produced his two-volume magnum opus The Nature of Existence (1921 & posthumously 1927), a highly original metaphysical system developing what McTaggart took to be Hegel’s ontology. This personal idealism holds that the universe is composed solely of minds and their perceptions, bound into a tight unity by love. However, McTaggart is best known for his influential paper “The Unreality of Time”, in which he argues that change and time are contradictory and unreal. This argument, and the metaphysical groundwork it lays down, especially its contrast between the A-series and the B-series of time, is still widely discussed.

Table of Contents

  1. Biography
  2. Philosophical Influences
    a. The British Idealists
    b. The British New Realists
  3. Philosophical Writings
    a. Hegel
    b. Some Dogmas of Religion
    c. The Unreality of Time
    d. The Nature of Existence
  4. References and Further Reading
    a. Primary Sources
    b. Selected Secondary Sources

1. Biography

McTaggart was born in London on 3 September 1866, the son of Francis Ellis, a county court judge, and his wife Caroline Ellis. McTaggart was born ‘John McTaggart Ellis’ and acquired the name ‘McTaggart’ as a surname when his father adopted it on condition of inheriting an uncle’s wealth. As a boy McTaggart attended a preparatory school in Weybridge, from which he was expelled for his frequent avowal of atheism. He subsequently attended school in Caterham and Clifton College, Bristol. He began studying philosophy at Trinity College, Cambridge in 1885. Once McTaggart began at Trinity, he hardly left: he graduated in 1888 with a first class degree, became a Prize Fellow in 1891, became a lecturer in Moral Sciences in 1897 and stayed until his retirement in 1923. In a letter to a friend, he writes of Cambridge: ‘Unless I am physically or spiritually at Cambridge or Oxford, I have no religion, no keenness (I do not identify them) except by snatches. I must have been made for a don... I learn a good many things there, the chief one being that I am a damned sight happier than I deserve to be’. In addition to being an academic, McTaggart was a mystic. He reports having visions (not imaginary, but literal perceptions of the senses) conveying the spiritual nature of the world; this may have played a part in his unswerving devotion to idealism. McTaggart investigates the nature of mysticism in “Mysticism” (reprinted in his Philosophical Studies, 1934), and he takes it to involve an awareness of the unity of the universe.

Beginning in 1891, McTaggart took a number of trips to New Zealand to visit his mother, and it was there that he met his future wife. He married Margaret Elizabeth Bird in New Zealand on 5 August 1899, and subsequently returned with her to Cambridge. They had no children. During the First World War, McTaggart worked as a special constable and helped in a munitions factory. McTaggart’s friend Dickinson writes of him, ‘it is essential to remember that, if he was a philosopher by nature and choice he was also a lover and a husband... and a whole-hearted British patriot’ (Dickinson, 1931, 47).

Towards the end of his life McTaggart produced the first volume of his magnum opus The Nature of Existence (1921). He retired shortly afterwards in 1923, and died unexpectedly two years later on 18 January 1925. In his introduction to the second edition of Some Dogmas of Religion, McTaggart’s friend and former student Broad describes McTaggart’s funeral and mentions how one of McTaggart’s favourite Spinozistic passages was read out. It is worth mentioning here that, although McTaggart never contributed to Spinoza scholarship, he admired him greatly, perhaps even more than Hegel. McTaggart describes Spinoza as a great religious teacher, ‘in whom philosophical insight and religious devotion were blended as in no other man before or since’ (McTaggart, 1906, 299). The passage from Spinoza was consequently engraved on McTaggart’s memorial brass in Trinity College. McTaggart did not live to see the second volume of The Nature of Existence in print, but fortunately the manuscript was largely complete and it was finally published in 1927, under Broad’s careful editorship. Broad describes McTaggart as follows:

‘Take an eighteenth-century English Whig. Let him be a mystic. Endow him with the logical subtlety of the great schoolmen and their belief in the powers of human reason, with the business capacity of a successful lawyer, and with the lucidity of the best type of French mathematician. Inspire him (Heaven knows how) in early youth with a passion for Hegel. Then subject him to the teaching of Sidgwick and the continual influence of Moore and Russell. Set him to expound Hegel. What will be the result? Hegel himself could not have answered this question a priori, but the course of world history has solved it ambulando by producing McTaggart.’

For further biographical information (and anecdotes) see Dickinson’s (1931) biographical sketch of McTaggart, and Broad’s (1927) notice on McTaggart.

2. Philosophical Influences

McTaggart was active in British philosophy at a time when it was caught between two opposing philosophical currents, British Idealism and the New Realism, and he was involved with figures within both of these movements.

a. The British Idealists

McTaggart began his career in British philosophy when it was firmly under the sway of British Idealism, a movement which argues that the world is inherently unified, intelligible and idealist. Due to the influence of Hegel on these philosophers, the movement is also sometimes known as British Hegelianism. The movement began in the latter half of the nineteenth century; J. H. Stirling is generally credited with introducing Hegel’s work to Britain via his book The Secret of Hegel (1865). Aside from McTaggart himself, important figures in British Idealism include T. H. Green, F. H. Bradley, Harold Joachim, Bernard Bosanquet and Edward Caird. Early on, a schism appeared in the movement as to how idealism should be understood. Absolute idealists, such as Bradley, argued that reality is underpinned by a single partless spiritual unity known as the Absolute. In contrast, personal idealists, such as G. F. Stout and Andrew Seth, argued that reality consists of many individual spirits or persons. McTaggart firmly endorses personal idealism, the doctrine that he took to be Hegel’s own. In addition to his idealism, McTaggart shared other neo-Hegelian principles: the convictions that the universe is as tightly unified as it is possible for a plurality of existents to be and that the universe is fundamentally rational and open to a priori investigation, as well as a disregard for naturalism. On this last point, McTaggart goes so far as to say that, while science may investigate the nature of the universe, only philosophy investigates its ‘ultimate nature’ (McTaggart, 1906, 273).

Nearly all of McTaggart’s early work concerns Hegel or Hegelian doctrines, and this work forms the basis of the metaphysical system he would later develop in so much detail. A good example of this is his earliest publication, a pamphlet printed for private circulation entitled “The Further Determination of the Absolute” (1893); it is reprinted in McTaggart’s Philosophical Studies. In this defence of idealism, McTaggart’s Hegelian credentials are well established: he repeatedly references Hegel, Green, and Bradley, whom he later describes as ‘the greatest of all living philosophers’. McTaggart apparently cared greatly about this paper. In its introduction, McTaggart apologises for its ‘extreme crudeness... and of its absolute inadequacy to its subject’. In private correspondence (see Dickinson) McTaggart describes the experience of writing it: ‘It has been shown to one or two people who are rather authorities (Caird of Glasgow and Bradley of Oxford) and they have been very kind and encouraging about it... [writing] it was almost like turning one’s heart out’.

b. The British New Realists

Despite his close philosophical ties to British Idealism, McTaggart bucked the trends of the movement in a number of ways. (In fact, Broad (1927) goes so far as to say that English Hegelianism filled McTaggart with an ‘amused annoyance’.) To begin with, McTaggart spent his entire career at Cambridge. Not only was Oxford, rather than Cambridge, the spiritual home of British Idealism, but Cambridge became the home of new realism. While at Trinity College, McTaggart taught a number of the new realists ─ including Moore, Russell and Broad ─ and held great influence over them. Moore read and gave notes on a number of McTaggart’s works prior to publication, including Some Dogmas of Religion (1906) and the first volume of The Nature of Existence. In his obituary note on McTaggart, Moore describes him as a philosopher ‘of the very first rank’ (Moore, 1925, 271). For more on McTaggart’s influence on Moore, see Baldwin (1990). McTaggart was also involved with some of the realist debates of the time; for example, see his discussion note on Wittgenstein, “Propositions Applicable to Themselves”, reprinted in his Philosophical Studies (1934).

As a young philosopher, Russell was so impressed by McTaggart’s paper “The Further Determination of the Absolute” and its doctrine of philosophical love that he used it to woo his future wife. In his autobiography, Russell writes that he remembers wondering as a student ‘as an almost unattainable ideal, whether I should ever do work as good as McTaggart’s’ (Russell, 1998, 129). Later, their relationship soured; McTaggart took a leading role in the expulsion of Russell from his fellowship following Russell’s controversial pacifist wartime writings. For more on this, and on McTaggart’s more general influence on Russell, see Dickinson (1931) and Griffin (1984). McTaggart, Russell and Moore were described at one point as ‘The Mad Tea Party of Trinity’, with McTaggart painted as the Dormouse.

As for Broad, McTaggart describes him as his ‘most brilliant’ pupil. Broad edited the second volume of McTaggart’s The Nature of Existence, and produced extensive studies of both volumes. Both Moore and Broad heap praise upon McTaggart for his exceptional clarity and philosophic rigour; the lack of these qualities in other idealists played a part in driving both of these new realists away from British Idealism. For example, Broad writes: ‘The writings of too many eminent Absolutists seem to start from no discoverable premises; to proceed by means of puns, metaphors, and ambiguities; and to resemble in their literary style glue thickened with sawdust’ (Broad, 1933, ii). In contrast, Broad says of McTaggart that he ‘was an extremely careful and conscientious writer... [to] be ranked with Hobbes, Berkeley and Hume among the masters of English philosophical prose... [his] style is pellucidly clear’ (Broad, 1927, 308).

McTaggart not only enjoyed close relationships with the new realists; he also shared some basic philosophic tenets with them. For example, McTaggart and the new realists reject the Bradleian claim that reality and truth come in degrees. McTaggart argues that there is a ‘confusion’ which leads philosophers to move from one to the other (McTaggart, 1921, 4). McTaggart also rejects the coherence theory of truth espoused by British idealists such as Joachim (and, arguably, Bradley) in favour of the correspondence theory of truth (McTaggart, 1921, 10).

3. Philosophical Writings

a. Hegel

While many of the British idealists studied Hegel, few entered into the murky waters of Hegel scholarship. McTaggart is an exception: Hegel scholarship occupied McTaggart for most of his career. Hegel was a German idealist and his work is notoriously difficult. While some of the British idealists understood Hegel to be arguing that reality consists of a single partless spiritual being known as the Absolute, McTaggart took Hegel to be arguing for personal idealism.

Hegel is discussed in McTaggart’s very first publication, “The Further Determination of the Absolute” (1893). McTaggart argues that the progress of any idealistic philosophy may be divided into three stages: the proof that reality is not exclusively matter, the proof that reality is exclusively spirit, and the determination of the fundamental nature of that spirit. McTaggart describes Hegel’s understanding of the fundamental nature of spirit as follows. ‘Spirit is ultimately made up of various finite individuals, each of which finds his character and individuality by relating himself to the rest, and by perceiving that they are of the same nature as himself’. The individuals that make up spirit are interdependent, united by a pattern or design akin to an organic unity. McTaggart adds that justifying this ‘would be a task beyond the limits of this paper... it could only be done by going over the whole course of Hegel’s Logic’. One way of understanding the rest of McTaggart’s career is to see him as making good on his threat to justify Hegel’s understanding of spirit.

Just some of McTaggart’s works on Hegel include Studies in the Hegelian Dialectic (1896), Studies in Hegelian Cosmology (1901) and A Commentary on Hegel’s Logic (1910). A central theme in these books is the question of how the universe, as unified spirit, is differentiated into finite spirits ─ how can a unity also be a plurality? McTaggart takes Hegel to have solved this problem by postulating a unity which is not only in the individuals, but also for the individuals, in that reality is a system of conscious individuals wherein each individual reflects the whole: ‘If we take all reality, for the sake of convenience, as limited to three individuals, A, B, and C, and suppose them to be conscious, then the whole will be reproduced in each of them... [A will] be aware of himself, of B, and of C, and of the unity which joins them in a system’ (McTaggart, 1901, 14). Later, this is exactly the position that McTaggart himself advances. McTaggart also discusses Hegel’s dialectic method at length; this is the process whereby opposition between a thesis and an antithesis is resolved into a synthesis. For example, ‘being’ and ‘not being’ are resolved into ‘becoming’. Despite his admiration for this method, McTaggart does not use it in his Nature of Existence; instead of proceeding by dialectic, his argument proceeds via the more familiar method of principles and premises.

There is disagreement within contemporary Hegel scholarship as to how correct McTaggart’s reading of Hegel is. Stern argues that McTaggart’s reading of Hegel bears close similarities to contemporary readings, and that it should be seen as an important precursor (Stern, 2009, 121). In contrast, in his introduction to Some Dogmas of Religion, Broad argues that ‘if McTaggart’s account of Hegelianism be taken as a whole and compared with Hegel’s writings as a whole, the impression produced is one of profound unlikeness’. Similarly, Geach compares McTaggart’s acquaintance with Hegel’s writings to the chapter-and-verse knowledge of the Bible that out-of-the-way Protestant sectarians often have; he adds that the ‘unanimous judgement’ of Hegel scholars appears to be that McTaggart’s interpretations of Hegel were as perverse as these sectarians’ interpretations of the Bible (Geach, 1979, 17).

b. Some Dogmas of Religion

Some Dogmas of Religion (1906) is an exception to McTaggart’s main body of work, in that it assumes no knowledge of philosophy and is intended for a general audience. The book covers a large number of topics, from the compatibility of God’s attributes to human free will. This section picks out three of the book’s central themes: the role of metaphysics, McTaggart’s brand of atheism and the immortality of the soul.

McTaggart defines metaphysics as ‘the systematic study of the ultimate nature of reality’. A dogma is ‘any proposition which has a metaphysical significance’, such as belief in God (McTaggart, 1906, 1). McTaggart argues that dogmas can only be produced by reason ─ by engaging in metaphysics. Science does not produce dogmas, for scientific claims do not aim to express the fundamental nature of reality. For example, science tells us that the laws governing the part of the universe known as ‘matter’ are mechanical. Science does not go on to tell us whether these laws are manifestations of deeper laws, or of the will of God (McTaggart, 1906, 13-4). In fact, McTaggart argues that the consistency of science would be unaffected if its object of study ─ matter ─ turned out to be immaterial. To learn about the ultimate nature of the world, we must look to metaphysics, not science.

McTaggart embodies two apparently contradictory characteristics: he is religious and an atheist. The apparent contradiction is resolved by McTaggart’s definition of religion. ‘Religion is clearly a state of mind... an emotion resting on a conviction of a harmony between ourselves and the universe at large’ (McTaggart, 1906, 3). McTaggart aims to define religion as broadly as possible, so as to include the traditional systems ─ such as those of the Greeks, Roman Christians, Judaism and Buddhism ─ and the idiosyncratic ones espoused by philosophers like Spinoza and Hegel. Given this very broad definition of religion, McTaggart’s own system of personal idealism qualifies as religious. However, McTaggart is an atheist, for he denies the existence of God. In Some Dogmas of Religion McTaggart does not argue for atheism; he merely rejects some of the traditional arguments for theism. He defines God as ‘a being that is personal, supreme and good’ (McTaggart, 1906, 186) and argues that theistic arguments do not prove the existence of such a being. For example, the cosmological ‘first cause’ argument claims that if every event must have a cause, including the universe’s very first event, then the first cause must be a being which is itself uncaused: God. McTaggart argues that even if this argument is valid, it does not prove the existence of God, for it does not prove that the first existing being is either personal or good (McTaggart, 1906, 190-1). In The Nature of Existence, McTaggart goes even further than this and argues directly for atheism (McTaggart, 1927, 176-89).

Given that McTaggart denies the reality of time and the existence of God, it may seem strange that he also affirms the immortality of the human soul. However, McTaggart held all three of these claims throughout his life. In Some Dogmas of Religion, McTaggart takes the immortality of the soul as a postulate, and considers objections to it, such as the claim that the soul or self is an activity of the finite human body, or that it cannot exist without it. McTaggart argues that none of these objections are successful. For example, concerning the claim that the self is of such a nature that it cannot exist outside of its present body, McTaggart argues that while we have no evidence of disembodied selves, this shows at most that the self needs some body, not that it needs the body it currently has (McTaggart, 1906, 104-5). McTaggart concludes that the immortality of the soul is at least a real possibility, for souls can move from body to body. He argues that souls are immortal, not in the sense of existing at every time ─ for time does not exist ─ but in the sense that we enjoy a succession of lives, before and after this one. McTaggart calls this the doctrine of the ‘plurality of lives’ (McTaggart, 1906, 116). He goes on to argue that our journey throughout these lives is not guided by chance or mechanical necessity, but rather by the interests of spirit: love, which ‘would have its way’. For example, our proximity to our loved ones is not the product of chance or mechanical arrangement, but is rather caused by the fact that our spirits are more closely connected to these selves than to others. This accounts for phenomena such as ‘love at first sight’: we have loved these people before, in previous lives (McTaggart, 1906, 134-5). In The Nature of Existence, McTaggart puts forward a positive argument for the immortality of the soul and continues to emphasise that love is of the utmost importance. 
By affirming the immortality of the soul, McTaggart seems to take himself to be following Spinoza in making death ‘the least of all things’ (McTaggart, 1906, 299).

c. The Unreality of Time

McTaggart’s paper “The Unreality of Time” (1908) presents the argument he is best known for. (The argument of this paper is also included in the second volume of The Nature of Existence.) McTaggart notes that the belief in the unreality of time has proved ‘singularly attractive’ throughout the ages, and attributes such belief to Spinoza, Kant, Hegel and Bradley. (In the case of Spinoza, this attribution is arguable; Spinoza describes time as a general character of existents, albeit one conceived using the imagination.) McTaggart offers us a wholly new argument in favour of this belief; its outline follows.

McTaggart distinguishes two ways of ordering events or ‘positions’ in time: the A series takes some position as present, and orders other positions as running from the past to the present and from the present to the future; meanwhile the B series orders events in virtue of whether they are earlier or later than other events. The argument itself has two steps. In the first step, McTaggart aims to show that there is no time without the A series, because only the A series can account for change. On the B series nothing changes: any event N has ─ and will always have ─ the same position in the time series: ‘If N is ever earlier than O and later than M, it will always be, and has always been... since the relations of earlier and later are permanent’. In contrast, change does occur on the A series. For example, an event such as the death of Queen Anne began by being a future event, became present and then became past. Real change occurs only on the A series, as events move from being in the future, to being in the present, to being in the past.

In the second step, McTaggart argues that the A series cannot exist, and hence time cannot exist. He does so by attempting to show that the existence of the A series would generate contradiction because past, present and future are incompatible attributes; if an event M has the attribute of being present it cannot also be in the past and the future. However, McTaggart maintains that ‘every event has them all’ ─ for example, if M is past, then it has been present and future ─ which contradicts their incompatibility. As the application of the A series to reality involves a contradiction, the A series cannot be true of reality. This does not entail that our perceptions are false; on the contrary, McTaggart maintains that it is possible that the realities which we perceive as events in a time series do really form a non-temporal C series. Although this C series would not admit of time or change, it does admit of order. For example, if we perceive two events M and N as occurring at the same time, it may be that ─ while time does not exist ─ M and N have the same position in the ordering of the C series. McTaggart attributes this view of time to Hegel, claiming that Hegel regards the time series as a distorted reflection of something in the real nature of the timeless reality. In “The Unreality of Time”, McTaggart does not consider at length what the C series is; he merely suggests that the positions within it may be ultimate facts or that they are determined by varying quantities within objects. In “The Relation of Time and Eternity” (1909) ─ reprinted in his Philosophical Studies ─ McTaggart goes further than this. He compares our perception of time to viewing reality through a tinted glass, and suggests that the C series is an ordering of representations of reality according to how accurate they are. Our seemingly temporal perception that we are moving through time in fact reflects our movement towards the end point of this series, which is the correct perception of reality.
At this end point, reality is perceived as it really is: timeless. Time is thus understood as the process by which we reach the timeless. Later still, in the second volume of The Nature of Existence, McTaggart reconsiders this position and argues that while the objects of the C series are representations of reality, they are not ordered by veracity. Instead, McTaggart argues that the ‘fundamental sense’ of the C series is that it is ordered according to the ‘amount of content of the whole that is included in it’: it runs from the less inclusive to the more inclusive (McTaggart, 1927, 362). However, McTaggart does not give up his claim that the C series will reach a timeless end point. For more on this, see The Nature of Existence (1927), chapters 59-61.
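The two steps of the argument can be set out semi-formally. What follows is a modern reconstruction offered for clarity, not McTaggart’s own notation: ‘P’, ‘N’ and ‘F’ are assumed abbreviations for the attributes past, present (‘now’) and future.

```latex
% Step 1 (change): the B series alone cannot ground change, since
% B-relations (earlier/later) hold permanently; only A-determinations vary.

% Step 2 (contradiction): the A-determinations are pairwise incompatible,
\forall e\,\bigl[\lnot(Pe \land Ne) \land \lnot(Ne \land Fe) \land \lnot(Pe \land Fe)\bigr]
% yet every event has all three (if e is past, it has been present
% and has been future):
\forall e\,(Pe \land Ne \land Fe)

% The natural reply -- that no event has them at once, since e was
% future, is present, will be past -- unpacks the tenses via
% second-order A-determinations (e.g. ``e is future in the past''),
% to which the same incompatibility and universality reapply,
% generating a regress.
```

Whether the regress in the final step is vicious is precisely what the later literature disputes; the papers on the ‘indexical fallacy’ cited in this section deny that the second premise must be read in the tenseless way the argument requires.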

The reception of “The Unreality of Time” among McTaggart’s contemporaries was mixed. Ewing describes its implausible conclusion as ‘heroic’, while Broad describes it as ‘an absolute howler’. This argument is probably the most influential piece of philosophy that McTaggart ever produced. Although the paper’s conclusions are rarely endorsed in full, it is credited with providing the framework for a debate ─ between the A and B series of time ─ which is still alive today. For discussion, see Dummett’s “A Defence of McTaggart’s Proof of the Unreality of Time” (1960), Lowe’s “The Indexical Fallacy in McTaggart’s Proof of the Unreality of Time” (1987) and Le Poidevin and Mellor’s “Time, Change, and the ‘Indexical Fallacy’” (1987). For an extended, more recent discussion, see Dainton (2001).

d. The Nature of Existence

McTaggart’s magnum opus aims to provide a comprehensive, systematic a priori description of the world; the conclusion of this system is personal idealism. Broad claims that The Nature of Existence may quite fairly be ranked with the Enneads of Plotinus, the Ethics of Spinoza, and the Encyclopaedia of Hegel (Broad, 1927). The central argument of The Nature of Existence is based on the nature of substance and it is extremely complex. The bare bones of the argument consist of three steps, but along the way McTaggart makes use of a number of subsidiary arguments.

In the first step, McTaggart argues that the universe contains multiple substances. McTaggart defines a substance as whatever exists and has qualities, or stands in relations, but is not itself a quality or relation, entailing that the following are all substances: sneezes, parties and red-haired archdeacons (McTaggart, 1921, 73). Given this broad definition, McTaggart argues that at least one substance exists; this is established by the evidence of our senses, and indeed by the fact that there is anyone around to consider the statement at all. All substances have qualities (today, we would say ‘properties’) such as redness and squareness. If there are multiple substances, then relations hold between them. Although to contemporary philosophers the claim that relations are real is familiar, in the context of British Idealism it marks a significant departure from Bradley’s claim that relations are unreal. The qualities and relations possessed by a substance are jointly called its characteristics. McTaggart puts forward two kinds of arguments for the claim that there are multiple substances. Firstly, there are empirical proofs, such as the claim that if I and the things I perceive exist, then there are at least two substances (McTaggart, 1921, 75). Secondly, as we will see below, McTaggart argues that all substances can be differentiated into further substances. If this is true then it follows that if at least one substance exists, many exist.

In the second step, McTaggart places two necessary ontological conditions on the nature of substances ─ they must admit of sufficient descriptions, and they must be differentiated into further substances ─ which results in his theory of determining correspondence.

The first ontological condition McTaggart places on substances is that they must admit of sufficient descriptions. This grows out of McTaggart’s extended discussion of the ‘Dissimilarity of the Diverse’ ─ see Chapter 10 of the first volume of The Nature of Existence ─ which argues that diverse (that is, non-identical) things are dissimilar: two things cannot have the same nature. Similarity here is a matter of the qualities and relations a substance has. For example, McTaggart argues that if space is absolute then two things will occupy different spatial positions and stand in dissimilar spatial relations. McTaggart discusses the relationship between his principle, the ‘Dissimilarity of the Diverse’, and Leibniz’s principle, the ‘Identity of Indiscernibles’, which states that two things are identical if they are indiscernible. McTaggart prefers the name of his principle, for it does not suggest that there are indiscernibles which are identical but rather that there is nothing which is indiscernible from anything else. McTaggart goes on to argue that all substances admit of an ‘exclusive description’ which applies only to them via a description of their qualities. For example, the description ‘The continent lying over the South Pole’ applies to just one substance. All substances admit of exclusive descriptions because, given the Dissimilarity of the Diverse, no substance can have exactly the same nature as any other (McTaggart, 1921, 106). There are two kinds of exclusive descriptions: firstly, the kind that introduce another substance into the description, such as ‘The father of Henry VIII’; secondly, the kind known as ‘sufficient descriptions’, which describe a substance purely in terms of its qualities, without introducing another substance into the description, such as ‘The father of a monarch’.
McTaggart argues that all substances must admit of sufficient descriptions: all substances are dissimilar to all other substances and as a result they admit of exclusive descriptions. If a substance could not be described without making reference to other substances then we would arrive at an infinite regress (because, as we will see, all substances are differentiated to infinity) and the description would correspondingly be infinite (McTaggart, 1921, 108). Such a regress would be vicious because it would never be completed. As substances do exist, they must admit of sufficient descriptions.
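The Dissimilarity of the Diverse and its Leibnizian counterpart can be put in modern second-order notation. This is a reconstruction offered for clarity; McTaggart states both principles discursively rather than symbolically.

```latex
% Dissimilarity of the Diverse: distinct substances differ in nature.
\forall x\,\forall y\,\bigl[x \neq y \;\rightarrow\; \exists \varphi\,(\varphi x \land \lnot\varphi y)\bigr]

% Identity of Indiscernibles: indiscernible things are identical.
\forall x\,\forall y\,\bigl[\forall \varphi\,(\varphi x \leftrightarrow \varphi y) \;\rightarrow\; x = y\bigr]

% The two formulas are contrapositives of one another, hence logically
% equivalent; McTaggart's preferred name merely stresses that nothing
% is indiscernible from anything else, rather than suggesting a class
% of identical ``indiscernibles''.
```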

The second ontological condition placed on substances is that they are infinitely differentiated into proper parts which are also substances. By ‘differentiated’, McTaggart means that they are divisible, and divisible into parts unlike their wholes. To illustrate, a homogeneous ─ that is, uniform ─ liquid akin to milk might be infinitely divisible, but all of its parts would be like their wholes: they would merely be smaller portions of milk. In contrast, a heterogeneous ─ that is, non-uniform ─ liquid akin to a fruit smoothie would be infinitely divisible into parts that are unlike their whole: the whole might contain cherry and orange, while its parts contain pieces of cherry and orange respectively. McTaggart argues that all substances are infinitely differentiated by denying a priori that ‘simple’ partless substances are possible; he does so in two ways. The first way is based on divisibility. Simples would have to be indivisible in every dimension ─ in length, breadth and time ─ and this is impossible because even a substance like ‘pleasure’ has two dimensions, if it lasts for at least two moments of time (McTaggart, 1921, 175). The second way is based on the notion of content. A simple substance would be a substance without ‘content’ in that it would lack properties and would not stand in relations. McTaggart argues that it is part of our notion of a substance that it must have a ‘filling’ of some sort ─ an ‘internal structure’ ─ and this could only be understood to mean that it must have parts (McTaggart, 1921, 181). Both of these arguments are somewhat hazy; see Broad (1933) for an extensive discussion and critique.

McTaggart’s full account of parts and wholes ─ which discusses divisibility, simples and composition ─ can be found in the first volume of The Nature of Existence, chapters 15-22. McTaggart endorses the doctrine of unrestricted composition, whereby any two substances compose a further compound substance. It follows from this that the universe or ‘all that exists’ is a single substance composed of all other substances (McTaggart, 1921, 78). While we might doubt the existence of simples (that is, partless atoms) we cannot doubt the existence of the universe because it includes all content (McTaggart, 1921, 172). Given McTaggart’s claim that all substances are differentiated and that unrestricted composition occurs, it follows that all parts and all collections of substances are themselves substances. These dual claims have made their way into an argument within contemporary metaphysics by Jonathan Schaffer. In contemporary parlance, anything that is infinitely divisible into proper parts which also have proper parts is ‘gunky’. One way of understanding McTaggart is to see him as claiming that, while all substances lack a ‘lowest’ level ─ because they are gunky, infinitely divisible into further parts ─ all substances have a ‘highest’ level in the form of the universe, a substance which includes all content. Schaffer makes use of this asymmetry of existence ─ the fact that one can seriously doubt the existence of simples but not the existence of the universe as a whole ─ to argue for priority monism (Schaffer, 2010, 61).

With these two ontological conditions in place ─ that substances must admit of sufficient descriptions and be differentiated ─ McTaggart sets out to combine them into his theory of determining correspondence. This theory is extremely difficult and rather obscure; see Wisdom (1928) and Broad (1933). Essentially, McTaggart argues that the two ontological conditions result in contradiction unless substances fulfil a certain requirement. The worry is that a substance A cannot be given a sufficient description in virtue of sufficient descriptions of its parts M, for they can only be described in virtue of sufficient descriptions of their parts... and so on to infinity. This is a vicious series because the sufficient descriptions of the members of M can only be made sufficient by means of the last stage of an unending series; in other words, they cannot be made sufficient at all (McTaggart, 1921, 199). Of course, as there are substances, they must admit of sufficient descriptions. McTaggart’s way out of this apparent contradiction seems to be to reverse the direction of epistemological priority: we have been considering deducing a sufficient description of a substance in virtue of its parts; instead, we should be deducing sufficient descriptions of the parts in virtue of the substance of which they are parts. ‘[If] the contradiction is to be avoided, there must be some description of every substance which does imply sufficient descriptions of every part through all its infinite series of sets of parts’ (McTaggart, 1921, 204). The only way to provide such a description is via the law of determining correspondence, which asserts that each part of A is in a one-to-one correspondence with each term of its infinite series, the nature of the correspondence being such that, in the fact that a part of A corresponded in this way to a reality with a given nature, there would be implied a sufficient description of that part of A.
The theory of determining correspondence involves a classification of the contents of the universe. The universe is a primary whole and it divides into primary parts, whose sufficient descriptions determine ─ by virtue of the relation of determining correspondence ─ the sufficient description of all further, secondary parts.
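The shape of the regress, and the reversal that determining correspondence is meant to effect, can be sketched schematically. This is an interpretive gloss only; the notation ‘SD(x)’ for a sufficient description of x and ‘parts(x)’ for a set of parts of x is assumed here, not McTaggart’s.

```latex
% The threatened regress: A's sufficient description waits on its
% parts', theirs on their parts', and so on; since each stage is
% completed only by the stage after it, no stage is ever completed:
SD(A) \;\Leftarrow\; SD(\mathrm{parts}(A)) \;\Leftarrow\; SD(\mathrm{parts}^{2}(A)) \;\Leftarrow\; \cdots

% Determining correspondence reverses the direction of implication:
% some description of each substance implies sufficient descriptions
% down the whole infinite series of sets of parts:
SD(A) \;\Rightarrow\; SD(\mathrm{parts}(A)) \;\Rightarrow\; SD(\mathrm{parts}^{2}(A)) \;\Rightarrow\; \cdots
```

On this reading the primary parts carry descriptions rich enough to determine, via the one-to-one correspondence, sufficient descriptions of every secondary part beneath them.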

In the third step of his argument, McTaggart shows that the only way the nature of substance could comply with the theory of determining correspondence is if substance is spirit. He does this by eliminating the other candidates for the nature of substance: matter and sense data. His objections to both of these rival candidates are similar; we will focus on his rejection of matter. McTaggart argues that while there ‘might’ be no difficulty in the claim that matter is infinitely divisible, there certainly is difficulty in the claim that matter can allow for determining correspondence (McTaggart, 1927, 33). This is impossible because, in a material object, the sufficient description of the parts determines the sufficient description of the whole, not the other way around. ‘If we know the shape and size of each one of a set of parts of A, and their position relatively to each other, we know the size and shape of A... we shall thus have an infinite series of terms, in which the subsequent terms imply the precedent’ (McTaggart, 1927, 36). As we have already seen above, such a series will involve a contradiction, for the description will never ‘bottom out’. One way out of this contradiction might be to postulate that, at each level of composition, the parts bear a ‘new’ property ─ such as a new colour or taste ─ which would be sufficient to describe them. McTaggart swiftly dispenses with this reply by arguing that it would require matter to possess an infinite number of sorts of qualities ─ ‘one sort for each of the infinite series of grades of secondary parts’ ─ and there is no reason to suppose that matter possesses more than the very limited number of qualities that are currently known to us (McTaggart, 1927, 35). McTaggart briefly considers dividing matter to infinity in time, but dismisses the idea because, of course, for McTaggart time is unreal. McTaggart concludes that matter cannot exist.
Interestingly, he does not take this conclusion to imply anti-realism about science or common sense, for when those disciplines use terms which assume the existence of matter, what is meant by those terms ‘remains just as true’ if we take the view that matter does not exist (McTaggart, 1927, 53).

Having dispensed with its rivals, McTaggart turns to idealism. Spiritual substances include selves, their parts, and compounds of multiple selves. Idealism is compatible with the theory of determining correspondence when the primary parts of the universe are understood to be selves, and the secondary parts their perceptions, which are differentiated to infinity (McTaggart, 1927, 89). While this does not amount to a positive proof of idealism, it gives us good reason to believe that nothing but spirit exists, for there is no other option on the table (McTaggart, 1927, 115). McTaggart also describes how the universe is a ‘self-reflecting unity’, in that each part of the universe reflects every other part (McTaggart, 1921, 299). As we saw above, this is the view that McTaggart attributed to Hegel. McTaggart’s system also bears some similarity to the monadism advanced in Leibniz’s Monadology, wherein each monad is a spirit that reflects every other monad. Furthermore, in Leibniz’s system the highest ranks of monads are capable of entering into a community of pure love with God. Similarly, in McTaggart’s system (although there is no divine monarch) the souls are bound together by the purest form of love, which results in the purest form of happiness (McTaggart, 1927, 156). These arguments are but developments of principles that McTaggart had espoused his entire life.

This section merely describes the main thread of argument in The Nature of Existence; the work itself covers many more topics. These include the notion of organic unity, the nature of cogitation, volition, emotion, good and evil, and error. Further topics are also covered in McTaggart’s Philosophical Studies, such as the nature of causality and the role of philosophy as opposed to science.

4. References and Further Reading

a. Primary Sources

  • (1893) “The Further Determination of the Absolute”. Pamphlet designed for private distribution only. Reprinted in McTaggart’s Philosophical Studies.
  • (1893) “Time and the Hegelian Dialectic”. Mind Vol. 2, 490–504.
  • (1896) Studies in the Hegelian Dialectic. CUP: GB.
  • (1897) “Hegel's Treatment of the Categories of the Subjective Notion”. Mind Vol. 6, 342–358.
  • (1899) “Hegel's Treatment of the Categories of the Objective Notion”. Mind Vol. 8, 35–62.
  • (1900) “Hegel's Treatment of the Categories of the Idea”. Mind Vol. 9, 145–183.
  • (1901) Studies in Hegelian Cosmology. CUP: Glasgow.
  • (1906) Some Dogmas of Religion. Edward Arnold press: GB.
  • (1908) “The Unreality of Time”. Mind Vol. 17, 457–474.
  • (1909) “The Relation of Time to Eternity” Mind Vol. 18, 343-362.
  • (1910) A Commentary on Hegel's Logic. CUP: GB.
  • (1916) Human Immortality and Pre-Existence. Edward Arnold Press: GB.
  • (1921) The Nature of Existence I. CUP: London.
  • (1927)The Nature of Existence II. Edited by C. D. Broad. CUP: London.
  • (1934) Philosophical Studies. Edited by S.V. Keeling. Theomes Press: England.
    • [A large collection of McTaggart’s papers]

b. Selected Secondary Sources

  • Baldwin, Thomas (1990). G. E. Moore. Routledge: UK.
    • [Describes the relationship between Moore and McTaggart]
  • Bradley, F. (1920). Appearance and Reality. George Allen & Unwin Ltd: GB.
    • [Bradley is the arch British idealist]
  • Broad, C. D. (1927). “John McTaggart Ellis McTaggart 1866–1925”, Proceedings of the British Academy Vol. XIII, 307–334.
  • Broad, C. D. (1933). An Examination of McTaggart’s Philosophy. CUP: GB.
  • Dainton, Barry (2001). Time and Space. Acumen Publishing Ltd: GB.
    • [Provides an excellent discussion of McTaggart’s argument on the unreality of time]
  • Dickinson, G. Lowes (1931). J. M. E. McTaggart. CUP: GB.
  • Geach, Peter (1979). Truth, Love and Immortality: an Introduction to McTaggart's Philosophy. University of California Press: GB.
  • Moore, G.E. (1925). “Death of Dr. McTaggart”, Mind Vol. 34, 269–271.
  • Moore, G.E. (1942). “An Autobiography”, in The Philosophy of G.E. Moore. Tudor Publishing Company: GB.
  • Russell, Bertrand (1998). The Autobiography of Bertrand Russell. Routledge: GB.
  • Schaffer, Jonathan (2010). “Monism: The Priority of the Whole”, Philosophical Review Vol. 119, 31–76.
    • [Utilises McTaggart’s asymmetry of existence – between the non-existence of simples and the existence of the universe as a whole – in a new way]
  • Stern, Robert (2009). Hegelian Metaphysics. OUP: GB.
    • [Gives an excellent history of the movement, and discusses how close McTaggart’s interpretation of Hegel is to Hegel himself]
  • Wisdom, John (1928). “McTaggart’s Determining Correspondence of Substance: a Refutation”, Mind Vol. 37, 414–438.


Author Information

Emily Thomas
University of Cambridge
United Kingdom

The Lucas-Penrose Argument about Gödel's Theorem

In 1961, J.R. Lucas published “Minds, Machines and Gödel,” in which he formulated a controversial anti-mechanism argument.  The argument claims that Gödel’s first incompleteness theorem shows that the human mind is not a Turing machine, that is, a computer.  The argument has generated a great deal of discussion since then.  The influential Computational Theory of Mind, which claims that the human mind is a computer, is false if Lucas’s argument succeeds.  Furthermore, if Lucas’s argument is correct, then “strong artificial intelligence,” the view that it is possible at least in principle to construct a machine that has the same cognitive abilities as humans, is false.  However, numerous objections to Lucas’s argument have been presented.  Some of these objections involve the consistency or inconsistency of the human mind; if we cannot establish that human minds are consistent, or if we can establish that they are in fact inconsistent, then Lucas’s argument fails (for reasons made clear below).  Others object to various idealizations that Lucas’s argument makes.  Still others find some other fault with the argument.  Lucas’s argument was rejuvenated when the physicist R. Penrose formulated and defended a version of it in two books, 1989’s The Emperor's New Mind and 1994’s Shadows of the Mind. Although there are similarities between Lucas’s and Penrose’s arguments, there are also some important differences.  Penrose argues that the Gödelian argument implies a number of claims concerning consciousness and quantum physics; for example, consciousness must arise from quantum processes and it might take a revolution in physics for us to obtain a scientific explanation of consciousness.  
There have also been objections raised to Penrose’s argument and the various claims he infers from it: some question the anti-mechanism argument itself, some question whether it entails the claims about consciousness and physics that he thinks it does, while others question his claims about consciousness and physics, apart from his anti-mechanism argument.

Section one discusses Lucas’s version of the argument.  Numerous objections to the argument – along with Lucas’s responses to these objections – are discussed in section two. Penrose’s version of the argument, his claims about consciousness and quantum physics, and various objections that are specific to Penrose’s claims are discussed in section three. Section four briefly addresses the question, “What did Gödel himself think that his theorem implied about the human mind?”  Finally, section five mentions two other anti-mechanism arguments.

Table of Contents

  1. Lucas’s Original Version of the Argument
  2. Some Possible Objections to Lucas
    1. Consistency
    2. Benacerraf’s Criticism
    3. The Whiteley Sentence
    4. Issues Involving “Idealizations”
    5. Lewis’s Objection
  3. Penrose’s New Version of the Argument
    1. Penrose’s Gödelian Argument
    2. Consciousness and Physics
  4. Gödel’s Own View
  5. Other Anti-Mechanism Arguments
  6. References and Further Reading

1. Lucas’s Original Version of the Argument

Gödel’s (1931) first incompleteness theorem proves that any consistent formal system in which a “moderate amount of number theory” can be proven will be incomplete, that is, there will be at least one true mathematical claim that cannot be proven within the system (Wang 1981: 19).  The claim in question is often referred to as the “Gödel sentence.”  The Gödel sentence asserts of itself: “I am not provable in S,” where “S” is the relevant formal system.  Suppose that the Gödel sentence could be proven in S.  Then, assuming S is sound, the sentence would be true; but the sentence says precisely that it is not provable in S, so it would be both provable and unprovable in S, which is a contradiction.  Hence, if S is consistent, the Gödel sentence must be unprovable in S, and therefore true, because the sentence claims that it is not provable.  In other words, if consistent, S is incomplete, as there is a true mathematical claim that cannot be proven in S. For an introduction to Gödel’s theorem, see Nagel and Newman (1958).
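The reasoning just sketched can be displayed compactly. Writing Prov_S for S’s provability predicate and ⌜G⌝ for the code of the Gödel sentence G (standard notation, supplied here purely for illustration), the diagonal construction guarantees:

```latex
% The Gödel sentence G is constructed so that S proves:
G \leftrightarrow \neg\,\mathrm{Prov}_S(\ulcorner G \urcorner)

% If S proved G, then (S being sound) G would be true, i.e. G would be
% unprovable in S; this contradicts the supposition. Hence, if S is
% consistent:
S \nvdash G, \quad\text{and so } G \text{ is true.}
```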

Gödel’s proof is at the core of Lucas’s (1961) argument, which is roughly the following.  Consider a machine constructed to produce theorems of arithmetic.  Lucas argues that the operations of this machine are analogous to a formal system.  To explain, “if there are only a definite number of types of operation and initial assumptions built into the [machine], we can represent them all by suitable symbols written down on paper” (Lucas 1961: 115).  That is, we can associate specific symbols with specific states of the machine, and we can associate “rules of inference” with the operations of the machine that cause it to go from one state to another.  In effect, “given enough time, paper, and patience, [we could] write down an analogue of the machine’s operations,” and “this analogue would in fact be a formal proof” (ibid).  So essentially, the arithmetical claims that the machine will produce as output, that is, the claims the machine proves to be true, will “correspond to the theorems that can be proved in the corresponding formal system” (ibid).  Now suppose that we construct the Gödel sentence for this formal system.  Since the Gödel sentence cannot be proven in the system, the machine will be unable to produce this sentence as a truth of arithmetic.  However, a human can look and see that the Gödel sentence is true.  In other words, there is at least one thing that a human mind can do that no machine can.  Therefore, “a machine cannot be a complete and adequate model of the mind” (Lucas 1961: 113).  In short, the human mind is not a machine.

Here is how Lucas (1990: paragraph 3) describes the argument:

I do not offer a simple knock-down proof that minds are inherently better than machines, but a schema for constructing a disproof of any plausible mechanist thesis that might be proposed.  The disproof depends on the particular mechanist thesis being maintained, and does not claim to show that the mind is uniformly better than the purported mechanist representation of it, but only that it is in one respect better and therefore different.  That is enough to refute that particular mechanist thesis.

Further, Lucas (ibid) believes that a variant of his argument can be formulated to refute any future mechanist thesis.  To explain, Lucas seems to envision the following scenario:  a mechanist formulates a particular mechanistic thesis by claiming, for example, that the human mind is a Turing machine with a given formal specification S.  Lucas then refutes this thesis by producing S’s Gödel sentence, which we can see is true, but the Turing machine cannot.  Then, a mechanist puts forth a different thesis by claiming, for example, that the human mind is a Turing machine with formal specification S’.  But then Lucas produces the Gödel sentence for S’, and so on, until, presumably, the mechanist simply gives up.

One who has not studied Gödel’s theorem in detail might be wondering: why can’t we simply add the Gödel sentence to the list of theorems a given machine “knows,” thereby giving the machine the ability Lucas claims it does not have?  In Lucas’s argument, we consider some particular Turing machine specification S, and then we note that “S-machines” (that is, those machines that have formal specification S) cannot see the truth of the Gödel sentence while we can, so human minds cannot be S-machines, at least.  But why can’t we simply add the Gödel sentence to the list of theorems that S-machines can produce?  Doing so will presumably give the machines in question the ability that allegedly separates them from human minds, and Lucas’s argument falters.  The problem with this response is that even if we add the Gödel sentence to S-machines, thereby producing Turing machines that can produce the initial Gödel sentence as a truth of arithmetic, Lucas can simply produce a new Gödel sentence for these updated machines, one which allegedly we can see is true but the new machines cannot, and so on ad infinitum.  In sum, as Lucas (1990: paragraph 9) states,  “It is very natural…to respond by including the Gödelian sentence in the machine, but of course that makes the machine a different machine with a different Gödelian sentence all of its own.”  This issue is discussed further below.
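The regress can be made explicit. If S_n is any candidate machine specification and G(S_n) its Gödel sentence (the indices are introduced here for illustration), the mechanist’s repair generates:

```latex
S_0 = S, \qquad S_{n+1} = S_n + G(S_n)

% Each S_{n+1} now proves the old sentence G(S_n), but S_{n+1} is again
% a consistent formal system, so Gödel's theorem applies afresh:
S_{n+1} \nvdash G(S_{n+1})
```

The repair never terminates: every patched machine has a fresh Gödel sentence of its own.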

One reason Lucas’s argument has received so much attention is that if the argument succeeds, the widely influential Computational Theory of Mind is false.  Likewise, if the argument succeeds, then “strong artificial intelligence” is false; it is impossible to construct a machine that can perfectly mimic our cognitive abilities.  But there are further implications; for example, a view in philosophy of mind known as Turing machine functionalism claims that the human mind is a Turing machine, and of course, if Lucas is right, this form of functionalism is false. (For more on Turing machine functionalism, see Putnam (1960)).  So clearly there is much at stake.

2. Some Possible Objections to Lucas

Lucas’s argument has been, and still is, very controversial.  Some objections to the argument involve consistency; if we cannot establish our own consistency, or if we are in fact inconsistent, then Lucas’s argument fails (for reasons made clear below).  Furthermore, some have objected that the algorithm the human mind follows is so complex we might be forever unable to formulate our own Gödel sentence; if so, then maybe we cannot see the truth of our own Gödel sentence and therefore we might not be different from machines after all.  Others object to various idealizations that Lucas’s argument makes.  Still others find some other fault with the argument.  In this section, some of the more notable objections to Lucas’s argument are discussed.

a. Consistency

Lucas’s argument faces a number of objections involving the issue of consistency; there are two related though distinct lines of argument on this issue.  First, some claim that we cannot establish our own consistency, whether we are consistent or not.  Second, some claim that we are in fact inconsistent.  The success of either of these objections would be sufficient to defeat Lucas’s argument.  But first, to see why these objections (if successful) would defeat Lucas’s argument, recall that Gödel’s first incompleteness theorem states that if a formal system (in which we can prove a suitable amount of number theory) is consistent, the Gödel sentence is true but unprovable in the system.  That is, the Gödel sentence will be true and unprovable only in consistent systems.  In an inconsistent system, one can prove any claim whatsoever because in classical logic, any and all claims follow from a contradiction; that is, an inconsistent system will not be incomplete.  Now, suppose that a mechanist claims that we are Turing machines with formal specification S, and this formal specification is inconsistent (so the mechanist is essentially claiming that we are inconsistent).  Lucas’s argument simply does not apply in such a situation; his argument cannot defeat this mechanist.  Lucas claims that any machine will be such that there is a claim that is true but unprovable for the machine, and since we can see the truth of the claim but the machine cannot, we are not machines.  But if the machine in question is inconsistent, the machine will be able to prove the Gödel sentence, and so will not suffer from the deficiency that Lucas uses to separate machines from us.  In short, for Lucas’s argument to succeed, human minds must be consistent.
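The classical principle invoked here, that an inconsistent system proves every claim whatsoever, is ex falso quodlibet. A standard derivation, for arbitrary sentences P and Q, runs:

```latex
\begin{array}{lll}
1. & P          & \text{premise (one half of the contradiction)}\\
2. & \neg P     & \text{premise (the other half)}\\
3. & P \lor Q   & \text{from 1, by disjunction introduction}\\
4. & Q          & \text{from 2 and 3, by disjunctive syllogism}
\end{array}
```

Since Q was arbitrary, an inconsistent classical system proves everything, and so has no unprovable Gödel sentence. (Paraconsistent logics, discussed below, block this derivation by rejecting the step from 2 and 3 to 4.)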

Consequently, if one claims that we cannot establish our own consistency, this is tantamount to claiming that we cannot establish the truth of Lucas’s conclusion.  Furthermore, there are some good reasons for thinking that even if we are consistent, we cannot establish this.  For example, Gödel’s second incompleteness theorem, which quickly follows from his first theorem, claims that one cannot prove the consistency of a formal system S from within the system itself, so, if we are formal systems, we cannot establish our own consistency.  In other words, a mechanist can avoid Lucas’s argument by simply claiming that we are formal systems and therefore, in accordance with Gödel’s second theorem, cannot establish our own consistency.  Many have made this objection to Lucas’s argument over the years; in fact, Lucas discusses this objection in his original (1961) and attributes it to Rogers (1957) and Putnam.  Putnam made the objection in a conversation with Lucas even before Lucas’s (1961) (see also Putnam (1960)).  Likewise, Hutton (1976) argues from various considerations drawn from Probability Theory to the conclusion that we cannot assert our own consistency.  For example, Hutton claims that the probability that we are inconsistent is above zero, and that if we claim that we are consistent, this “is a claim to infallibility which is insensitive to counter-arguments to the point of irrationality” (Lucas 1976: 145).  In sum, for Lucas’s argument to succeed, we must be assured that humans are consistent, but various considerations, including Gödel’s second theorem, imply that we can never establish our own consistency, even if we are consistent.
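Gödel’s second incompleteness theorem, to which this objection appeals, can be stated schematically. Writing Con(S) for a standard arithmetized statement that S is consistent (notation supplied here for illustration):

```latex
% Second incompleteness theorem:
\text{if } S \text{ is consistent and proves enough arithmetic, then }
S \nvdash \mathrm{Con}(S)

% Applied to the mechanist's reply: if a mind simply is such a system S,
% then that mind cannot prove Con(S), that is, it cannot establish its
% own consistency by its own resources.
```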

Another possible response to Lucas is simply to claim that humans are in fact inconsistent Turing machines.  Whereas the objection above claimed that we can never establish our own consistency (and so cannot apply Gödel’s first theorem to our own minds with complete confidence), this new response simply outright denies that we are consistent.  If humans are inconsistent, then we might be equivalent to inconsistent Turing machines, that is, we might be Turing machines.  In short, Lucas concludes that since we can see the truth of the Gödel sentence, we cannot be Turing machines, but perhaps the most we can conclude from Lucas’s argument is that either we are not Turing machines or we are inconsistent Turing machines.  This objection has also been made many times over the years; Lucas (1961) considers this objection too in his original article and claims that Putnam also made this objection to him in conversation.

So, we see two possible responses to Lucas: (1) we cannot establish our own consistency, whether we are consistent or not, and (2) we are in fact inconsistent.  However, Lucas has offered numerous responses to these objections.  For example, Lucas thinks it is unlikely that an inconsistent machine could be an adequate representation of a mind.  He (1961: 121) grants that humans are sometimes inconsistent, but claims that “it does not follow that we are tantamount to inconsistent systems,” as “our inconsistencies are mistakes rather than set policies.”  When we notice an inconsistency within ourselves, we generally “eschew” it, whereas “if we really were inconsistent machines, we should remain content with our inconsistencies, and would happily affirm both halves of a contradiction” (ibid).  In effect, we are not inconsistent machines even though we are sometimes inconsistent; we are fallible but not systematically inconsistent.   Furthermore, if we were inconsistent machines, we would potentially endorse any proposition whatsoever (ibid).  As mentioned above, one can prove any claim whatsoever from a contradiction, so if we are inconsistent Turing machines, we would potentially believe anything.  But we do not generally believe any claim whatsoever (for example, we do not believe that we live on Mars), so it appears we are not inconsistent Turing machines.  One possible counter to Lucas is to claim that we are inconsistent Turing machines that reason in accordance with some form of paraconsistent logic (in paraconsistent logic, the inference from a contradiction to any claim whatsoever is blocked); if so, this explains why we do not endorse any claim whatsoever given our inconsistency (see Priest (2003) for more on paraconsistent logic).  
One could also argue that perhaps the inconsistency in question is hidden, buried deep within our belief system; if we are not aware of the inconsistency, then perhaps we cannot use the inconsistency to infer anything at all (Lucas himself mentions this possibility in his (1990)).

Lucas also argues that even if we cannot prove the consistency of a system from within the system itself, as Gödel’s second theorem demonstrates, there might be other ways to determine if a given system is consistent or not.  Lucas (1990) points out that there are finitary consistency proofs for both the propositional calculus and the first-order predicate calculus, and there is also Gentzen’s proof of the consistency of Elementary Number Theory.  Discussing Gentzen’s proof in more detail, Lucas (1996) argues that while Gödel's second theorem demonstrated that we cannot prove the consistency of a system from within the system itself, it might be that we can prove that a system is consistent with considerations drawn from outside the system.  One very serious problem with Lucas’s response here, as Lucas (ibid) himself notes, is that the wider considerations that such a proof uses must be consistent too, and this can be questioned.  Another possible response is the following: maybe we can “step outside” of, say, Peano arithmetic and argue that Peano arithmetic is consistent by appealing to considerations that are outside of Peano arithmetic; however, it isn’t clear that we can “step outside” of ourselves to show that we are consistent.

Lucas (1976: 147) also makes the following “Kantian” point:

[perhaps] we must assume our own consistency, if thought is to be possible at all.  It is, perhaps like the uniformity of nature, not something to be established at the end of a careful chain of argument, but rather a necessary assumption we must make if we are to start on any thinking at all.

A possible reply is that assuming we are consistent (because this assumption is a necessary precondition for thought) and our actually being consistent are two different things, and even if we must assume that we are consistent to get thought off of the ground, we might be inconsistent nevertheless.  Finally, Wright (1995) has argued that an intuitionist, at least, who advances Lucas’s argument, can overcome the worry over our consistency.

b. Benacerraf’s Criticism

Benacerraf (1967) makes a well-known criticism of Lucas’s argument.  He points out that it is not easy to construct a Gödel sentence and that in order to construct a Gödel sentence for any given formal system one must have a solid understanding of the algorithm at work in the system.  Further, the formal system the human mind might implement is likely to be extremely complex, so complex, in fact, that we might never obtain the insight into its character needed to construct our version of the Gödel sentence.  In other words, we understand some formal systems, such as the one used in Whitehead and Russell’s (1910) Principia Mathematica, well enough to construct and see the truth of the Gödel sentence for these systems, but this does not entail that we can construct and see the truth of our own Gödel sentence.  If we cannot, then perhaps we are not different from machines after all; we might be very complicated Turing machines, but Turing machines nevertheless.  To rephrase this objection, suppose that a mechanist produces a complex formal system S and claims that human minds are S.  Of course, Lucas will then try to produce the Gödel sentence for S to show that we are not S.  But S is extremely complicated, so complicated that Lucas cannot produce S’s Gödel sentence, and so cannot disprove this particular mechanistic thesis.  In sum, according to Benacerraf, the most we can infer from Lucas’s argument is a disjunction: “either no (formal system) encodes all human arithmetical capacity – the Lucas-Penrose thought – or any system which does has no axiomatic specification which human beings can comprehend” (Wright, 1995, 87).  One response Lucas (1996) makes is that he [Lucas] could be helped in the effort to produce the Gödel sentence for any given formal system/machine.  Other mathematicians could help, and so could computers.
In short, at least according to Lucas, it might be difficult, but it seems that we could, at least in principle, determine what the Gödelian formula is for any given system.

c. The Whiteley Sentence

Whiteley (1962) responded to Lucas by arguing that humans have limitations similar to the one that Lucas’s argument attributes to machines; if so, then perhaps we are not different from machines after all.  Consider, for example, the “Whiteley sentence”: “Lucas cannot consistently assert this formula.”  If Lucas asserts the sentence, he thereby renders himself inconsistent; if he refrains from asserting it, then the sentence is true, and there is a truth Lucas cannot assert, so Lucas is incomplete in just the way his argument says machines are.  Hofstadter (1981) also argues against Lucas along these lines, claiming that we would not even believe the Whiteley sentence, while Martin and Engleman (1990) defend Lucas on this point by arguing against Hofstadter (1981).

d. Issues Involving “Idealizations”

A number of objections to Lucas’s argument involve various “idealizations” that the argument makes (or at least allegedly makes).  Lucas’s argument sets up a hypothetical scenario involving a mind and a machine, “but it is an idealized mind and an idealized machine,” neither of which is subject to limitations arising from, say, human mortality or the inability of some humans to understand Gödel’s theorem, and some believe that once these idealizations are rejected, Lucas’s argument falters (Lucas 1990: paragraph 6).  Several specific instances of this line of argument are considered in successive paragraphs.

Boyer (1983) notes that the output of any human mind is finite.  Since it is finite, it could be programmed into and therefore simulated by a machine.  In other words, once we stop ignoring human finitude, that is, once we reject one of the idealizations in Lucas’s argument, we are not different from machines after all.  Lucas (1990: paragraph 8) thinks this objection misses the point: “What is in issue is whether a computer can copy a living me, when I have not yet done all that I shall do, and can do many different things.  It is a question of potentiality rather than actuality that is in issue.”  Lucas’s point seems to be that what is really at issue is what can be done by a human and a machine in principle; if, in principle, the human mind can do something that a machine cannot, then the human mind is not a machine, even if it just so happens that any particular human mind could be modeled by a machine as a result of human finitude.

Lucas (1990: paragraph 9) remarks, “although some degree of idealization seems allowable in considering a mind untrammeled by mortality…, doubts remain about how far into the infinite it is permissible to stray.”    Recall the possible objection discussed above (in section 1) in which the mechanist, when faced with Lucas’s argument, responds by simply producing a new machine that is just like the last except it contains the Gödel sentence as a theorem.  As Lucas points out, this will simply produce a new machine that has a different Gödel sentence, and this can go on forever.  Some might dispute this point though.  For example, some mechanists might try “adding a Gödelizing operator, which gives, in effect a whole denumerable infinity of Gödelian sentences” (Lucas 1990: paragraph 9).  That is, some might try to give a machine a method to construct an infinite number of Gödel sentences; if this can be done, then perhaps any Gödel sentence whatsoever can be produced by the machine.  Lucas (1990) argues that this is not the case, however; a machine with such an operator will have its own Gödel sentence, one that is not on the initial list produced by the operator.  This might appear impossible: how, if the initial list is infinite, can there be an additional Gödel sentence that is not on the list?  It is not impossible though: the move from the initial infinite list of Gödel sentences to the additional Gödel sentence will simply be a move into the “transfinite,” a higher level of infinity than that of the initial list.  It is widely accepted in mathematics, and has been for quite some time, that there are different levels of infinity.
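The move into the transfinite that Lucas describes can be pictured with ordinal indices, a device that goes back to Turing’s work on ordinal logics (the notation below is supplied for illustration):

```latex
S_0 = S, \qquad S_{\alpha+1} = S_\alpha + G(S_\alpha), \qquad
S_\lambda = \bigcup_{\alpha < \lambda} S_\alpha \;\; (\lambda \text{ a limit ordinal})

% A "Gödelizing operator" in effect enumerates S_0, S_1, S_2, \ldots and
% so proves G(S_n) for every finite n. But the union S_\omega of that
% denumerable list is itself a formal system with its own Gödel sentence
% G(S_\omega), which is not on the list; the argument then recurs at
% \omega + 1, \omega + 2, and so on through the transfinite.
```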

Coder (1969) argues that Lucas has an overly idealized view of the mathematical abilities of many people; to be specific, Coder thinks that Lucas overestimates the degree to which many people can understand Gödel’s theorem and this somehow creates a problem for Lucas’s argument.  Coder holds that since many people cannot understand Gödel’s theorem, all Lucas has shown is that a handful of competent mathematical logicians are not machines (the idea is that Lucas’s argument only shows that those who can produce and see the truth of the Gödel sentence are not machines, but not everyone can do this).  Lucas (1970a) responds by claiming, for example, that the only difference between those who can understand Gödel’s theorem and those who cannot is that, in the case of the former, it is more obvious that they are not machines; it isn’t, say, that some people are machines and others are not.

Dennett (1972) has claimed there is something odd about Lucas’s argument insofar as it seems to treat humans as creatures that simply wander around asserting truths of first-order arithmetic.  Dennett (1972: 530) remarks,

Men do not sit around uttering theorems in a uniform vocabulary, but say things in earnest and jest, make slips of the tongue, speak several languages…, and – most troublesome for this account – utter all kinds of nonsense and contradictions….

Lucas’s (1990: paragraph 7) response is that these differences between humans and machines that Dennett points to are sufficient for some philosophers to reject mechanism, and that he [Lucas] is simply giving mechanism the benefit of the doubt by assuming that they can explain these differences.  Furthermore, humans can, and some actually do, produce theorems of elementary number theory as output, so any machine that cannot produce all of these theorems cannot be an adequate model of the human mind.

e. Lewis’s Objection

Lewis (1969) has also formulated an objection to Lucas’s argument:

Lewis argues that I [that is, Lucas] have established that there is a certain Lucas arithmetic which is clearly true and cannot be the output of some Turing machine. If I could produce the whole of Lucas arithmetic, then I would certainly not be a Turing machine. But there is no reason to suppose that I am able in general to verify theoremhood in Lucas arithmetic (Lucas 1970: 149).

To clarify, “Peano arithmetic” is the arithmetic that machines can produce and “Lucas arithmetic” is the arithmetic that humans can produce, and Lucas arithmetic will contain Gödel sentences while Peano arithmetic will not, so humans are not machines, at least according to Lucas’s argument.  But Lewis (1969) claims that Lucas has not shown us that he (or anyone else, for that matter) can in fact produce Lucas arithmetic in its entirety, which he must do if his argument is to succeed, so Lucas’s argument is incomplete.   Lucas responds that he does not need to produce Lucas arithmetic in its entirety for his argument to succeed.  All he needs to do to disprove mechanism is produce a single theorem that a human can see is true but a machine cannot; this is sufficient.  Lucas (1970: 149) holds that “what I have to do is to show that a mind can produce not the whole of Lucas arithmetic, but only a small, relevant part.  And this I think I can show, thanks to Gödel's theorem.”

3. Penrose’s New Version of the Argument

Penrose has formulated and defended versions of the Gödelian argument in two books, 1989’s The Emperor’s New Mind and 1994’s Shadows of the Mind. Since the latter is at least in part an attempt to improve upon the former, this discussion will focus on the latter.  Penrose’s (1994) consists of two main parts: (a) a Gödelian argument to show that human minds are non-computable and (b) an attempt to infer a number of claims involving consciousness and physics from (a).  (a) and (b) are discussed in successive sections.

a. Penrose’s Gödelian Argument

Penrose has defended different versions of the Gödelian argument.  In his earlier work, he defended a version of the argument that was relatively similar to Lucas’s, although there were some minor differences; for example, Penrose used Turing’s theorem, which is closely related to Gödel’s first incompleteness theorem.  Insofar as this version of the argument overlaps with Lucas’s, it faces many of the same objections as Lucas’s argument.  In his (1994) though, Penrose formulates a version of the argument that differs more significantly from Lucas’s version.  Penrose regards this version “as the central (new) core argument against the computational modelling of mathematical understanding” offered in his (1994) and notes that some commentators seem to have completely missed the argument (Penrose 1996: 1.3).

Here is a summary of the new argument (this summary closely follows that given in Chalmers (1995: 3.2), as it is the clearest and most succinct formulation of the argument I know of): (1) suppose that “my reasoning powers are captured by some formal system F,” and, given this assumption, “consider the class of statements I can know to be true.”  (2) Since I know that I am sound, F is sound, and so is F’, which is simply F plus the assumption (made in (1)) that I am F (incidentally, a sound formal system is one whose theorems are all true).  But then (3) “I know that G(F’) is true, where this is the Gödel sentence of the system F’” (ibid).  However, (4) Gödel’s first incompleteness theorem shows that F’ cannot prove G(F’), and so cannot see that its Gödel sentence is true.  Further, (5) I am F’ (since F’ is merely F plus the assumption made in (1) that I am F), and since I can see the truth of the Gödel sentence, F’ can see the truth of the Gödel sentence.  That is, (6) we have reached a contradiction (F’ can both see the truth of its Gödel sentence and cannot see the truth of its Gödel sentence).  Therefore, (7) our initial assumption must be false: F, and indeed any formal system whatsoever, fails to capture my reasoning powers.
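
The structure of the reductio can be displayed schematically.  The following is an informal gloss of steps (1)–(7), not Penrose’s own notation (⊢ abbreviates provability; G(F′) is the Gödel sentence of F′):

```latex
\begin{align*}
&(1)\ \text{Assume my reasoning powers are captured by some formal system } F.\\
&(2)\ \text{Let } F' = F + \text{``I am } F\text{''; since I know I am sound, } F' \text{ is sound}.\\
&(3)\ \text{I know that } G(F') \text{ is true}.\\
&(4)\ \text{By G\"odel's first incompleteness theorem, } F' \nvdash G(F').\\
&(5)\ \text{But I am } F'\text{, and I can see that } G(F') \text{ is true, so } F' \vdash G(F').\\
&(6)\ \text{Contradiction: } F' \vdash G(F') \text{ and } F' \nvdash G(F').\\
&(7)\ \text{Hence the assumption in (1) is false: no formal system captures my reasoning powers}.
\end{align*}
```

Laid out this way, it is easy to see why step (2), the soundness claim, bears so much weight: it is what licenses the move from (1) to (3).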

Chalmers (1995: 3.6) thinks the “greatest vulnerability” of this version of the argument is step (2); specifically, he thinks the claim that we know that we are sound is problematic (he attempts to show that it leads to a contradiction; see Chalmers 1995: section 3).  Others aside from Chalmers also reject the claim that we know that we are sound, or else they reject the claim that we are sound to begin with (in which case we do not know that we are sound either, since one cannot know a falsehood).  For example, McCullough (1995: 3.2) claims that for Penrose’s argument to succeed, two claims must be true: (1) “Human mathematical reasoning is sound.  That is, every statement that a competent human mathematician considers to be “unassailably true” actually is true,” and (2) “The fact that human mathematical reasoning is sound is itself considered to be “unassailably true.””  These claims seem implausible to McCullough (1995: 3.4), though, who remarks, “For people (such as me) who have a more relaxed attitude towards the possibility that their reasoning might be unsound, Penrose's argument doesn't carry as much weight.”  In short, McCullough (1995) thinks it is at least possible that mathematicians are unsound, so we do not definitively know that mathematicians are sound.  McDermott (1995) also questions this aspect (among others) of Penrose’s argument.  Looking at the way that mathematicians actually work, he (1995: 3.4) claims, “it is difficult to see how thinkers like these could even be remotely approximated by an inference system that chugs to a certifiably sound conclusion, prints it out, then turns itself off.”  For example, McDermott points out that in 1879 Kempe published a purported proof of the four-color theorem whose flaw was not discovered until 1890, by Heawood; that is, it appears there was an 11-year period during which many competent mathematicians were unsound.

Penrose attempts to overcome such difficulties by distinguishing between individual, correctable mistakes that mathematicians sometimes make and things they know are “unassailably” true.  He (1994: 157) claims “If [a] robot is…like a genuine mathematician, although it will still make mistakes from time to time, these mistakes will be correctable…according to its own internal criteria of “unassailable truth.””  In other words, while mathematicians are fallible, they are still sound because their mistakes can be distinguished from things they know are unassailably true and can also be corrected (and any machine, if it is to mimic mathematical reasoning, must be the same way).  The basic idea is that mathematicians can make mistakes and still be sound because only the unassailable truths matter; these truths are the output of a sound system, and we need not worry about the rest of the output of mathematicians.  McDermott (1995) remains unconvinced; for example, he wonders what “unassailability” means in this context and thinks Penrose is far too vague on the subject.  For more on these issues, including further responses to these objections from Penrose, see Penrose (1996).

b. Consciousness and Physics

One significant difference between Lucas’s and Penrose’s discussions of the Gödelian argument is that, as alluded to above, Penrose infers a number of further claims from the argument concerning consciousness and physics.  Penrose thinks the Gödelian argument implies, for example, that consciousness must somehow arise from the quantum realm (specifically, from the quantum properties of “microtubules”) and that we “will have no chance…[of understanding consciousness]… until we have a much more profound appreciation of the very nature of time, space, and the laws that govern them” (Penrose 1994: 395).  Many critics focus their attention on defeating Penrose’s Gödelian argument, thinking that if it fails, we have little or no reason to endorse Penrose’s claims about consciousness and physics.  McDermott (1995: 2.2) remarks, “all the plausibility of Penrose's theory of “quantum consciousness” in Part II of the book depends on the Gödel argument being sound,” so, if we can refute the Gödelian argument, we can easily reject the rest.  Likewise, Chalmers (1995: 4.1) claims that the “reader who is not convinced by Penrose’s Gödelian arguments is left with little reason to accept his claims that physics is non-computable and that quantum processes are essential to cognition...”  While there is little doubt that Penrose’s claims about consciousness and physics are largely motivated by the Gödelian argument, Penrose thinks that one might be led to such views in the absence of the Gödelian argument (for example, Penrose (1994) appeals to Libet’s (1992) work in an effort to show that consciousness cannot be explained by classical physics).  Some (such as Maudlin (1995)) doubt that there even is a link between the Gödelian argument and Penrose’s claims about consciousness and physics; therefore, even if the Gödelian argument is sound, this might not imply that Penrose’s views about consciousness and physics are true.  Still others have offered objections that directly and specifically attack Penrose’s claims about consciousness and physics, apart from his Gödelian argument; some of these objections are now briefly discussed.

Some have expressed doubts over whether quantum effects can influence neural processes.  Klein (1995: 3.4) states “it will be difficult to find quantum effects in pre-firing neural activity” because the brain operates at too high a temperature and “is made of floppy material (the neural proteins can undergo an enormously large number of different types of vibration).”  Furthermore, Penrose “discusses how microtubules can alter synaptic strengths…but nowhere is there any discussion of the nature of synaptic modulations that can be achieved quantum-mechanically but not classically” (Klein 1995: 3.6).  Also, “the quantum nature of neural activity across the brain must be severely restricted, since Penrose concedes that neural firing is occurring classically” (Klein 1995: 3.6).  In sum, at least given what we know at present, it is far from clear that events occurring at the quantum level can have any effect, or at least much of an effect, on events occurring at the neural level.  Penrose (1994) hopes that the specific properties of microtubules can help overcome such issues.

As mentioned above, the Gödelian argument, if successful, would show that strong artificial intelligence is false, and of course Penrose thinks strong A.I. is false.   However, Chalmers (1995: 4.2) argues that Penrose’s skepticism about artificial intelligence is driven largely by the fact that “it is so hard to see how the mere enaction of a computation should give rise to an inner subjective life.”  But it isn’t clear how locating the origin of consciousness in quantum processes that occur in microtubules is supposed to help: “Why should quantum processes in microtubules give rise to consciousness, any more than computational processes should?  Neither suggestion seems appreciably better off than the other” (ibid).  According to Chalmers, Penrose has simply replaced one mystery with another.  Chalmers (1995: 4.3) feels that “by the end of the book the “Missing Science of Consciousness” seems as far off as it ever was.”

Baars (1995) has doubts that consciousness is even a problem in or for physics (of course, some philosophers have had similar doubts).  Baars (1995: 1.3) writes,

The…beings we see around us are the products of billions of years of biological evolution. We interact with them – with each other – at a level that is best described as psychological. All of our evidence regarding consciousness …would seem to be exclusively psychobiological.

Furthermore, Baars cites much promising current scientific work on consciousness, points out that some of these theories have not yet been disproven and that, relatively speaking, our attempt to explain consciousness scientifically is still in its infancy, and concludes that “Penrose's call for a scientific revolution seems premature at best” (Baars 1995: 2.3).  Baars is also skeptical of the claim that the solution to the problem of consciousness will come from quantum mechanics specifically.  He claims “there is no precedent for physicists deriving from [quantum mechanics] any macro-level phenomenon such as a chair or a flower…much less a nervous system with 100 billion neurons” (Baars 1995: 4.2) and remarks that it seems to be a leap of faith to think that quantum mechanics can unravel the mystery of consciousness.

4. Gödel’s Own View

One interesting question that has not yet been addressed is: what did Gödel think his first incompleteness theorem implied about mechanism and the mind in general?  Gödel, who discussed his views on this issue in his famous “Gibbs lecture” in 1951, stated,

So the following disjunctive conclusion is inevitable: Either mathematics is incompletable in this sense, that its evident axioms can never be comprised in a finite rule, that is to say, the human mind (even within the realm of pure mathematics) infinitely surpasses the powers of any finite machine, or else there exist absolutely unsolvable diophantine problems of the type specified . . . (Gödel 1995: 310).

That is, his result shows that either (i) the human mind is not a Turing machine or (ii) there are certain absolutely unsolvable mathematical problems.  However, Lucas (1998: paragraph 1) goes even further and argues “it is clear that Gödel thought the second disjunct false,” that is, Gödel “was implicitly denying that any Turing machine could emulate the powers of the human mind.”  So, perhaps the first thinker to endorse a version of the Lucas-Penrose argument was Gödel himself.

5. Other Anti-Mechanism Arguments

Finally, there are some alternative anti-mechanism arguments to Lucas-Penrose.  Two are briefly mentioned.  McCall (1999) has formulated an interesting argument.  A Turing machine can only know what it can prove, and to a Turing machine, provability would be tantamount to truth.  But Gödel’s theorem seems to imply that truth is not always provability.  The human mind can handle cases in which truth and provability diverge.  A Turing machine, however, cannot.  But then we cannot be Turing machines.  A second alternative anti-mechanism argument is formulated in Cogburn and Megill (2010).  They argue that, given certain central tenets of Intuitionism, the human mind cannot be a Turing machine.
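
McCall’s argument, too, can be set out as an informal schema.  The following is a gloss on the summary above, with premises and conclusions labeled (again, ⊢ abbreviates provability):

```latex
\begin{align*}
&(P1)\ \text{For a Turing machine } M\text{, knowledge coincides with proof: } M \text{ knows } p \iff M \vdash p.\\
&(P2)\ \text{By G\"odel's theorem, truth and provability can diverge: some true } p \text{ satisfies } M \nvdash p.\\
&(P3)\ \text{The human mind can handle cases in which truth and provability diverge}.\\
&(C1)\ \text{By (P1), no Turing machine can handle such cases}.\\
&(C2)\ \text{Therefore, by (P3) and (C1), the human mind is not a Turing machine}.
\end{align*}
```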

6. References and Further Reading

  • Benacerraf, P. (1967). “God, the Devil, and Gödel,” Monist 51:9-32.
    • Makes a number of objections to Lucas’s argument; for example, the complexity of the human mind implies that we might be unable to formulate our own Gödel sentence.
  • Boyer, D. (1983). “J. R. Lucas, Kurt Godel, and Fred Astaire,” Philosophical Quarterly 33:147-59.
    • Argues, among other things, that human output is finite and so can be simulated by a Turing machine.
  • Chalmers, D. J. (1995). “Minds, Machines, and Mathematics,” Psyche 2:11-20.
    • Contra Penrose, we cannot know that we are sound.
  • Coder, D. (1969). “Gödel’s Theorem and Mechanism,” Philosophy 44:234-7.
    • Not everyone can understand Gödel, so Lucas’s argument does not apply to everyone.
  • Cogburn, J. and Megill, J. (2010).  “Are Turing machines Platonists?  Inferentialism and the Philosophy of Mind,” Minds and Machines 20(3): 423-40.
    • Intuitionism and Inferentialism entail the falsity of the Computational Theory of Mind.
  • Dennett, D.C. (1972). “Review of The Freedom of the Will,” The Journal of Philosophy 69: 527-31.
    • Discusses Lucas’s The Freedom of the Will, and specifically his Gödelian argument.
  • Feferman, S. (1996). “Penrose's Godelian argument,” Psyche 2(7).
    • Points out some technical mistakes in Penrose’s discussion of Gödel’s first theorem.  Penrose responds in his (1996).
  • Gödel, K. (1931). “Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I,” Monatshefte für Mathematik und Physik 38: 173-198.
    • Gödel’s first incompleteness theorem.
  • Gödel, K. (1995). Collected Works III (ed. S. Feferman). New York: Oxford University Press.
    • Gödel discusses his first theorem and the human mind.
  • Hofstadter, D. R. and Dennett, D. C. (1981).  The Mind's I: Fantasies and Reflections on Self and Soul.  New York: Basic Books.
    • Contains Hofstadter’s discussion of the Whiteley sentence.
  • Hutton, A. (1976). “This Gödel is Killing Me,” Philosophia 3:135-44.
    • Probabilistic arguments that show that we can’t know we are consistent.
  • Klein, S.A. (1995). “Is Quantum Mechanics Relevant to Understanding Consciousness?” Psyche 2(3).
    • Questions Penrose’s claims about consciousness arising from the quantum mechanical realm.
  • Lewis, D. (1969). “Lucas against Mechanism,” Philosophy 44:231-3.
    • Lucas cannot produce all of “Lucas Arithmetic.”
  • Libet, B. (1992). “The Neural Time-factor in Perception, Volition and Free Will,” Revue de Métaphysique et de Morale 2:255-72.
    • Penrose appeals to Libet to show that classical physics cannot account for consciousness.
  • Lucas, J. R. (1961). “Minds, Machines and Gödel,” Philosophy 36:112-127.
    • Lucas’s first article on the Gödelian argument.
  • Lucas, J. R. (1968). “Satan Stultified: A Rejoinder to Paul Benacerraf,” Monist 52:145-58.
    • A response to Benacerraf’s (1967).
  • Lucas, J. R. (1970a). “Mechanism: A Rejoinder,” Philosophy 45:149-51.
    • Lucas’s response to Coder (1969) and Lewis (1969).
  • Lucas, J. R. (1970b). The Freedom of the Will. Oxford: Oxford University Press.
    • Discusses and defends the Gödelian argument.
  • Lucas, J. R. (1976). “This Gödel is killing me: A rejoinder,” Philosophia 6:145-8.
    • Lucas’s reply to Hutton (1976).
  • Lucas, J. R. (1990). “Mind, machines and Gödel: A retrospect.”  A paper read to the Turing Conference at Brighton on April 6th.
    • Overview of the debate; Lucas considers numerous objections to his argument.
  • Lucas, J. R. (1996).  “The Godelian Argument: Turn Over the Page.”  A paper read at a BSPS conference in Oxford.
    • Another overview of the debate.
  • Lucas, J. R. (1998).  “The Implications of Gödel’s Theorem.”  A paper read to the Sigma Club.
    • Another overview.
  • Martin, J. and Engleman, K. (1990). “The Mind’s I Has Two Eyes,” Philosophy 510-16.
    • More on the Whiteley sentence.
  • Maudlin, T. (1995).  “Between the Motion and the Act…,” Psyche 2:40-51.
    • There is no connection between Penrose’s Gödelian argument and his views on consciousness and physics.
  • McCall, S. (1999).  “Can a Turing Machine Know that the Gödel Sentence is True?”  Journal of Philosophy 96(10): 525-32.
    • An anti-mechanism argument.
  • McCullough, D. (1995). “Can Humans Escape Gödel?” Psyche 2:57-65.
    • Among other things, doubts that we know we are sound.
  • McDermott, D. (1995). “Penrose is Wrong,” Psyche 2:66-82.
    • Criticizes Penrose on a number of issues, including the soundness of mathematicians.
  • Nagel, E. and Newman, J.R. (1958).  Gödel’s Proof.  New York: New York University Press.
    • Short and clear introduction to Gödel’s first incompleteness theorem.
  • Penrose, R. (1989). The Emperor's New Mind. Oxford: Oxford University Press.
    • Penrose’s first book on the Gödelian argument and consciousness.
  • Penrose, R. (1994).  Shadows of the Mind.  Oxford: Oxford University Press.
    • Human reasoning cannot be captured by a formal system; consciousness arises from the quantum realm; we need a revolution in physics to fully understand consciousness.
  • Penrose, R. (1996). “Beyond the Doubting of a Shadow,” Psyche 2(23).
    • Responds to various criticisms of his (1994).
  • Priest, G. (2003). “Inconsistent Arithmetic: Issues Technical and Philosophical,” in Trends in Logic: 50 Years of Studia Logica (eds. V. F. Hendricks and J. Malinowski), Dordrecht: Kluwer Academic Publishers.
    • Discusses paraconsistent logic.
  • Putnam, H. (1960). “Minds and Machines,” Dimensions of Mind. A Symposium (ed. S. Hook). London: Collier-Macmillan.
    • Raises the consistency issue for Lucas.
  • Rogers, H. (1957). Theory of Recursive Functions and Effective Computability (mimeographed).
    • Early mention of the issue of consistency for Gödelian arguments.
  • Wang, H. (1981).  Popular Lectures on Mathematical Logic. Mineola, NY: Dover.
    • Textbook on formal logic.
  • Whitehead, A. N. and Russell, B. (1910, 1912, 1913). Principia Mathematica, 3 vols, Cambridge: Cambridge University Press.
    • An attempt to base mathematics on logic.
  • Whiteley, C. (1962). “Minds, Machines and Gödel: A Reply to Mr. Lucas,” Philosophy 37:61-62.
    • Humans are limited in ways similar to machines.
  • Wright, C. (1995).  “Intuitionists are Not Turing Machines,” Philosophia Mathematica 3(1):86-102.
    • An intuitionist who advances the Lucas-Penrose argument can overcome the worry over our consistency.

Author Information

Jason Megill
Carroll College
U. S. A.

Philosophy of Medicine

While philosophy and medicine, beginning with the ancient Greeks, enjoyed a long history of mutually beneficial interactions, the professionalization of “philosophy of medicine” is a nineteenth-century event.  One of the first academic books on the philosophy of medicine in modern terms was Elisha Bartlett’s Essay on the Philosophy of Medical Science, published in 1844.  In the mid to late twentieth century, philosophers and physicians contentiously debated whether philosophy of medicine was a separate discipline distinct from the disciplines of either philosophy or medicine.  The twenty-first-century consensus, however, is that it is a distinct discipline with its own set of problems and questions.  Professional journals, book series, individual monographs, as well as professional societies and meetings are all devoted to discussing and answering that set of problems and questions.  This article examines—by utilizing a traditional approach to philosophical investigation—all aspects of the philosophy of medicine and the attempts of philosophers to address its unique set of problems and questions.  To that end, the article begins with metaphysical problems and questions facing modern medicine such as reductionism vs. holism, realism vs. antirealism, causation in terms of disease etiology, and notions of disease and health.  The article then proceeds to epistemological problems and questions, especially rationalism vs. empiricism, sound medical thinking and judgments, robust medical explanations, and valid diagnostic and therapeutic knowledge.  Next, it addresses the vast array of ethical problems and questions, particularly with respect to principlism and the patient-physician relationship.  The article concludes with a discussion of what constitutes the nature of medical knowledge and practice, in light of recent trends towards both evidence-based and patient-centered medicine.  Finally, even though a vibrant literature on nonwestern traditions is available, this article is concerned only with the western tradition of philosophy of medicine (Kaptchuk, 2000; Lad, 2002; Pole, 2006; Unschuld, 2010).

Table of Contents

  1. Metaphysics
    1. Reductionism vs. Holism
    2. Realism vs. Antirealism
    3. Causation
    4. Disease and Health
  2. Epistemology
    1. Rationalism vs. Empiricism
    2. Medical Thinking
    3. Explanation
    4. Diagnostic and Therapeutic Knowledge
  3. Ethics
    1. Principlism
    2. Patient-Physician Relationship
  4. What is Medicine?
  5. References and Further Reading

1. Metaphysics

Traditionally, metaphysics pertains to the analysis of objects or events and the forces or factors causing or impinging upon them.  One branch of metaphysics, denoted ontology, investigates problems and questions concerning the nature and existence of objects or events and their associated forces or factors.  For philosophy of medicine in the twenty-first century, the two chief objects are the patient’s disease and health, along with the forces or factors responsible for causing them.  “What is/causes health?” or “What is/causes disease?” are contentious questions for philosophers of medicine.  Another branch of metaphysics involves the examination of presuppositions that inform a given ontology.  For philosophy of medicine, the most controversial debate centers around the presuppositions of reductionism and holism.  Questions like “Can a disease be sufficiently reduced to its elemental components?” or “Is the patient more than simply the sum of physical parts?” drive discussion among philosophers of medicine.  In addition, the debate between realism and antirealism has important traction within the field.  This debate centers on questions like, “Are disease-causing entities real?” or “Are these entities socially constructed?”   This section first explores the reductionism-holism and realism-antirealism debates, along with the notion of causation, before turning attention to the notions of disease and health.

a. Reductionism vs. Holism

The reductionism-holism debate enjoys a lively history, especially from the middle to the latter part of the twentieth century.  Reductionism, broadly construed, is the reduction of complex objects or events to their component parts.  In other words, the properties of the whole are simply the addition or summation of the properties of the individual parts.  Such reductionism is often called metaphysical or ontological reductionism to distinguish it from methodological or epistemological reductionism.  Methodological reductionism refers to the investigation of complex objects and events and their associated forces or factors by using technology that isolates and analyzes individual components only.  Epistemological reductionism involves the explanation of complex objects and events and their associated forces or factors in terms of their individual components only.  For the life sciences vis-à-vis reductionism, an organism is made of component parts like bio-macromolecules and cells, whose properties are sufficient for investigating and explaining the organism, if not life itself.  Life scientists often sort these parts into a descending hierarchy: beginning with the organism, they proceed downward through organ systems, individual organs, tissues, cells, and macromolecules until reaching the atomic and subatomic levels.  Albert Szent-Gyorgyi once remarked that, as he descended this hierarchy in his quest to understand living organisms, “life slipped between [his] fingers.”  Holism, however, is the position that the properties of the whole are not reducible to properties of its individual components.  Jan Smuts (1926) introduced the term in the early part of the twentieth century, especially with respect to biological evolution, to account for living processes—without the need for immaterial components.

The reductionism-holism debate is relevant to both medical knowledge and practice.  Reductionism influences not only how a biomedical scientist investigates and explains disease, but also how a clinician diagnoses and treats it.  For example, if a biomedical researcher believes that the underlying cause of a mental disease is dysfunction in brain processes or mechanisms, especially at the molecular level, then that disease is often investigated exclusively at that level.  In turn, a clinician classifies mental diseases in terms of brain processes or mechanisms at the molecular level, such as depletion in levels of the neurotransmitter serotonin.  Subsequently, the disease is treated pharmacologically by prescribing drugs to elevate the low levels of the neurotransmitter in the depressed brain to levels considered normal within the non-depressed brain.  Although the assumption of reductionism produces a detailed understanding of diseases in molecular or mechanistic terms, many clinicians and patients are dissatisfied with the assumption.  Both clinicians and patients feel that the assumption excludes important information concerning the nature of the disease, as it influences both the patient’s illness and life experience.  Such information is vital for treating patients with chronic cases, rather than simply treating the disease.  In other words, patients often feel as if physicians reduce them to their disease or diseased body part rather than focusing on the overall illness experience.  The assumption of holism best suits the approach to medical knowledge and practice that includes the patient’s illness experience.  Rather than striving exclusively for restoration of the patient to a pre-diseased state, the clinician assists the patient in redefining what the illness means for their life.  The outcome is not necessarily a physical cure but rather a restoration of wholeness from the fragmentation in the patient’s life caused by the illness.

b. Realism vs. Antirealism

Realism is the philosophical notion that observable objects and events are actual objects and events, independent of the person observing them.  In other words, since it exists outside the minds of those observing it, reality does not depend on conceptual structures or linguistic formulations.  Proponents of realism also hold that even unobservable objects and events, like subatomic particles, exist.  Historically, realists believe that universals—abstractions of objects and events—are separate from the mind cognizing them.  For example, terms like bacteria and cell denote real objects in the natural world, which exist apart from the human minds trying to examine and understand them.  Furthermore, scientific investigations into natural objects like bacteria and cells yield true accounts of these objects.  Anti-realism, on the other hand, is the philosophical notion that observable objects and events are not actual objects and events as observed by a person but rather depend upon the person observing them.  In other words, these objects and events are mind-dependent—not mind-independent.  Anti-realists deny the existence of objects and events apart from the mind cognizing them.  Human minds construct these objects and events based on social or cultural values.  Historically, anti-realists subscribe to nominalism, in which universals do not exist but predicates of particular objects do.  Finally, they question the truth of scientific accounts of the world since no mind-independent framework can be correct absolutely.  Anti-realists hold that such truth is framework-dependent, so when one changes the framework, truth itself changes.

The debate between realists and anti-realists has important implications for philosophers of medicine, as well as for the practice of clinical medicine.  For example, a contentious issue is whether disease entities or conditions for the expression of a disease are real or not.  Realists argue that such entities or conditions are real and exist independent of medical researchers investigating them, while anti-realists deny their reality and existence.  Take the example of depression.  According to realists, the neurotransmitter serotonin is a real entity that exists in a real brain—apart from clinical investigations or investigators.  A low level of that transmitter is a real condition for the disease’s expression.  For anti-realists, however, serotonin is a laboratory or clinical construct based on experimental or clinical conditions.  Changes in that construct lead to changes in understanding the disease.  The debate is not simply academic but has traction for the clinic.  Clinical realists believe that restoring serotonin levels cures depression.  Clinical anti-realists are less confident about restoring levels of the neurotransmitter to effect a cure.  Rather, they believe that neither the diagnosis nor the treatment of depression reduces to simplistic measurements of serotonin levels.  Importantly, the anti-realists do not harbor the hope that additional information is likely to remedy the clinical problems associated with serotonin replacement therapy.  The problems are ontological to their core.  The neurotransmitter is a mental construct entirely dependent on efforts to investigate and translate laboratory investigations into clinical practice.

c. Causation

Causation has a long philosophical history, beginning with the ancient Greek philosophers.  Aristotle in particular provided a robust account of causation in terms of material cause, what something is made of; formal cause, how something is made; efficient cause, forces responsible for making something; and, final cause, the purpose for or end to which something is made.  In the modern period, Francis Bacon pruned the four Aristotelian causes to material and efficient causation.  With the rise of British empiricism, especially with David Hume’s philosophical analysis of causation, causation came under critical scrutiny.  For Hume, in particular, causation is simply the constant conjunction of objects and events, with no ontological significance in terms of hooking up the cause with the effect.  According to Hume, custom habituates us to assume something real exists between the cause and its effect.  Although Hume’s skepticism of causal forces prevailed in his personal study, it did not prevail in the laboratory.  During the nineteenth century, with the maturation of the scientific revolution, only one cause survived to account for natural entities and phenomena—the material cause, which is not strictly Aristotle’s original notion of material causation.  The modern notion involves mechanisms and processes and thereby eliminates efficient causation.  The material cause became the engine driving mechanistic ontology.  During the twentieth century, after the Einsteinian and quantum revolutions, mechanistic ontology gave way to physical ontology that included forces such as gravity as causal entities.  Today, efficient causation is the purview of philosophers, who argue endlessly about it, while scientists take physical causation as unproblematic in constructing models of natural phenomena based on cause and effect.

For philosophers of medicine, causation is an important notion for analyzing both disease etiology and therapeutic efficacy (Carter, 2003).  At the molecular level, causation operates physico-chemically and is used to investigate and explain disease mechanisms.  In the example of depression, serotonin is a neurotransmitter that binds to specific receptors within certain brain locations, which in turn causes a cascade of molecular events that helps maintain mental health.  This underlying causal or physical mechanism is critical not only for understanding the disease, but also for treating it.  Medical causation also operates at other levels.  For infectious diseases, the Henle-Koch postulates are important in determining the causal relationship between an infectious microorganism and an infected host (Evans, 1993).  To secure that relationship, the microorganism must be associated with every occurrence of the disease, be isolated from the infected host, be grown under in vitro conditions, and be shown to elicit the disease upon infection of a healthy host.  Finally, medical causation operates at the epidemiological or population level.  Here, Austin Bradford Hill’s nine criteria are critical for establishing a causal relationship between environmental factors and disease incidence (Hill, 1965).  For example, the relationship between cigarette smoking and lung cancer involves the strength of the association between smoking and lung cancer, as well as the consistency of that association and its plausibility in light of known biological mechanisms.  These examples establish the importance of causal mechanisms involved in disease etiology and treatment, especially for diseases with an organic basis; however, some diseases, particularly mental disorders, do not reduce to such readily apparent causality.  Instead, they represent rich areas of investigation for philosophers of medicine.

d. Disease and Health

“What is disease?” is a contentious question among philosophers of medicine.  These philosophers distinguish among four different notions of disease.  The first is an ontological notion.  According to its proponents, disease is a palpable object or entity whose existence is distinct from that of the diseased patient.  For example, disease may be the condition brought on by the infection of a microorganism, such as a virus.  Critics, who champion a physiological notion of disease, argue that advocates of the ontological notion confuse the disease condition, which is an abstract notion, with a concrete entity like a virus.  In other words, proponents of the first notion often conflate the disease’s condition with its cause.  Supporters of this second notion argue that disease represents a deviation from normal physiological functioning.  The best-known defender of this notion is Christopher Boorse (1987), who defines disease value-neutrally as a departure from a statistical norm with respect to “species design.”  Critics of this notion, however, cite the ambiguity of the term “norm” with respect to a reference class.  Instead of a statistical norm, evolutionary biologists propose a third notion of disease as a maladaptive mechanism, which factors in the organism’s biological history.  Critics of this third notion claim that disease manifests itself, especially clinically, in the individual patient and not in a population.  A population may be important to epidemiologists but not to clinicians, who must treat individual patients whose manifestations of a disease, and responses to therapy for it, may differ significantly from one another.  The final notion of disease addresses this criticism.  The genetic notion claims that disease is a mutation in, or the absence of, a gene.  Its champions assert that each patient’s genomic constitution is unique.  By knowing that genomic constitution, clinicians are able both to diagnose the patient’s disease and to tailor a specific therapeutic protocol.
Critics of the genetic notion claim that disease, especially its experience, cannot be reduced to nucleotide sequences.  Instead, it requires a larger notion including social and cultural factors.

“What is health?” is an equally contentious question among philosophers of medicine.  The most common notion of health is simply the absence of disease.  Health, according to proponents of this notion, represents a default state as opposed to pathology.  In other words, if an organism is not sick, then it must be healthy.  Unfortunately, this notion does not distinguish among various grades of health or preconditions toward illness.  For example, as the cells responsible for serotonin stop producing the neurotransmitter, a person becomes prone to depression.  Such a person is not as healthy as a person who is making sufficient amounts of serotonin.  An adequate understanding of health should account for such preconditions.  Moreover, health as absence of disease often depends upon personal and social values concerning what health is.  Again, ambiguity enters into defining health given these values: one person’s conception of health might differ markedly from another’s.  The second notion of health does permit distinctions among grades of health, in terms of quantifying it, and does not depend upon personal or social values.  Proponents of this notion, such as Boorse, define health in terms of normal functioning, where the normal reflects a statistical norm with respect to species design.  For example, a person with low levels of serotonin who is not clinically symptomatic in terms of depression is not as healthy as a person with statistically normal neurotransmitter levels.  Criticisms of the second notion revolve around its failure to incorporate the social dimension of health; some critics jettison the notion altogether, opting instead for the notion of wellbeing.  Wellbeing is a normative notion that combines both a person’s values, especially in terms of his or her life goals, and objective physiological states.  Because normative notions incorporate a person’s value system, they are often difficult to define and defend, since values vary from person to person and culture to culture.
Proponents of this notion include Lennart Nordenfelt (1995), Carol Ryff and Burton Singer (1998), and Carolyn Whitbeck (1981).

2. Epistemology

Epistemology is the branch of philosophy concerned with the analysis of knowledge, in terms of both its origins and justification.  The overarching question is, “What is knowing or knowledge?”  Subsidiary questions include, “How do we know that we know?”; “Are we certain or confident in our knowing or knowledge?”; “What is it we know when we claim we know?” Philosophers generally distinguish three kinds or theories of knowledge.  The first pertains to knowledge by acquaintance, in which a knowing or an epistemic agent is familiar with an object or event.  It is descriptive in nature, that is, a knowing-about knowledge.  For example, a surgeon is well acquainted with the body’s anatomy before performing an operation.  The second is competence knowledge, which is the species of knowledge useful for performing a task skillfully.  It is performative or procedural in nature, that is, a knowing-how knowledge.  Again, by way of example, the surgeon must know how to perform a specific surgical procedure before executing it.  The third, which interests philosophers most, is propositional knowledge.  It pertains to certain truths or facts.  As such, philosophers traditionally call this species of knowledge, “justified true belief.”  Rather than descriptive or performative in nature, it is explanatory, or a knowing-that knowledge.  Again, by way of illustration, the surgeon must know certain facts or truths about the body’s anatomy, such as the physiological function of the heart, before performing open-heart surgery.  This section begins with the debate between rationalists and empiricists over the origins of knowledge, before turning to medical thinking and explanation and then concluding with the nature of diagnostic and therapeutic knowledge.

a. Rationalism vs. Empiricism

The rationalism-empiricism debate has a long history, beginning with the ancient Greeks, and focuses on the origins of knowledge and its justification.  “Is that origin rational or empirical in nature?”  “Is knowledge deduced or inferred from first principles or premises?”  “Or is it the result of careful observation and experience?”  These are just a few of the questions fueling the debate, along with similar questions concerning epistemic justification.  Rationalists, such as Socrates, Plato, Descartes, and Kant, appeal to reason as both the origin and the justification of knowledge.  As such, knowledge is intuitive in nature and, in contrast to the senses or perception, exclusively the product of the mind.  Given the corruptibility of the senses, argue the rationalists, no one can guarantee or warrant knowledge—except through the mind’s capacity to reason.  In other words, rationalism provides a firm foundation not only for the origin of knowledge but also for warranting its truth.  Empiricists, such as Aristotle, Avicenna, Bacon, Locke, Hume, and Mill, dismiss the rationalists’ fears and exalt observation and experience with respect to the origin and justification of knowledge.  According to empiricists, the mind is a blank slate (Locke’s tabula rasa) upon which observations and experiences inscribe knowledge.  Here, empiricists champion the role of experimentation in the origin and justification of knowledge.

The rationalism-empiricism debate originates specifically with ancient Greek and Roman medicine.  The Dogmatic school of medicine, founded by Hippocrates’ son and son-in-law in the fourth century BCE, claimed that reason is sufficient for understanding the underlying causes of diseases and thereby for treating them.  Dogmatics relied on theory, especially the humoral theory of health and disease, to practice medicine.  The Empiric school of medicine, on the other hand, asserted that only observation and experience, not theory, provide a sufficient foundation for medical knowledge and practice.  Theory is an outcome of medical observation and experience, not their foundation.  Empirics relied on palpable, not underlying, causes to explain health and disease and to practice medicine.  Philinus of Cos and his successor Serapion of Alexandria, both third century BCE Greek physicians, are credited with founding the Empiric school, which included the influential Alexandrian school.  A third school of medicine arose in response to the debate between the Dogmatics and Empirics, the Methodic school of medicine.  In contrast to Dogmatics, and in agreement with Empirics, Methodics argued that underlying causes are superfluous to the practice of medicine.  Rather, the patient’s immediate symptoms, along with common sense, are sufficient and provide the necessary information to treat the patient.  Thus, in contrast to Empirics, Methodics argued that experience is unnecessary to treat disease and that the disease’s symptoms provide all the knowledge needed to practice medicine.

The Dogmatism-Empiricism debate, with Methodism representing a minority position, raged on and was still lively in the seventeenth and eighteenth centuries.  For example, Giorgio Baglivi (1723), an Armenian-born seventeenth century Italian physician, decried the polarization of physicians along dogmatic and empiric boundaries and recommended resolving the debate by combining the two.  Contemporary philosophical commentators on medicine recognize the importance of both epistemic positions, and several commentators propose synthesis of them.  For example, Jan van Gijn (2005) advocates an “empirical cycle” in which experiments drive hypothetical thinking, which in turn results in additional experimentation.  Although no clear resolution of the rationalism-empiricism debate in medicine appears on the immediate horizon, the debate does emphasize the importance of and the need for additional philosophical analysis of epistemic issues surrounding medical knowledge.

b. Medical Thinking

“How doctors think” is the title of two twenty-first century books on medical thinking.  The first is by a medical humanities scholar, Kathryn Montgomery (2006).  Montgomery addresses vital questions about how physicians go about making clinical decisions when often faced with tangible uncertainty.  She argues for medical thinking based not on science but on Aristotelian phronesis, or practical or intuitive reasoning.  The second book is by a practicing clinician, Jerome Groopman (2007).  Groopman also addresses questions about medical thinking, and he too pleads for clinical reasoning based on practical or intuitive foundations.  Both books call for introducing the art of medical thinking to offset an overdependence on the science of medical thinking.  In general, medical thinking reflects the cognitive faculties clinicians use to make rational decisions about what ails patients and how best to go about treating them safely and effectively.  During the twentieth century, that thinking mimicked the technical thinking of natural scientists—and for good reason.  As Paul Meehl (1954) convincingly demonstrated, statistical reasoning in the clinical setting outperforms intuitive clinical thinking.  Although Montgomery and Groopman attempt to swing the pendulum back toward the art of medical thinking, the risk of medical errors often associated with such thinking demands clearer analysis of the science of medical thinking.  That analysis centers traditionally on the logical and algorithmic methods of clinical judgment and decision-making, to which the twenty-first century has turned.

Georg Stahl’s De logico medica, published in 1702, is one of the first modern treatises on medical logic.  However, not until the nineteenth century did logic of medicine become an important area of sustained analysis or have an impact on medical knowledge and practice.  For example, Friedrich Oesterlen’s Medical Logic, published in English translation in 1855, promoted medical logic not only as a tool for assessing the formal relationship between propositional statements and thereby avoiding clinical error, but also for analyzing the relationship among medical facts and evidence in the generation of medical knowledge.  Oesterlen’s logic of medicine was indebted to the Paris school of clinical medicine, especially Pierre Louis’ numerical method (Morabia, 1996).  Contemporary logic of medicine continues this tradition, especially in terms of statistical analysis of experimental and clinical data.  For example, Edmond Murphy’s The Logic of Medicine (1997) represents an analysis of logical and statistical methods used to evaluate both experimental and clinical evidence.  Specifically, Murphy identifies several “rules of evidence” critical for interpreting such evidence as medical knowledge.  A particularly vigorous debate concerns the role of frequentist vs. Bayesian statistics in determining the statistical significance of data from clinical trials.  The logic of medicine, then, represents an important and a fruitful discipline in which medical scientists and clinical practitioners can detect and avoid errors in the generation and substantiation of medical knowledge and in its application or translation to the clinic.
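The frequentist-Bayesian contrast can be made concrete with a toy example. The sketch below, using hypothetical trial counts, summarizes the same data in the two styles: a frequentist two-proportion z statistic, and the difference of Bayesian posterior mean response rates under uniform Beta(1, 1) priors.

```python
import math

# Hypothetical trial data: 30/50 responders on treatment vs 20/50 on control.

def two_proportion_z(x1, n1, x2, n2):
    """Frequentist summary: z statistic for the difference of two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)             # pooled proportion under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

def beta_posterior_mean(successes, failures, prior_a=1, prior_b=1):
    """Bayesian summary: posterior mean of a rate under a Beta(a, b) prior."""
    return (prior_a + successes) / (prior_a + prior_b + successes + failures)

z = two_proportion_z(30, 50, 20, 50)
post_trt = beta_posterior_mean(30, 20)   # treatment arm: 30 responders, 20 not
post_ctl = beta_posterior_mean(20, 30)   # control arm: 20 responders, 30 not
print(round(z, 2), round(post_trt - post_ctl, 3))  # 2.0 0.192
```

The frequentist asks how surprising the observed difference would be if the arms were identical; the Bayesian reports updated beliefs about each arm's response rate. The disagreement among philosophers of medicine concerns which summary should license clinical conclusions.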

Philosophers of medicine actively debate the best courses of action for making clinical decisions.  Clinical judgment is an informal process in which a clinician assesses a patient’s clinical signs and symptoms to come to an accurate judgment about what is ailing the patient.  To make such a judgment requires insight into the intelligibility of the clinical evidence.  The issue for philosophers of medicine is what role intuition should play in clinical judgment, given the ideals of objective scientific reasoning and judgment.  Meehl’s work on clinical judgment, as noted earlier, cast suspicion on the effectiveness of intuition in clinical judgment; and yet some philosophers of medicine champion the intuitive dimension of such decision-making.  The debate often reduces to whether clinical judgment is an art or a science; however, some, like Alvan Feinstein (1994), argue for a reconciliatory position between the two.  Once a physician comes to a judgment, the physician must then decide how to proceed clinically.  Although clinical decision making, with its algorithmic-like decision trees, is a formal procedure compared to clinical judgment, philosophers of medicine actively argue about the structure of these trees and the procedures for generating and manipulating them.  The main issue is how best to define the utility grounding the trees.
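One standard way the trees are formalized, and the place where utility does its work, is expected-utility calculation over chance branches. A minimal sketch follows; the probabilities, the two options, and the 0-100 quality-of-life utility scale are hypothetical placeholders, not a clinical recommendation.

```python
# Expected utility over a clinical decision tree (all figures hypothetical).

def expected_utility(branches):
    """Each branch is a (probability, utility) pair; returns the weighted sum."""
    return sum(p * u for p, u in branches)

# Decision: operate vs. medicate, with hypothetical outcome probabilities
# and utilities on a 0-100 quality-of-life scale.
operate  = expected_utility([(0.85, 90), (0.10, 40), (0.05, 0)])  # cure/complication/death
medicate = expected_utility([(0.60, 80), (0.40, 50)])             # remission/chronic illness

best = max([("operate", operate), ("medicate", medicate)], key=lambda t: t[1])
print(best)  # ('operate', 80.5) vs. medicate at 68.0
```

The philosophical dispute the text describes is upstream of this arithmetic: whose values fix the utility numbers, and whether a single scale can capture them at all.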

c. Explanation

Epistemologists are generally interested in the nature of propositions, especially the explanatory power of those justified true beliefs.  To know something truly is to understand and explain the hidden causes behind it.  Explanations operate at a variety of levels.  For example, neuroscientific explanations account for human behavior in terms of neurological activity, while astrological explanations account for such behavior with respect to astronomical activity.  Philosophers, especially philosophers of science, distinguish several kinds of explanation, including covering law explanation, causal explanation, and inference to the best explanation.  In twenty-first century medicine, explanations are important for understanding disease mechanisms and, in understanding those mechanisms, for developing therapeutic modalities to treat the patient’s disease.  This line of reasoning runs deep in medical history, beginning, as we have seen, with the Dogmatics.  Twenty-first century philosophers of medicine utilize the explanatory schemes developed by philosophers of science to account for medical phenomena.  The following section briefly examines each of these explanatory schemes and their relevance for medical explanations.

Carl Hempel and Paul Oppenheim introduced covering law explanation in the late 1940s.  According to Hempel and Oppenheim (1948), explanations function as arguments, with the conclusion or explanandum—that which is explained—deduced or induced from premises or explanans—that which does the explaining.  At least one of the explanans must be a scientific law, which can be a mechanistic or a statistical law.  Although covering law explanations are useful for those medical phenomena that reduce to mechanistic or statistical laws, such as explaining cardiac output in terms of heart rate and stroke volume, not all such phenomena lend themselves to such reductive explanations.  The next explanatory scheme, causal explanation, attempts to rectify that problem.  Causal explanation relies on the temporal or spatial regularity of phenomena and events and utilizes antecedent causes to explain phenomena and events.  The explanations can be simple in nature, with only a few antecedent causes arranged linearly, or very complex, with multiple antecedent causes operating in a matrix of interrelated and integrated interactions.  For example, causal explanations of cancer involve at least six distinct sets of genetic factors controlling cellular phenomena such as cell growth and death, immunological response, and angiogenesis.  Finally, Gilbert Harman articulated the contemporary form of inference to the best explanation, or IBE, in the 1960s.  Harman (1965) proposed that, based on the totality of evidence, one must choose the explanation that best accounts for that evidence and reject its competitors.  The criteria for “bestness” range from the explanation’s simplicity to its generality or consilience in accounting for analogous phenomena.  Peter Lipton (2004) offers Ignaz Semmelweis’ explanation of the increased mortality of women giving birth in one ward compared to another as an example of IBE, and Donald Gillies (2005) provides an analysis of it in terms of a Kuhnian paradigm.
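The cardiac-output case mentioned above can be laid out in covering-law form: a law (cardiac output equals heart rate times stroke volume) plus particular facts jointly entail the explanandum. A small sketch, with illustrative resting values:

```python
# Covering-law schema applied to the cardiac-output example.
# Explanans 1 (law): cardiac output = heart rate x stroke volume.
# Explanans 2 (particular facts): HR = 70 beats/min, SV = 70 ml/beat.
# Explanandum: the patient's cardiac output, which follows deductively.

def cardiac_output_l_per_min(heart_rate_bpm, stroke_volume_ml):
    """The covering law, with unit conversion from ml/min to L/min."""
    return heart_rate_bpm * stroke_volume_ml / 1000

co = cardiac_output_l_per_min(70, 70)  # illustrative resting values
print(co)  # 4.9 (liters per minute)
```

The deductive step is trivial arithmetic; what makes it a covering-law explanation is that the particular fact is subsumed under a general physiological law.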

d. Diagnostic and Therapeutic Knowledge

Diagnostic knowledge pertains to the clinical judgments and decisions made about what ails a patient.  Epistemologically, the issues concerning such knowledge are its accuracy and certainty.  Central to both concerns are clinical symptoms and signs.  Clinical symptoms are subjective manifestations of the disease that the patient articulates during the medical interview, while clinical signs are objective manifestations that the physician discovers during the physical examination.  What is important for the clinician is how best to quantify those signs and symptoms and then to classify them in a robust nosology, or disease taxonomy.  The clinical strategy is to collect empirical data through the physical examination and laboratory tests, to deliberate on those data, and then to draw a conclusion as to what the data mean in terms of the patient’s disease condition.  The strategy is fraught with questions for philosophers of medicine, from “What constitutes symptoms and signs, and how do they differ?” to “How best to measure and quantify the signs and to classify the diseases?”  Philosophers of medicine debate the answers to these questions, but the discussion among philosophers of science over the strategy by which natural scientists investigate the natural world guides much of the debate.  Thus, a clinician generates hypotheses about a patient’s disease condition, which he or she then assesses by conducting further medical tests.  The result of this process is a differential diagnosis, which represents a set of hypothetical explanations for the patient’s disease condition.  The clinician then narrows this set to the one diagnostic hypothesis that best explains most, and hopefully all, of the relevant clinical evidence.  The epistemic mechanism that accounts for this process, and the factors involved in it, remain unclear.  Philosophers of medicine especially dispute the role of tacit factors in the process.
Finally, the heuristics of the process are an active area of philosophical investigation in terms of identifying rules for interpreting clinical evidence and observations.

Therapeutic knowledge refers to the procedures and modalities used to treat patients.  Epistemologically, the issues concerning such knowledge are its efficacy and safety.  Efficacy refers to how well the pharmacological drug or surgical procedure treats or cures the disease, while safety refers to possible patient harm caused by side effects.  The questions animating discussion among philosophers of medicine range from “What is a cure?” to “How does one establish or justify the efficacy of a drug or procedure?”  The latter question occupies a considerable amount of the philosophy of medicine literature, especially concerning the nature and role of clinical trials.  Although basic medical research into the etiology of disease mechanisms is important, the translation of that research, and the philosophical problems that arise from it, are foremost on the agenda for philosophers of medicine.  The origin of clinical trials dates at least to the eighteenth century, but not until the twentieth century was consensus reached over the structure of these trials.  Today, four phases define a clinical trial.  During the first phase, clinical investigators establish the maximum dose of a drug that healthy volunteers can tolerate.  The next phase involves a small patient population to determine the drug’s efficacy and safety.  In the third phase, which is the final phase required to obtain FDA approval, clinical investigators utilize a large and relatively diverse patient population to establish the drug’s efficacy and safety.  A fourth phase is possible, in which clinical investigators chart the course of the drug’s use and effectiveness in a diverse patient population over a longer period.
Philosophers of medicine actively discuss several features of clinical trials: randomization, in which test subjects are arbitrarily assigned to either the experimental or the control group; blinding of patients and physicians to that assignment, in order to remove assessment bias; concurrent control, in which the control group does not receive the experimental treatment that the test group receives; and the placebo effect, or the benefit patients anticipate simply from receiving treatment.  The most pressing problem, however, is the type of statistics utilized for analyzing clinical trial evidence: some philosophers of medicine champion frequentist statistics, while others champion Bayesian statistics.
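The design features just listed can be sketched programmatically. The toy example below (hypothetical subject IDs, Python's standard random module) shows randomized assignment to concurrent arms; the comments mark where blinding and concurrent control enter the design.

```python
import random

def randomize(subject_ids, seed=None):
    """Randomly assign each subject to the treatment or control arm."""
    rng = random.Random(seed)
    return {sid: rng.choice(["treatment", "control"]) for sid in subject_ids}

subjects = [f"S{i:03d}" for i in range(1, 9)]   # hypothetical subject IDs
allocation = randomize(subjects, seed=42)       # seed fixed for reproducibility

# Blinding: assessors and patients see only subject IDs, never this
# allocation table.  Concurrent control: the "control" arm receives a
# placebo in place of the experimental treatment, over the same period
# as the "treatment" arm, so both arms face the same background conditions.
print(sorted(allocation.items()))
```

The philosophical debates concern why these features confer evidential weight, not the mechanics; but the mechanics make clear what randomization and blinding actually hold fixed.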

3. Ethics

Ethics is the branch of philosophy concerned with the right or moral conduct or behavior of a community and its members.  Traditionally, philosophers divide ethics into descriptive, normative, and applied ethics.  Descriptive ethics involves detailing ethical conduct without evaluating it against moral codes of conduct, whereas normative ethics pertains to how a community and its members should act in given situations, generally in terms of an ethical code.  This code is often a product of certain values held in common within a community.  For example, ethical codes against murder reflect the value community members place on not taking human life without just cause.  Besides values, ethicists base normative ethics on a particular theoretical perspective.  Within western culture, three such perspectives predominate.  The oldest ethical theory—although it experienced a renaissance in the late twentieth century—is virtue ethics.  Virtue ethics claims that ethical conduct is the product of a moral agent who possesses certain virtues, such as prudence, courage, temperance, or justice—the traditional cardinal virtues.  The second ethical theory is deontology, which bases moral conduct on adherence to ethical precepts and rules reflecting moral duties and obligations.  The third ethical theory is consequentialism, which founds moral conduct on the outcome or consequence of an action.  The chief example of this theory is utilitarianism, or the maximization of an action’s utility, which claims that an action is moral if it realizes the greatest amount of happiness for the greatest number of community members.  Finally, applied ethics is the practical use of ethics within a profession such as business or medicine.  Medical or biomedical ethics is a form of applied ethics and is a major feature of the landscape of twenty-first century medicine.  Historically, ethical issues have been a conspicuous component of medicine since Hippocrates.
Throughout medical history several important treatises on medical ethics have been published.  Probably the best-known example is Thomas Percival’s Medical Ethics, published in 1803, which influenced the development of the American Medical Association’s ethical code.  Today, medical ethics is founded not on any particular ethical theory but on four ethical principles.

a. Principlism

The origins of the predominant system of contemporary medical or biomedical ethics date to 1932.  In that year, the Public Health Service, in conjunction with the Tuskegee Institute in Macon County, Alabama, undertook a clinical study to document the course of untreated syphilis in test subjects.  The subjects were African-American males.  Over the next forty years, healthcare professionals observed the course of the disease, even after the introduction of antibiotics.  Not until 1972 did the study end, and only after public outcry from newspaper articles—especially an article in the New York Times—reporting the study’s atrocities.  What made the study so atrocious was that the healthcare professionals misinformed the subjects about treatment or failed to treat the subjects with antibiotics.  To ensure that such flagrant abuse of test subjects did not happen again, the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research met from February 13-16, 1976.  At the Smithsonian Institution’s Belmont Conference Center in Maryland, the commission drafted guidelines for the treatment of research subjects.  The outcome was a report entitled Ethical Principles and Guidelines for the Protection of Human Subjects of Research, known simply as the Belmont Report, published in 1979.  The report lists and discusses several ethical principles necessary for protecting human test subjects and patients from unethical treatment at the hands of healthcare researchers and providers.  The first is respect for persons: researchers must respect the test subject’s autonomy to make informed decisions based on accurate and truthful information concerning the test study’s procedures and risks.  The next principle is beneficence, or maximizing the benefit-to-risk ratio for the test subject.
The final ethical principle is justice, which ensures that the burdens and benefits of research are equitably distributed among the general population and that no one segment of it bears an unreasonable share of them.

One of the framers of the Belmont Report was a young philosopher named Tom Beauchamp.  While working on the report, Beauchamp, in collaboration with a colleague, James Childress, was also writing a book on the role of ethical principles in guiding medical practice.  Rather than ground biomedical ethics in any particular ethical theory, such as deontology or utilitarianism, Beauchamp and Childress looked to ethical principles to guide and evaluate moral decisions and judgments in healthcare.  The fruit of their collaboration was Principles of Biomedical Ethics, first published in the same year as the Belmont Report, 1979.  In the book, Beauchamp and Childress apply the ethical principles approach of the report to regulate the activities of biomedical researchers and to assist physicians in deliberating over the ethical issues associated with the practice of clinical medicine.  However, besides the three guiding principles of the report, they added a fourth—nonmaleficence.  Moreover, the first principle became patient autonomy, rather than respect for persons as denoted in the report.  For the autonomy principle, Beauchamp and Childress stress the patient’s liberty to make critical decisions concerning treatment options, which have a direct impact on the patient’s own values and life plans.  The authors’ second principle, nonmaleficence, instructs the healthcare provider to avoid doing harm to the patient, while the next principle, beneficence, emphasizes removing harm from and doing good to the patient.  Beauchamp and Childress articulate the final principle, justice, in terms reminiscent of the Belmont Report, with respect to the equitable distribution of risks and benefits, as well as healthcare resources, among both the general and patient populations.  The bioethical community quickly dubbed the Beauchamp and Childress approach to biomedical ethics “principlism.”

Principlism’s impact on the bioethical discipline is unparalleled.  Beauchamp and Childress’ book is now in its fifth edition and is a standard textbook for teaching biomedical ethics at medical schools and in graduate programs in medical ethics.  One of the chief advocates of principlism is Raanan Gillon (1986), who expanded the scope of the principles to aid in their application to difficult bioethical issues, especially where the principles might conflict with one another.  However, principlism is not without its critics.  A fundamental complaint is the lack of theoretical support for the four principles, especially when the principles collide with one another in their application to a bioethical problem.  In practice, ethicists and clinicians can apply the principles in an algorithmic manner to justify practically any ethical position on a biomedical problem.  What critics want is a unified theoretical basis for grounding the principles, in order to avoid or adjudicate conflicts among them.  Moreover, Beauchamp and Childress’ emphasis on patient autonomy has had serious ramifications for the physician’s role in medical care, with that role at times marginalized relative to the patient’s.  Alfred Tauber (2005), for instance, charges that such autonomy is itself “sick” and often results in patients being abandoned to their own resources, with detrimental outcomes for them.  In response to their critics, Beauchamp and Childress argue that no single ethical theory is available to unite the four principles so as to avoid or adjudicate conflicts among them.  However, they did introduce, in the fourth edition of Principles, a notion of common morality—a set of shared moral standards—to provide theoretical support for the principles.  Unfortunately, their notion of common morality lacks the theoretical robustness needed to unify the principles effectively.
Although principlism still serves a useful function in biomedical ethics, particularly in the clinic, early twenty-first century trends towards healthcare ethics and global bioethics have made its future unclear.

b. Patient-Physician Relationship

According to many philosophers of medicine, medicine is more than simply a natural or social science; it is a moral enterprise.  What makes medicine moral is the patient-physician, or therapeutic, relationship.  Although some philosophers of medicine criticize efforts to model the relationship, given the sheer number of contemporary models proposed to account for it, modeling the relationship has important ramifications for understanding and framing the moral demands of medicine and healthcare.  The traditional medical model within the industrialized West, especially in mid twentieth century America, was paternalism, or “doctor knows best.”  The paternalistic model is doctor-centered in terms of power distribution, with the patient representing a passive agent who simply follows the doctor’s orders.  The patient is not to question those orders, except to clarify them.  The source of this power distribution is the doctor’s extensive medical education and training relative to the patient’s lack of medical knowledge.  In this model, the doctor represents a parent, generally a father figure, and the patient a child—especially a sick child.  The motivation of this model is to relieve a patient burdened with suffering from a disease, to benefit the patient with the doctor’s medical knowledge, and to effect a cure while returning the patient to health.  In other words, the model’s guiding principle is beneficence.  Besides the paternalistic model, other doctor-centered models include the priestly and mechanic models.  However, the paternalistic model, like the other doctor-centered models, came under severe criticism as abuses associated with these models came to light and patient advocacy groups arose to correct them.

With the latter part of the twentieth century and the rise of patient autonomy as a guiding principle for medical practice, alternative patient-physician models challenged traditional medical paternalism.  In contrast to the doctor-centered models, one set of models is patient-centered, in which the patient is the locus of power.  The most predominant patient-centered model is the business model, in which the physician is a healthcare provider and the patient a consumer of healthcare goods and services.  The business model is an exchange relationship and relies heavily on a free market system.  Thus, the patient possesses the power to pick and choose among physicians until a suitable healthcare provider is found.  The legal model is another patient-centered model, in which the patient is a client and the guiding forces are patient autonomy and justice.  Patient and physician enter into a contract for healthcare services.  Another set of models in which patients have significant power in the therapeutic relationship are the mutual models.  In these models, neither patients nor physicians have the upper hand in terms of power; they share it.  The most predominant model is the partnership model, in which patient and physician are associates in the therapeutic relationship.  The guiding force of this model is informed consent, in which the physician apprises the patient of the available therapeutic options and the patient then chooses which is best.  Both the patient and physician share decision making over the best means for effecting a cure.  Finally, other examples of mutual models include the covenant model, which stresses promise instead of contract; the friendship model, which involves a familial-like relationship; and the neighbor model, which maintains a “safe” distance and yet familiarity between patient and physician.

4. What is Medicine?

The nature of medicine is certainly an important question facing twenty-first century philosophers of medicine.  One reason for its importance is that the question addresses the vital topic of how physicians should practice medicine.  Around the turn of the twenty-first century, clinicians and other medical pundits began to accept evidence-based medicine, or EBM, as the best way to practice medicine.  Proponents of EBM claim that physicians should engage in medical practices based on the best scientific and clinical evidence available, especially from randomized controlled clinical trials, systematic observations, and meta-analyses of that evidence, rather than on pathophysiology or an individual physician’s clinical experience.  Proponents also claim that EBM represents a paradigmatic shift away from traditional medicine.  Traditional practitioners doubt the radical claims of EBM proponents.  One specific objection is that applying evidence from population-based clinical trials to the individual patient within the clinic is not as easy to accomplish as EBM proponents assume.  In response, some clinicians propose patient-centered medicine (PCM).  Patient-centered advocates include the patient’s personal information in order to apply the best available scientific and clinical evidence in treatment.  The focus then shifts from the patient’s disease to the patient’s illness experience.  The key to the practice of patient-centered medicine is a physician’s effective communication with the patient.  While some commentators present EBM and PCM as competitors, others propose a combination or integration of the two medicines.  The debate between advocates of EBM and PCM is reminiscent of an earlier debate between the science and the art of medicine and betrays a deep anxiety over the nature of medicine.  Certainly, philosophers of medicine can play a strategic role in the debate and assist toward its satisfactory resolution.

Besides questions about its nature, twenty-first-century medicine also faces a number of crises, including economic, malpractice, healthcare insurance, healthcare policy, professionalism, public or global health, quality-of-care, primary or general care, and critical care crises—to name a few (Daschle, 2008; Relman, 2007).  Philosophers of medicine can certainly contribute to the resolution of these crises by carefully and insightfully analyzing the issues associated with them.  For example, considerable attention has been paid in the literature to the crisis over the nature of medical professionalism (Project of the ABIM Foundation, et al., 2002; Tallis, 2006).  The question that fuels this crisis is what type of physician best meets the patient’s healthcare needs and satisfies medicine’s social contract.  The answer to this question involves the physician’s professional demeanor or character.  However, there is little consensus in the literature as to how best to define professionalism.  Philosophers of medicine can aid by furnishing guidance toward a consensus on the nature of medical professionalism.

Philosophy of medicine is a vibrant field of exploration into the world of medicine in particular, and of healthcare in general.  Along traditional lines of metaphysics, epistemology, and ethics, a host of questions and problems face philosophers of medicine and cry out for attention and resolution.  In addition, many competing forces are vying for the soul of medicine today.  Philosophy of medicine is an important resource for reflecting on those forces in order to forge a medicine that meets both the physical and the existential needs of patients and society.

5. References and Further Reading

  • Achinstein, P. 1983. The nature of explanation. Oxford: Oxford University Press.
  • Andersen, H. 2001. The history of reductionism versus holism approaches to scientific research. Endeavor 25:153-156.
  • Aristotle. 1966. Metaphysics. H.G. Apostle, trans. Bloomington: Indiana University Press.
  • Baglivi, G. 1723. Practice of physick, 2nd edition. London: Midwinter.
  • Bartlett, E. 1844. Essay on the philosophy of medical science. Philadelphia: Lea & Blanchard.
  • Beauchamp, T., and Childress, J.F. 2001. Principles of biomedical ethics, 5th edition. Oxford: Oxford University Press.
  • Black, D.A.K. 1968. The logic of medicine. Edinburgh: Oliver & Boyd.
  • Bock, G.R., and Goode, J.A., eds. 1998. The limits of reductionism in biology. London: John Wiley.
  • Boorse, C. 1975. On the distinction between disease and illness. Philosophy and Public Affairs 5:49-68.
  • Boorse, C. 1987. Concepts of health. In Health care ethics: an introduction, D. VanDeVeer and T. Regan, eds.  Philadelphia: Temple University Press, pp. 359-393.
  • Boorse, C. 1997. A rebuttal on health. In What is disease?, J.M. Humber and R.F.  Almeder, eds. Totowa, N.J.: Humana Press, pp. 1-134.
  • Brody, H. 1992. The healer’s power. New Haven, CT: Yale University Press.
  • Caplan, A.L. 1986. Exemplary reasoning? A comment on theory structure in biomedicine. Journal of Medicine and Philosophy 11:93-105.
  • Caplan, A.L. 1992. Does the philosophy of medicine exist? Theoretical Medicine 13:67-77.
  • Carter, K.C. 2003. The rise of causal concepts of disease: case histories. Burlington, VT: Ashgate.
  • Cassell, E.J. 2004. The nature of suffering and the goals of medicine, 2nd edition. New York: Oxford University Press.
  • Clouser, K.D., and Gert, B. 1990. A critique of principlism. Journal of Medicine and Philosophy 15:219-236.
  • Collingwood, R.G. 1940. An essay on metaphysics. Oxford: Clarendon Press.
  • Coulter, A. 1999. Paternalism or partnership? British Medical Journal 319:719-720.
  • Culver, C.M., and Gert, B. 1982. Philosophy in medicine: conceptual and ethical issues in medicine and psychiatry. New York: Oxford University Press.
  • Daschle, T. 2008. Critical: what we can do about the health-care crisis. New York: Thomas Dunne Books.
  • Davis, R.B. 1995. The principlism debate: a critical overview. Journal of Medicine and Philosophy 20:85-105.
  • Davis-Floyd, R., and St. John, G. 1998. From doctor to healer: the transformative journey. New Brunswick, NJ: Rutgers University Press.
  • Dummett, M.A.E. 1991. The logical basis of metaphysics. Cambridge: Harvard University Press.
  • Elsasser, W.M. 1998. Reflections on a theory of organisms: holism in biology. Baltimore: Johns Hopkins University Press.
  • Emanuel, E.J., and Emanuel, L.L. 1992. Four models of the physician-patient relationship. Journal of American Medical Association 267:2221-2226.
  • Engel, G.L. 1977. The need for a new medical model: a challenge for biomedicine. Science 196:129-136.
  • Engelhardt, Jr., H.T. 1996. The foundations of bioethics, 2nd edition. New York: Oxford University Press.
  • Engelhardt, Jr., H.T., ed., 2000. Philosophy of medicine: framing the field. Dordrecht: Kluwer.
  • Engelhardt, Jr., H.T., and Erde, E.L. 1980. Philosophy of medicine. In A guide to culture of science, technology, and medicine, P.T. Durbin, ed. New York: Free Press, pp. 364-461.
  • Engelhardt, Jr., H.T., and Wildes, K.W. Philosophy of medicine. 2004. In Encyclopedia of bioethics, 3rd edition, S.G. Post, ed. New York: Macmillan, pp. 1738-1742.
  • Evans, A.S. 1993. Causation and disease: a chronological journey. New York: Plenum.
  • Evans, M., Louhiala, P. and Puustinen, P., eds. 2004. Philosophy for medicine: applications in a clinical context. Oxon, UK: Radcliffe Medical Press.
  • Evidence-Based Medicine Working Group. 1992. Evidence-based medicine: a new approach to teaching the practice of medicine. Journal of American Medical Association 268:2420-2425.
  • Feinstein, A.R. 1967. Clinical judgment. Huntington, NY: Krieger.
  • Feinstein, A.R. 1994. Clinical judgment revisited: the distraction of quantitative models. Annals of Internal Medicine 120:799-805.
  • Fulford, K.W.M. 1989. Moral theory and medical practice. Cambridge: Cambridge University Press.
  • Gardiner, P. 2003. A virtue ethics approach to moral dilemmas in medicine. Journal of Medical Ethics 29:297-302.
  • Gert, B., Culver, C.M., and Clouser, K.D. 1997. Bioethics: a return to fundamentals. Oxford, Oxford University Press.
  • Gillies, D.A. 2005. Hempelian and Kuhnian approaches in the philosophy of medicine: the Semmelweis case. Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Science 36:159-181.
  • Gillon, R. 1986. Philosophical medical ethics. New York: John Wiley and Sons.
  • Goldman, G.M. 1990. The tacit dimension of clinical judgment. Yale Journal of Biology and Medicine 63:47-61.
  • Golub, E.S. 1997. The limits of medicine: how science shapes our hope for the cure. Chicago: University of Chicago Press.
  • Goodyear-Smith, F., and Buetow, S. 2001. Power issues in the doctor-patient relationship. Health Care Analysis 9:449-462.
  • Groopman, J. 2007. How doctors think. New York: Houghton Mifflin.
  • Halpern, J. 2001. From detached concern to empathy: humanizing medical practice. New York: Oxford University Press.
  • Hampton, J.R. 2002. Evidence-based medicine, opinion-based medicine, and real-world medicine. Perspectives in Biology and Medicine 45:549-68.
  • Harman, G.H. 1965. The inference to the best explanation. Philosophical Review 74:88-95.
  • Haug, M.R., and Lavin, B. 1983. Consumerism in medicine: challenging physician authority. Beverly Hills, CA: Sage Publications.
  • Häyry, H. 1991. The limits of medical paternalism. London: Routledge.
  • Hempel, C.G. 1965. Aspects of scientific explanation and other essays in the philosophy of science. New York: Free Press.
  • Hempel, C.G., and Oppenheim, P. 1948. Studies in the logic of explanation. Philosophy of Science 15:135-175.
  • Hill, A.B. 1965. The environment and disease: association or causation? Proceedings of the Royal Society of Medicine 58:295-300.
  • Howick, J.H. 2011. The philosophy of evidence-based medicine. Hoboken, NJ: Wiley-Blackwell.
  • Illari, P.M., Russo, F., and Williamson, J., eds. 2011. Causality in the sciences. New York: Oxford University Press.
  • Illingworth, P.M.L. 1988. The friendship model of physician/patient relationship and patient autonomy. Bioethics 2:22-36.
  • James, D.N. 1989. The friendship model: a reply to Illingworth. Bioethics 3:142-146.
  • Johansson, I., and Lynøe, N. 2008. Medicine and philosophy: a twenty-first century introduction. Frankfurt: Ontos Verlag.
  • Jonsen, A.R. 2000. A short history of medical ethics. New York: Oxford University Press.
  • Kaptchuk, T.J. 2000. The web that has no weaver: understanding Chinese medicine. Chicago, IL: Contemporary Books.
  • Kadane, J.B. 2005. Bayesian methods for health-related decision making. Statistics in Medicine 24:563-567.
  • Katz, J. 2002. The silent world of doctor and patient. Baltimore: Johns Hopkins University Press.
  • King, L.S. 1978. The philosophy of medicine. Cambridge: Harvard University Press.
  • Kleinman, A. 1988. The illness narratives: suffering, healing and the human condition. New York: Basic Books.
  • Knight, J.A. 1982. The minister as healer, the healer as minister. Journal of Religion and Health 21:100-114.
  • Konner, M. 1993. Medicine at the crossroads: the crisis in health care. New York: Pantheon Books.
  • Kovács, J. 1998. The concept of health and disease. Medicine, Health Care and Philosophy 1:31-39.
  • Kulkarni, A.V. 2005. The challenges of evidence-based medicine: a philosophical perspective. Medicine, Health Care and Philosophy 8:255-260.
  • Lad, V. D. 2002. Textbook of Ayurveda: fundamental principles of Ayurveda, volume 1. Albuquerque, NM: Ayurvedic Press.
  • Larson, J.S. 1991. The measurement of health: concepts and indicators. New York: Greenwood Press.
  • Le Fanu, J. 2002. The rise and fall of modern medicine. New York: Carroll & Graf.
  • Levi, B.H. 1996. Four approaches to doing ethics. Journal of Medicine and Philosophy 21:7-39.
  • Liberati, A., and Vineis, P. 2004. Introduction to the symposium: what evidence based medicine is and what it is not. Journal of Medical Ethics 30:120-121.
  • Lipton, P. 2004. Inference to the best explanation, 2nd edition. New York: Routledge.
  • Little, M. 1995. Humane medicine. Cambridge: Cambridge University Press.
  • Loewy, E.H. 2002. Bioethics: past, present, and an open future. Cambridge Quarterly of Healthcare Ethics 11:388-397.
  • Looijen, R.C. 2000. Holism and reductionism in biology and ecology: the mutual dependence of higher and lower level research programmes. Dordrecht: Kluwer.
  • Maier, B., and Shibles, W.A. 2010. The philosophy and practice of medicine and bioethics: a naturalistic-humanistic approach. New York: Springer.
  • Marcum, J.A. 2005. Metaphysical presuppositions and scientific practices: reductionism and organicism in cancer research. International Studies in the Philosophy of Science 19:31-45.
  • Marcum, J.A. 2008. An introductory philosophy of medicine: humanizing modern medicine. New York: Springer.
  • Marcum, J.A. 2009. The conceptual foundations of systems biology: an introduction. Hauppauge, NY: Nova Scientific Publishers.
  • Marcum, J.A., and Verschuuren, G.M.N. 1986. Hemostatic regulation and Whitehead’s philosophy of organism. Acta Biotheoretica 35:123-133.
  • Matthews, J.N.S. 2000. An introduction to randomized controlled clinical trials. London: Arnold.
  • May, W.F. 2000. The physician’s covenant: images of the healer in medical ethics, 2nd edition. Louisville: Westminster John Knox Press.
  • Meehl, P.E. 1954. Clinical versus statistical prediction: a theoretical analysis and a review of the literature. Minneapolis: University of Minnesota Press.
  • Montgomery, K. 2006. How doctors think: clinical judgment and the practice of medicine. New York: Oxford University Press.
  • Morabia, A. 1996. P.C.A. Louis and the birth of clinical epidemiology. Journal of Clinical Epidemiology 49:1327-1333.
  • Murphy, E.A. 1997. The logic of medicine, 2nd edition. Baltimore: The Johns Hopkins University Press.
  • Nesse, R.M. 2001. On the difficulty of defining disease: a Darwinian perspective. Medicine, Health Care and Philosophy 4:37-46.
  • Nordenfelt, L. 1995. On the nature of health: an action-theory approach, 2nd edition. Dordrecht: Kluwer.
  • Overby, P. 2005. The moral education of doctors. New Atlantis 10:17-26.
  • Papakostas, Y.G., and Daras, M.D. 2001. Placebos, placebo effects, and the response to the healing situation: the evolution of a concept. Epilepsia 42:1614-1625.
  • Parker, M. 2002. Whither our art? Clinical wisdom and evidence-based medicine. Medicine, Health Care and Philosophy 5:273-280.
  • Pellegrino, E.D., and Thomasma, D.C. 1981. A philosophical basis of medical practice: toward a philosophy and ethic of the healing professions. New York: Oxford University Press.
  • Pellegrino, E.D., and Thomasma, D.C. 1988. For the patient’s good: the restoration of beneficence in health care. New York: Oxford University Press.
  • Pellegrino, E.D., and Thomasma, D.C. 1993. The virtues in medical practice. New York: Oxford University Press.
  • Pole, S. 2006. Ayurvedic medicine: the principles of traditional practice. Philadelphia, PA: Elsevier.
  • Post, S.G. 1994. Beyond adversity: physician and patient as friends? Journal of Medical Humanities 15:23-29.
  • Project of the ABIM Foundation, ACP-ASIM Foundation, and European Federation of Internal Medicine 2002. Medical professionalism in the new millennium: a physician charter. Annals of Internal Medicine 136:243-246.
  • Quante, M., and Vieth, A. 2002. Defending principlism well understood. The Journal of Medicine and Philosophy 27:621-649.
  • Reeder, L.G. 1972. The patient-client as a consumer: some observations on the changing professional-client relationship. Journal of Health and Social Behavior 13:406-412.
  • Reiser, S.J. 1978. Medicine and the reign of technology. Cambridge: Cambridge University Press.
  • Relman, A.S. 2007. A second opinion: rescuing America’s healthcare. New York: Perseus Books.
  • Reznek, L. 1987. The nature of disease. London: Routledge & Kegan Paul.
  • Rizzi, D.A., and Pedersen, S.A. 1992. Causality in medicine: towards a theory and terminology. Theoretical Medicine 13:233-254.
  • Roter, D. 2000. The enduring and evolving nature of the patient-physician relationship. Patient Education and Counseling 39:5-15.
  • Rothman, K.J. 1976. Causes. Journal of Epidemiology 104:587-592.
  • Ryff, C.D., and Singer, B. 1998. Human health: new directions for the next millennium. Psychological Inquiry 9:69-85.
  • Sackett, D.L., Richardson, W.S., Rosenberg, W., and Haynes, R.B. 1998. Evidence-based medicine: how to practice and teach EBM. London: Churchill Livingstone.
  • Salmon, W. 1984. Scientific explanation and the causal structure of the world. Princeton: Princeton University Press.
  • Samaniego, F.J. 2010. A comparison of the Bayesian and frequentist approaches to estimation. New York: Springer.
  • Schaffner, K.F. 1993. Discovery and explanation in biology and medicine. Chicago: University of Chicago Press.
  • Schaffner, K.F., and Engelhardt, Jr., H.T. 1998. Medicine, philosophy of. In Routledge Encyclopedia of Philosophy, E. Craig, ed. London: Routledge, pp. 264-269.
  • Schwartz, W.B., Gorry, G.A., Kassirer, J.P., and Essig, A. 1973. Decision analysis and clinical judgment. American Journal of Medicine 55:459-472.
  • Seifert, J. 2004. The philosophical diseases of medicine and their cures: philosophy and ethics of medicine, vol. 1: foundations. New York: Springer.
  • Senn, S. 2007. Statistical issues in drug development, 2nd edition. Hoboken, NJ: John Wiley & Sons.
  • Simon, J.R. 2010. Advertisement for the ontology of medicine. Theoretical Medicine and Bioethics 31:333-346.
  • Smart, J.J.C. 1963. Philosophy and scientific realism. London: Routledge & Kegan Paul.
  • Smuts, J. 1926. Holism and evolution. New York: Macmillan.
  • Solomon, M.J., and McLeod, R.S. 1998. Surgery and the randomized controlled trial: past, present and future. Medical Journal of Australia 169:380-383.
  • Spodick, D.H. 1982. The controlled clinical trial: medicine’s most powerful tool. The Humanist 42:12-21, 48.
  • Stempsey, W.E. 2000. Disease and diagnosis: value-dependent realism. Dordrecht: Kluwer.
  • Stempsey, W.E. 2004. The philosophy of medicine: development of a discipline. Medicine, Health Care and Philosophy 7:243-251.
  • Stempsey, W.E. 2008. Philosophy of medicine is what philosophers of medicine do. Perspectives in Biology and Medicine 51:379-371.
  • Stewart, M., Brown, J.B., Weston, W.W., McWhinney, I.R., McWilliam, C.L., and Freeman, T.R. 2003. Patient-centered medicine: transforming the clinical method, 2nd edition. Oxon, UK: Radcliffe Medical Press.
  • Straus, S.E., and McAlister, F.A. 2000. Evidence-based medicine: a commentary on common criticisms. Canadian Medical Association Journal 163:837-840.
  • Svenaeus, F. 2000. The hermeneutics of medicine and the phenomenology of health: steps towards a philosophy of medical practice. Dordrecht: Kluwer.
  • Tallis, R.C. 2006. Doctors in society: medical professionalism in a changing world. Clinical Medicine 6:7-12.
  • Tauber, A.I. 1999. Confessions of a medicine man: an essay in popular philosophy. Cambridge: MIT Press.
  • Tauber, A.I. 2005. Patient autonomy and the ethics of responsibility. Cambridge: MIT Press.
  • Thagard, P. 1999. How scientists explain disease. Princeton: Princeton University Press.
  • Tonelli, M.R. 1998. The philosophical limits of evidence-based medicine. Academic Medicine 73:1234-1240.
  • Tong, R. 2007. New perspectives in health care ethics: an interdisciplinary and crosscultural approach. Upper Saddle River, NJ: Pearson Prentice Hall.
  • Toombs, S.K. 1993. The meaning of illness: a phenomenological account of the different perspectives of physician and patient. Dordrecht: Kluwer.
  • Toombs, S. K., ed. 2001. Handbook of phenomenology and medicine. Dordrecht: Kluwer.
  • Unschuld, P.U. 2010. Medicine in China: a history of ideas, 2nd edition. Berkeley, CA: University of California Press.
  • van der Steen, W.J., and Thung, P.J. 1988. Faces of medicine: a philosophical study. Dordrecht: Kluwer.
  • van Gijn, J. 2005. From randomized trials to rational practice. Cardiovascular Diseases 19:69-76.
  • Veatch, R.M. 1981. A theory of medical ethics. New York: Basic Books.
  • Veatch, R.M. 1991. The patient-physician relations: the patient as partner, part 2. Bloomington, IN: Indiana University Press.
  • Velanovich, V. 1994. Does philosophy of medicine exist? A commentary on Caplan. Theoretical Medicine 15:88-91.
  • Weatherall, D. 1996. Science and the quiet art: the role of medical research in health care. New York: Norton.
  • Westen, D., and Weinberger, J. 2005. In praise of clinical judgment: Meehl’s forgotten legacy. Journal of Clinical Psychology 61:1257-1276.
  • Whitbeck, C. 1981. A theory of health. In Concepts of health and disease: interdisciplinary perspectives, A.L. Caplan, H.T. Engelhardt, Jr., and J.J. McCartney, eds. London: Addison-Wesley, pp. 611-626.
  • Wildes, K.W. 2001. The crisis of medicine: philosophy and the social construction of medicine. Kennedy Institute of Ethics Journal 11:71-86.
  • Woodward, J. 2003. Making things happen: a theory of causal explanation. Oxford: Oxford University Press.
  • Worrall, J. 2002. What evidence in evidence-based medicine? Philosophy of Science 69:S316-S330.
  • Worrall, J. 2007. Why there’s no cause to randomize. British Journal for the Philosophy of Science 58:451-488.
  • Wulff, H.R., Pedersen, S.A., and Rosenberg, R. 1990. Philosophy of medicine: an introduction, 2nd edition. Oxford: Blackwell.
  • Zaner, R.M. 1981. The context of self: a phenomenological inquiry using medicine as a clue. Athens, OH: Ohio University Press.


Author Information

James A. Marcum
Baylor University
U. S. A.


Synesthesia

The word “synesthesia” or “synaesthesia,” has its origin in the Greek roots, syn, meaning union, and aesthesis, meaning sensation: a union of the senses.  Many researchers use the term “synesthesia” to refer to a perceptual anomaly in which a sensory stimulus associated with one perceptual modality automatically triggers another insuppressible sensory experience which is usually, but not always, associated with a different perceptual modality as when musical tones elicit the visual experience of colors (“colored-hearing”).  Other researchers consider additional unusual correspondences under the category of synesthesias, including the automatic associations of specific objects with genders, ascriptions of unique personalities to numbers, and the involuntary assignment of spatial locations to months or days of the week.  Many synesthetes experience more than one cross-modal correspondence, and others who have unusual cross-modal sensory experiences also have some non-sensory correspondences such as those mentioned above.

Researchers from fields as varied as neurology, neuroscience, psychology and aesthetics have taken an interest in the phenomenon of synesthesia.  Consideration of synesthesia has also shed light on important subjects in philosophy of mind and cognitive science.  For instance, one of the most widely discussed problems in recent philosophy of mind has been to determine how consciousness fits with respect to physical descriptions of the world.  Consciousness refers to the seemingly irreducible subjective feel of ongoing experience, or the character of what it is like.  Philosophers have attempted to reduce consciousness to properties that will ultimately be more amenable to physical characterizations such as representational or functional properties of the mind.  Some philosophers have argued that reductive theories such as representationalism and functionalism cannot account for synesthetic experience.

Another metaphysical project is to provide an account of the nature of color.  There are two main types of views on the nature of color.  Color objectivists take color to be a real feature of the external world.  Color subjectivists take color to be a mind-dependent feature of the subject (or the subject’s experience).  Synesthesia has been used as a counter-example to color objectivism.  Not everyone agrees, however, that synesthesia can be employed to this end.  Synesthesia has also been discussed with regard to the issue of what properties perceptual experiences can represent objects as having (for example, colors).  The standard view is that color experiences represent objects as having color properties, but a special kind of grapheme-color synesthesia may show that color experience can signify numerical value.  If this is right, it shows that perceptual experiences can represent so-called “high-level” properties.

Synesthesia may also be useful in arbitrating the question of how mental processing can be so efficient given the abundance of mentally stored information and the wide variety of problems that we encounter, each of which seems to require a highly specific and different processing solution.  The modular theory of mind is a theory about mental architecture and processing aimed at solving these problems.  On the modular theory, at least some processing is performed in informationally encapsulated sub-units that evolved to perform unique processing tasks.  Synesthesia has been used as support for mental modularity in several different ways.  While some argue that synesthesia is due to an extra module, others argue that synesthesia is better explained as a breakdown in the barrier that keeps information from being shared between modules.

This article begins with an overview of synesthesia followed by a discussion of synesthesia as it has been relevant to philosophers and cognitive scientists in their discussions of the nature of consciousness, color, mental architecture, and perceptual representation, as well as several other topics.

Table of Contents

  1. Synesthesia
  2. Consciousness
    1. Representationalism
    2. Functionalism
  3. Modularity
  4. Theories of Color
  5. An Extraordinary Feature of Color-Grapheme Synesthesia
  6. Wittgenstein’s Philosophical Psychology
  7. Individuating the Senses
  8. Aesthetics and “Literary Synesthesia”
  9. Synesthesia and Creativity
  10. References and Further Reading

1. Synesthesia

Most take synesthesia to be a relatively rare perceptual phenomenon. Reports of prevalence vary, however, from 1 in 25,000 (Cytowic, 1997) to 1 in 200 (Galton, 1880), to even 1 in 20 (Simner et al., 2006).  It typically involves inter-modal experiences, as when a sound triggers a concurrent color experience (a photism), but it can also occur within modalities.  For example, in grapheme-color synesthesia the visual experience of alpha-numeric graphemes, such as a “4” or a “g,” induces color photisms.  These color photisms may appear to the synesthete as located inside the mind, in the peri-personal space surrounding the synesthete’s body (Grossenbacher & Lovelace, 2001), or as being projected right where the inducing grapheme is situated, perhaps as if a transparency were placed on top of the grapheme (Dixon, et al., 2004).  Reported cross-modal synesthesias also include olfactory-tactile (where a scent induces a tactile experience such as of smoothness), tactile-olfactory, taste-color, taste-tactile and visual-olfactory, among others.  It is not clear which of these is most common.  Some researchers report that colored-hearing is the most commonly occurring form of synesthesia (Cytowic, 1989; Harrison & Baron-Cohen, 1997), and others report that approximately 68% of synesthetes have the grapheme-color variety (Day, 2005).  Less common forms include sound-olfactory, taste-tactile and touch-olfactory.  In recent years, synesthesia researchers have increasingly been attending to associations that do not fit the typical synesthesia profile of cross activations between sensory modalities, such as associations of specific objects with genders, ascriptions of unique personalities to particular numbers, and the involuntary assignment of spatial locations to months or days of the week.  Many synesthetes report having these unusual correspondences in addition to cross-modal associations.

Most studied synesthesias are assumed to have genetic origins (Asher et al., 2009).  It has long been noted that synesthesia tends to run in families (Galton, 1883), and the higher proportion of female synesthetes has led some to speculate that it is carried by the X chromosome (Cytowic, 1997; Ward & Simner, 2005).  However, there are also reports of acquired synesthesias induced by drugs such as LSD or mescaline (Rang & Dale, 1987) or resulting from neurologic conditions such as epilepsy, trauma or other lesion (Cytowic, 1997; Harrison & Baron-Cohen, 1997; Critchley, 1997).  Recent studies suggest it may even be brought on through training (Meier & Rothen, 2009; Proulx, 2010) or post-hypnotic suggestion (Kadosh et al., 2009).  Another hypothesis is that synesthesia may have both genetic and developmental origins.  Additionally, some researchers propose that synesthesia may arise in genetically predisposed children in response to demanding learning tasks such as the development of literacy.

Until very recently, the primary evidence for synesthesia came from introspectively based verbal reports.  According to Harrison and Baron-Cohen (1997), synesthesia was late in becoming a subject of scientific interest because the previously prevailing behaviorists rejected the importance of subjective phenomena and introspective reports.  Some other researchers continue to downplay the reality of synesthesia, claiming that triggered concurrents are likely ideational in character rather than perceptual (for discussion and criticism of this view see Cytowic, 1989; Harrison, 2001; Ramachandran & Hubbard, 2001a).  One hypothesis is that synesthetic ideas result from learned associations that are so vivid in the minds of synesthetes that subjects mistakenly construe them to be perceptual phenomena.  As psychology swung from behaviorism back to mentalism, however, subjective experience became more accepted as an area of scientific inquiry.  In recent years, scientists have begun to study aspects of subjectivity, such as the photisms of synesthetes, using the third-person methods of science.

Recent empirical work on synesthesia supports its perceptual reality.  For example, synesthesia is thought to influence attention (Smilek et al., 2003).  Moreover, synesthetes have long reported that photisms can aid memory (Luria, 1968).  And indeed, standard memory tests show synesthetes to perform better at recall where photisms are involved (Cytowic, 1997; Smilek et al., 2002).

Other studies aimed at confirming the legitimacy of synesthesia have demonstrated that genuine synesthesia can be distinguished from other common types of learned associations in that it is remarkably consistent: over time, synesthetes’ sensation pairings (for example, the grapheme 4 with the color blue) remain stable across multiple testings, whereas most learned associations do not.  Synesthetes tested and retested on multiple occasions, at intervals of years and without warning, exhibit consistency as high as 90% (Baron-Cohen et al., 1987).  Non-synesthete associators are not nearly as consistent.

Grouping experiments are used to distinguish between perceptual and non-perceptual features of experience (Beck, 1966; Treisman, 1982).  In common grouping experiments, subjects view a scene composed of vertical and tilted lines; in perception, the tilted and vertical lines appear grouped independently.  Studies seem to show that some grapheme-color synesthetes are subject to pop-out and grouping effects based on colored photisms (Ramachandran & Hubbard, 2001a, b; Edquist et al., 2006).  If an array of 2’s in the form of a triangle is hidden within a field of distractor graphemes such as 5’s, the 2’s may “pop out,” appearing immediately and saliently in experience as forming a triangle, so long as the photism color ascribed to the 2’s is incongruent with that of the 5’s (Ramachandran & Hubbard, 2001b).

[Figure: synesthesia graphic]

Some take these studies to show that, for at least some synesthetes, the concurrent colors are genuinely perceptual phenomena arising at a relatively early pre-conscious stage of visual processing, rather than associated ideas, which would arise later in processing.

Another study often cited as substantiating the perceptual reality of synesthetic photisms shows that synesthetes are subject to Stroop effects on account of color photisms.  When synesthetes were shown a hand displaying several fingers, they were quicker at identifying the number of fingers displayed when the fingers were painted the color of the photism they typically associated with that quantity than when the fingers were painted an incongruent color (Ward and Sagiv, 2007).

Finally, Smilek et al. (2001) conducted a study with a synesthete they refer to as “C” that suggests the perceptual reality of synesthesia.  In the study, individual graphemes are presented against backgrounds that are either congruent or incongruent with the photism associated with the grapheme.  If graphemes really are experienced as colored, then synesthetes should find them more difficult to discern against congruent backgrounds.  C did indeed have difficulty discerning graphemes on congruent but not incongruent trials.  In a similar study, C was shown a digit “2” or “4” hidden in a field of other digits.  Again, the background was either congruent or incongruent with the photism C associated with the target digit.  C had difficulty locating the target digit when the background was congruent with the target’s photism color, but not when it was incongruent.

Nevertheless, another set of recent studies could be seen as calling into question whether some of the above studies really demonstrate the perceptual reality of synesthesia.  Meier and Rothen (2009) have shown that non-synesthetes trained over several weeks to associate specific numbers and colors behave similarly to synesthetes in synesthetic Stroop studies.  The colors that the non-synesthetes were taught to associate with certain graphemes interfered with their ability to identify target graphemes.  Moreover, Kadosh et al. (2009) have shown that highly suggestible non-synesthetes report abnormal cross-modal experiences similar to those of congenital synesthetes, and behave similarly to Smilek’s synesthete C on target identification, after receiving post-hypnotic suggestions aimed at triggering grapheme-color pairings.  Some researchers conclude from these studies that genuine synesthetic experiences can be induced through training or hypnosis.  But it is not clear that the evidence warrants this conclusion, as the results are consistent with the presence of merely strong non-perceptual associations.  In the cases of post-hypnotic suggestion, participants may simply be behaving as if they experienced genuine synesthesia.  An alternative conclusion to draw from these studies might be that the Stroop and identification studies conducted with C do not demonstrate the perceptual reality of synesthesia.  Nonetheless, it has not been established that training and hypnotism can replicate all the effects, such as the longevity of associations in “natural” synesthetes, and few doubt that synesthetes experience genuine color photisms in the presence of inducing stimuli.

For most grapheme-color synesthetes, color photisms are induced by the formal properties of the grapheme (lower synesthesia).  In some, however, color photisms can be correlated with high-level cognitive representations specifying what the grapheme is taken to represent (higher synesthesia).  Higher synesthesia can be distinguished from lower synesthesia by several testable behaviors.

First, individuals with higher synesthesia frequently have the same synesthetic experiences (for example, see the same colors) in response to multiple inducers that share meaning—for instance, 5, V, and an array of five dots may all induce a green photism (Ramachandran & Hubbard, 2001b; Ward & Sagiv, 2007).  Second, some higher grapheme-color synesthetes experience color photisms both when they veridically perceive an external numeral and when they merely imagine or think about the numerical concept.  If thinking about the numerical concept alone induces a photism, we should expect that photism to interfere with identifying the color of a subsequently presented patch.  And indeed, when Dixon et al. (2000) showed one synesthete the equation “4 + 3” followed by a color patch, the participant was slower at naming the color of the patch when it was incongruent with the photism normally associated with the number that solves the equation.

Moreover, when an individual with higher synesthesia sees an ambiguous grapheme, for example a shape that resembles both a 13 and a B, he or she may mark it with different colors when it is presented in different contexts.  For instance, when the grapheme is presented in the series “12, 13, 14,” it may induce one photism, but it may induce a different photism when presented in the series “A, 13, C.”  This suggests that it is not merely the shape of the grapheme that induces the photism, but also its ascribed semantic value (Dixon et al., 2006).  Similarly, if an array of smaller “3”s is arranged in the form of a larger “5,” an individual with higher grapheme synesthesia may mark the figure with one color photism when attending to it as an array of “3”s, but with a different color photism when attending to it as a single number “5” (Ramachandran & Hubbard, 2000).

2. Consciousness

Some contend that synesthesia presents difficulties for certain theories of mind when it comes to conscious experience, such as representationalism (Wager, 1999, 2001; Rosenberg, 2004) and functionalism (J.A. Gray, 1998, 2003, 2004; J.A. Gray et al., 1997, 2002, 2006).  These claims are controversial and discussed in some depth in the following two sections.

a. Representationalism

Representationalism is the view that the phenomenal character of experience (or the properties responsible for “what it is like” to undergo an experience) is exhausted by, or at least supervenes on, its representational content (Chalmers, 2004).  This means that there can be no phenomenal difference in the absence of a representational difference, and, if two experiential states are indiscernible with respect to representational content, then they must have the same phenomenal character.  Reductive brands of representationalism say that the qualitative aspects of consciousness are just the properties represented in perceptual experience (that is, the representational contents).  For instance, perhaps the conscious visual sensation of a faraway aircraft travelling across the sky is just the representation of a silver object moving across a blue background (Tye, 1995, p.93).

According to Wager (1999, 2001) and Rosenberg (2004), synesthesia shows that phenomenal character does not always depend on representational content, because mental states can be the same representationally but differ in experiential character.  Wager dubs this the “extra qualia” problem (1999, p.268), noting that his objection specifically targets externalist versions of representationalism (p.276), which contend that phenomenal content depends on what the world is like (such that perfect physical duplicates could differ in experiential character if their environments differ).  Meanwhile, Rosenberg (2004, p.101) employs examples of synesthetes who see colors when feeling pain or hearing loud noises.  According to Rosenberg, there is no difference between the representational content of the synesthete and that of an ordinary person: in the case of pain, they could both be representing damage to the body of, let us suppose, a certain intensity, location and duration.  Again, the examples are claimed to show that mental states with the same representational content can differ experientially.  However, others reject this sort of argument.

Alter (2006, p.4) argues that Rosenberg’s analysis overlooks plausible differences between the representational contents in question.  A synesthete who is consciously representing bodily damage as, say, orange, is representing pain differently than an ordinary person.  The nature of this representational difference might be understood in more than one way: perhaps the manner in which they represent their intentional objects differs, or, perhaps their intentional objects differ (or both).  In short, it is suggested that the synesthete and the ordinary person are not representationally the same, and it is no threat to representationalism that different kinds of experience represent differently.  To take a trivial case, the conscious difference between touching and seeing a snowball is accounted for in that they represent differently (only one represents the snowball as cold).

Turning to Wager, he considers three cases, all concerning a synesthete named Cynthia who experiences extra visual qualia in the form of a red rectangle when she hears the note Middle C.  The cases vary according to the version of externalism in question.  Case 1 examines a simple causal co-variation theory of phenomenal content, case 2 a theory that mixes co-variation and teleology (such as Tye’s, 1995), while case 3 concerns a purely teleological account (such as Dretske’s, 1995).  These cases purportedly show that synesthetic and ordinary experience can share the same contents despite differences in qualitative character.  R. Gray’s (2001a, 2004, pp.68-9) general reply is that synesthetic experience does indeed differ representationally in that it misrepresents.

For example, instead of attributing the redness and rectangularity to Middle C, why not attribute these to a misrepresentation of a red rectangle triggered by the auditory stimulus?  Whether representationalism can supply a plausible account of misrepresentation is an open question; perhaps, however, its problems with synesthesia can be resolved by discharging this explanatory debt.

Regarding case 1, perhaps there is no extra representational content had by Cynthia.  If content is determined by the co-variation of the representation and the content it tracks, then, since there is no red rectangle in the external world, perhaps her experience only represents Middle C, just as it does in the case of an ordinary person (Wager, 1999, p.269).  If so, then there would be a qualitative difference in the absence of a representational difference, and this version of representationalism would be refuted.  On the other hand, Wager concedes that the objection might fail if Cynthia has visually experienced red bars in the past, for then her synesthetic experience is arguably not representationally the same as that of an ordinary person hearing Middle C.  This is because it would be open to the externalist to reply that Cynthia’s experience represents the disjunction “red bar or Middle C” (p.270), thus differing from an ordinary person’s.  However, Wager then argues that a synesthete who has never seen red bars because she is congenitally blind (Blind Cynthia) would have the same representational contents as an ordinary person (they would both just represent Middle C); yet since she would also experience extra qualia, the objection goes through after all.

In reply, R. Gray (2001a, p.342) points out that this begs the question against the externalist, since it assumes that synesthetic color experience does not depend on a background of ordinary color experience.  If this is so, there could not be a congenitally blind synesthete, since whatever internal states Blind Cynthia had would not be representing colors.  Wager has in turn acknowledged this point (2001, p.349), though he maintains that it is more natural to suppose that Blind Cynthia’s experience would nevertheless be very different.  Support for Wager’s view might be found in such examples as color-blind synesthetes who report “Martian” colors inaccessible to ordinary visual perception (Ramachandran and Hubbard, 2003a).

Wager also acknowledges that case 1 overlooks theories allowing representational contents to depend on evolutionary functions, and so the possibility that the blind synesthete functions differently when processing Middle C needs to be examined.  This leads to the second and third cases.

Case 2 is designed around Tye’s hybrid theory, according to which phenomenal character depends on evolutionary functions for beings that evolved, and on causal co-variation for beings that did not--such as Swampman (your perfect physical duplicate who just popped into existence as a result of lightning striking swamp material).  Wager argues that on Tye’s view Middle C triggers an internal state with the teleological function of tracking red in the congenitally blind synesthete.  Hence Tye can account for the idea that Blind Cynthia would be representing differently than an ordinary person.

However, now the problem is that it seems the externalist must, implausibly, distinguish between the phenomenal contents of the hypothetical blind synesthete and a blind Swampsynesthete (Blind Swamp Cynthia) when they each experience Middle C.  Recall that Tye’s theory does not allow teleology to be used to account for representational contents in Swampperson cases.  But if Tye falls back on causal co-variation the problem, discussed in the first case, returns.  Since the blind Swampsynesthete’s causal tracking of Middle C does not differ from that of an ordinary person, externalism seems committed to saying that their contents and experiences do not differ—that is, since Blind Swamp Cynthia’s state reliably co-varies with Middle C, not red, it cannot be a phenomenal experience of red.

This, however, is not the end of the matter.  R. Gray could try to recycle his reply that there could not be a blind synesthete (whether of swampy origins or not) since synesthesia is parasitic on ordinary color experience.  Still another response offered on behalf of Tye (Gray, 2001a, p.343) is that Wager fails to take note of the role played by “optimal” conditions in Tye’s theory.  Where optimal conditions fail to obtain, co-variation is mere misrepresentation.  But what counts as optimal and how do we know it?  Perhaps optimal conditions would fail to obtain if the co-varying relationships are one-many (that is, if an internal state co-varies with many stimuli, or, a stimulus co-varies with many internal states, Gray, 2001a, p.343).  Such may be the case for synesthetes, and if so, then synesthetic experience would misrepresent and so differ in content.  On the other hand, Wager disputes Gray’s conception of optimal conditions (2001, p.349) arguing that Tye himself accepts they can obtain in situations where co-variation is one-many.  In addition, Wager (2001, p.349) contends Blind Swamp Cynthia’s co-varying relationship is not one-many since her synesthetic state co-varies only with Middle C.  As for Gray’s claim that optimal conditions fail for the Blind Swamp Cynthia because Middle-C co-varies with too many internal states, Wager (2001, p.349) responds that optimal conditions should indeed obtain—for it is plausible that a creature with a backup visual system could have multiple independent states co-varying with, and bearing content about, a given stimulus.  To this, however, it can be replied that having primary and backup states with content says nothing about whether the content of the backup state is auditory or visual; in other words, does Blind Swamp Cynthia both hear and synesthetically see Middle C, or, does she just hear it by way of multiple brain states (cf. Gray, 2001a, pp.343-344)?  
While this summary does not exhaust the debate between Wager and Gray, the upshot for case 2 seems to turn on contentious questions about optimal conditions: what are they, and how do we know when they obtain or fail to obtain?

Finally, case 3 considers the view that phenomenal content always depends on the state’s content-tracking function as determined by natural selection.  Hence, an externalist such as Dretske could maintain that the blind synesthete undergoes a misfiring of a state that is supposed to indicate the presence of red, not Middle C.  Wager’s criticism here concerns a hypothetical case whereby synesthesia comes to acquire the evolutionary function of representing Middle C after visual perception has faded from the species, though audition remains normal.  This time the problem is that it seems plausible that two individuals with diverging evolutionary histories could undergo the same synesthetic experience, but according to the externalist their contents would differ (Wager, 1999, p.273).  Perhaps worse, it follows from externalism that a member of this new synesthetic species listening to Middle C would have the very same content and experience as an ordinary member of our own species.

R. Gray replies that he does not see why the externalist must agree that synesthesia has acquired an evolutionary function just because it is adaptive (2001a, p.344).  Returning to his point about cases 1 & 2, synesthesia might well result from a breakdown in the visual system, and saying that it has no function is compatible with saying that it is fitness-enhancing.  If synesthesia does not have a teleological function, then a case 3 externalist can deny that the mutated synesthete’s contents are indiscernible with respect to those of an ordinary person.

And yet even if R. Gray is right that the case for counting synesthesia as functional is inconclusive, it seems at least possible that some being could evolve so as to have states with the function of representing Middle C synesthetically.  Whether synesthesia is a bug or a feature depends, as Gray acknowledges, on evolutionary considerations (p.345; see also Gray, 2001b), so Wager need only appeal to the possible world in which those considerations favor his interpretation, and he has his counterexample to externalist representationalism (cf. Wager, 2001, p.348).

On the other hand, and as R. Gray notices, Wager’s strongest cases are not drawn from the real world, and so his objections likewise turn on the very sort of controversial “thought experiments and intuitions about possibility” he aims to distance his own arguments from (Wager, 1999, p.264).  Consider that for case 3 externalists, since Swamppeople lack evolutionary functions, they are unconscious zombies.  Anybody willing to accept that outcome will probably not be troubled by Wager’s imagined examples about synesthetes.  After all, someone who thinks having no history makes one a zombie already believes that differing evolutionary histories can have a dramatic impact on the qualitative character of experience.  In short, a lot rides on whether synesthesia is in fact the result of malfunction or the workings of a separate teleofunctional module.

Finally, the suggestion that representational properties can explain the “extra qualia” in synesthesia courts controversy, given worries about whether it is consilient with synesthetes’ self-reports (that is, would further scrutiny of the self-reports strongly support claims about additional representational content?).  There is also general uncertainty as to what evidential weight these reports ought to be granted.  Despite Ramachandran and Hubbard’s enthusiasm for the method of “probing the introspective phenomenological reports of these subjects” (2001b, p.7, n.3), they acknowledge skepticism about this approach on the part of many psychologists.

b. Functionalism

Synesthesia might present difficulties for the functionalist theory of mind’s account of conscious experience.  Functionalism defines mental states in terms of their functions or causal roles within cognitive systems, as opposed to their intrinsic character (that is, regardless of how they are physically realized or implemented).  Here, mental states are characterized in terms of their mediation of causal relationships obtaining between sensory input, behavioral output, and each other.  For example, an itch is a state caused by, inter alia, mosquito bites, and which results in, among other things, a tendency to scratch the affected area.  As a theory of consciousness, functionalism claims that the qualitative aspects of experience are constituted by (or at least determined by) functional roles (for example, Lycan, 1987).

In a series of articles, J.A. Gray has argued that synesthesia serves as a counter-example to functionalism, as well as to Hurley and Noë’s (2003a) specific hypothesis that sensorimotor patterns best explain variations in phenomenal experience.

Hurley and Noë’s theory employs a distinction between what they call “deference” and “dominance.”  Sensory deference occurs when experiential character conforms to cortical role rather than sensory input, and dominance the reverse.  Sometimes, nonstandard sensory inputs “defer” to cortical activity, as when the stimulation of a patient’s cheek is felt as a touch on a missing arm.  Here cortex “dominates,” in the sense that it produces the feel of the missing limb despite the unusual input.  One explanation is that nerve impulses arriving at the cortical region designated for producing the feel of a touch on the cheek “spill over,” triggering a neighboring cortical region assigned to producing sensation of the arm.  But the cortex can also “defer” to nonstandard input, as in the case of tactile qualia experienced by Braille readers corresponding to activity in the visual cortex.  J.A. Gray (2003, p.193) observes that cortical deference, not dominance, is what functionalism predicts, since the character of a mental state is supposed to depend on its role in mediating inputs and outputs.  If that efferent-afferent mediating role changes, then the sensory character of the state should change with it.

Hurley and Noë (2003a) propose that cortical regions implicated in one sensory modality can shift to another (and, thus be dominated by input) if there are novel sensorimotor relationships available for exploitation.  For support they point out that the mere illusion of new sensorimotor relationships can trigger cortical deference.  Such is the case with phantom limb patients who can experience the illusion of seeing and moving a missing limb with the help of an appropriately placed mirror.  In time, the phantom often disappears, leading to the conjecture that the restored sensory-motor feedback loop dominates the cortex, forcing it to give up its old role of producing sensation of the missing limb.

Hurley and Noë (2003a, p.160) next raise a worry for their theory concerning synesthesia.  Perceptual inputs are “routed differently” in synesthetes, as in the case of an auditory input fed to both auditory and visual cortex in colored hearing (p.137).  This is a case of intermodal cortical dominance, since the nonstandard auditory input “defers” to the visual cortex’s ordinary production of color experience.  But theirs is a theory assuming intermodal deference; that is, qualia are supposed to be determined by sensory inputs, not cortex (pp.140, 160).  It would appear that the visual cortex should not be stuck in the role of producing extra color qualia if their account is correct.

Hurley and Noë believe synesthesia raises a puzzle for any account of color experience: why does color experience defer to the colors of the world in some cases but not others?  For example, subjects wearing specially tinted goggles devised by Kohler at first see one side of the world as yellow, the other blue.  However, color experience adapts, and the subjects eventually report that the world looks normal once more (so a white object would still look white even as it passes through the visual field from yellow to blue).  Synesthetic colors differ in that they “persist instead of adapting away.”

J.A. Gray points out that since colored hearing emerges early in life, there should be many opportunities for synesthetes to explore novel sensorimotor contingencies, such as conflicts between heard color names and the elicited “alien” qualia--a phenomenon reminiscent of the Stroop effect, in which subjects take longer to name the ink color of a printed color word when word and ink conflict (for example, the word “blue” printed in red ink) (Gray, et al., 2006; see also Hurley and Noë, 2003a, p.164, n.27).  Once again, why isn’t the visual cortex dominated by these sensory-motor loops and forced to cease producing the alien colors?  Gray (2003, p.193) calls this a “major obstacle” to Hurley and Noë’s theory since the visual cortex stubbornly refuses to yield to sensorimotor dominance.

In reply, Hurley and Noë have suggested that synesthetes are relatively impoverished with respect to their sensorimotor contingencies (2003a, pp.160, 165, n.27).  For example, unlike the case of normal subjects, where unconsciously processed stimuli can influence subsequent judgment, synesthetic colors need to be consciously perceived for there to be priming effects.  In short, the input-output relationships might not be robust enough to trigger cortical deference.  Elsewhere, Noë and Hurley (2003, p.195) propose that deference might fail to occur because the synesthetic function of the visual cortex is inextricably dependent on normal cortex functioning.  Whether sensorimotor accounts of experience can accommodate synesthesia is a matter of ongoing debate and cannot be decided here.

J.A. Gray, as mentioned earlier, also thinks synesthesia (specifically, colored hearing) poses a broader challenge to functionalism, since it shows that function and qualia come apart in two ways (2003, p.194).  His first argument contends that a single quale is compatible with different functions: seeing and hearing are functionally different, and yet either modality can result in exactly the same color experience (see also Gray, et al., 2002, 2006).  A second argument claims that different qualia are compatible with the same function.  Hearing is governed by only one set of input-output relationships, but gives rise to both auditory and visual qualia in the colored-hearing synesthete (Gray, 2003, p.194).

Functionalist replies to J.A. Gray et al.’s first argument (that is, that there can be functional differences in the absence of qualia differences) are canvassed by Macpherson (2007) and R. Gray (2004).  Macpherson points out (p.71) that a single quale associated with multiple functions is no threat to a “weak” functionalism not committed to the claim that functional differences necessarily imply qualia differences—qualia might be “multiply realizable” at the functional as well as the implementational level (note that qualia differences could still imply functional differences).  She continues by arguing that even for “strong” functionalisms that do assert that the same type of qualitative state cannot be implemented by different functions, the counter-example still fails.  Token mental states of the same type will inevitably differ in terms of some fine-grained causes and effects (for example, two persons can each have the same green visual experience even though the associated functional roles will tend to differ somewhat, as green might lead to thoughts of Islam in one person, Ireland in another, ecology in still another, or envy, and so on).  In light of this, a natural way to interpret claims about functional role indiscernibility is to restrict the experience-type-individuating function to a “core,” or perhaps “typical” or even “normal,” role.  Perhaps a core role operates at a particular explanatory level—rather as a Mac and a PC can be functionally indiscernible at the user level while running a web browser, despite differing in their underlying operating systems.  An alternative is to argue that the synesthetic “role” is really a malfunction, and so no threat to the claim that qualia differences imply normal role differences (R. Gray, 2004, pp.67-8 offers a broadly similar response).

As for the other side of J.A. Gray’s challenge, namely that synesthesia shows functional indiscernibility does not imply qualia indiscernibility, Macpherson questions whether there really is qualia indiscernibility between normal and synesthetic experience (2007, p.77).  Perhaps synesthetes only imagine, rather than perceptually experience colors (Macpherson, 2007, pp.73ff.).  She also expresses doubts about experimental tests utilizing pop-out, and questions the interpretation of brain imaging studies (p.75)—for example, is an active “visual” cortex in colored hearing evidence of visual experience, or, evidence that this part of the brain has a non-visual role in synesthetes (cf. Hardcastle, 1997, p.387)?  In short, she contends there are grounds for questioning whether there is a clear case in which the experience of a synesthetic color is just like some non-synesthetic color.

Finally, although Macpherson does not make the point, J.A. Gray’s second argument is vulnerable to a response fashioned from her reply to his first argument.  Perhaps the qualia differences aren’t functionally indiscernible because core roles are not duplicated, or because the synesthetic “role” is really just a malfunction.  To make this more concrete, consider Gray’s example in which hearing the word “train” results in both hearing sound and seeing color (2003, p.194).  He claims that this shows that one-and-the-same function can have divergent qualia.  But this is a hasty inference that conflates the local auditory uptake of a signal with divergent processing further downstream.  Perhaps there are really two quite different input-output sets involved--the auditory signal is fed to both auditory and visual cortexes, after all, and so perhaps a single signal is fed into functionally distinct subsystems, one of which is malfunctioning.  Malfunction or not, the functionalist could thus argue that Gray has not offered an example of a single function resulting in divergent qualia.

3. Modularity

The modular theory of mind, most notably advanced by Jerry Fodor (1983), holds that the mind comprises multiple sub-units or modules within which representations are processed in a manner akin to the processing of a classical computer.  Processing begins with input to a module, which is transformed into a representational output by inductive or deductive inferences called “computations.”  Modules are individuated by the functions they perform.  The mental processing underlying visual perception, auditory perception, and the like takes place in individual modules that are specially suited to performing the unique processing tasks relevant to each.  One of the main benefits of modularity is thought to be processing efficiency.  The time-cost would be considerable if computations had access to all of the information stored in the mind.  Moreover, since an organism encounters a wide variety of problems, it would have been economical for independent systems to have evolved for performing different tasks.  Some argue that synesthesia supports the modular theory.  Before discussing how synesthesia is taken as evidence for modularity, it will help to understand a bit more precisely the important role that the concept of modularity plays in psychology.

Many, including Fodor, believe that scientific disciplines reveal the nature of natural kinds.  Natural kinds are thought to be mind-independent natural classes of phenomena that, “have many scientifically interesting properties in common over and above whatever properties define the class” (Fodor, 1983, p.46).  Those who believe that there are natural kinds commonly take things such as water, gold, zebras and penicillin to be instances of natural kinds.  If scientific disciplines reveal the nature of natural kinds, then for psychology to be a bona fide science, the mental phenomena that it takes as its objects of study would also have to be natural kinds.  For those like Fodor, who are interested in categorically delineating special sciences like psychology from more basic sciences, it must be that the laws of the special science cannot be reduced to those of the basic science.  This means that the natural kind terms used in a particular science to articulate that science’s laws cannot be replaced with terms for other more fundamental natural phenomena.  From this perspective, it is highly desirable to see whether modules meet the criteria for natural kinds.

According to Fodor, in addition to the properties that define specific types of modules, all modules share most, if not all, of the following nine scientifically interesting characteristics:

1. They are subserved by a dedicated neural architecture; that is, specific brain regions and neural structures uniquely perform each module’s task.
2. Their operations are mandatory: once a module receives a relevant input, the subject cannot override or stop its processing.
3. Modules are informationally encapsulated: their processing cannot utilize information from outside of the module.
4. The information from inside the module cannot be accessed by external processing areas.
5. The processing in modules is very quick.
6. Outputs of modules are shallow and conceptually impoverished, requiring only limited expenditure of computational resources.
7. Modules have a fixed pattern of development that, like physical attributes, may most naturally be attributed to a genetic property.
8. The processing in modules is domain specific: a module responds only to certain types of inputs.
9. When modules break down, they tend to do so in characteristic ways.
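As a toy illustration of how several of these characteristics hang together, a module can be caricatured as a self-contained input-to-output computation. The sketch below is purely illustrative (the class, table, and labels are hypothetical, not Fodor’s or Segal’s actual model); it crudely models domain specificity, informational encapsulation, and mandatory, shallow output.

```python
# A caricature of a Fodorian module as a self-contained input->output
# computation.  It is "domain specific" (it rejects inputs outside its
# proprietary domain) and "informationally encapsulated" (it consults
# only its own internal table, never any external state).

class ToyColorModule:
    # Internal, fixed mapping.  Its contents are not exported to other
    # processes except via the module's shallow output (crudely modeling
    # characteristic 4 with Python's leading-underscore convention).
    _table = {"A": "red", "B": "green", "C": "blue"}

    def process(self, grapheme):
        # Domain specificity (characteristic 8): only single graphemes
        # are accepted as input.
        if not (isinstance(grapheme, str) and len(grapheme) == 1):
            raise ValueError("input outside the module's domain")
        # Mandatory, shallow output (characteristics 2 and 6): a bare
        # color label, with no higher-order inference.
        return self._table.get(grapheme.upper(), "uncolored")


module = ToyColorModule()
print(module.process("A"))   # prints "red"
print(module.process("z"))   # prints "uncolored"
```

Nothing outside the module can alter or inspect the mapping mid-computation, which is the (simplified) sense in which its processing is encapsulated and insuppressible once triggered.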

It counts in favor of a theory if it is able to accommodate, predict and explain natural phenomena, including anomalous phenomena.  In this vein, some argue that the modular theory is particularly useful for explaining the perceptual anomaly of synesthesia.  But there are competing accounts of how modularity is implicated in synesthesia.  Some think that, insofar as synesthesia has all the hallmarks of modularity, it likely results from the presence of an extra cognitive module (Segal, 1997).  According to the extra-module thesis, synesthetes possess an extra module whose function is the mapping of, for example, sounds or graphemes (input) to color representations (output).  This grapheme-color module would, according to Segal, possess at least most of the nine scientifically interesting characteristics of modules identified by Fodor:

1. There seems to be a dedicated neural architecture, as lexical-color synesthesia appears uniquely associated with multimodal areas of the brain, including the posterior inferior temporal cortex and parieto-occipital junctions (Paulesu et al., 1995).
2. Processing is mandatory: once synesthetes are presented with a lexical or grapheme stimulus, the induction of a color photism is automatic and insuppressible.
3. Processing in synesthesia seems encapsulated: information available to the subject which might negate the effect has no influence on processing in the color-grapheme module.
4. The information and processing in the module are not made available outside of the module; for example, the synesthete does not know how the system effects the mapping.
5. Since the processing in synesthesia happens pre-consciously, it meets the rapid-speed requirement.
6. The outputs are shallow: they don’t involve any higher-order theoretically inferred features, just color.
7. Since synesthesia runs in families, is dominant in females, and subjects report having had it for as long as they can remember, synesthesia seems to be heritable, and this suggests that it would have a fixed pattern of development.

Features 8 and 9, domain specificity and a characteristic pattern of breakdown, are the only two that Segal cannot easily attribute to the grapheme-color module.  Segal does not doubt that a grapheme-color module could be found to have domain-specific processing.  But on account of the rarity of synesthesia, he suspects that it may be too hard to find cases where the lexical or grapheme-color module breaks down.  Harrison and Baron-Cohen (1997) and Cytowic (1997), among others, however, note that for some, synesthesia fades with age and has been reported to disappear with stroke or trauma.

Another explanation for synesthesia that draws on the modular framework is that synesthesia is caused by a breakdown in the barriers that ordinarily keep modules and their information and processing separate (Baron-Cohen et al., 1993; Paulesu et al., 1995).  This failure of encapsulation would allow information from one module to be shared with others.  Perhaps in lexical or grapheme-color synesthesia, information is shared between the speech or text processing module and the color-processing module.  There are two hypotheses for how this might occur.  One is that the failure of encapsulation originates with a faulty inhibitory mechanism that normally prevents information from leaking out of a module (Grossenbacher & Lovelace, 2001; Harrison & Baron-Cohen, 1997).  Alternatively, some propose that we are born without modules but that sensory processes are pre-programmed to become modularized.  On this view, infants are natural synesthetes, but during normal development extra dendritic connections are pruned away, resulting in the modular encapsulation typical of adult cognition (Maurer, 1993; Maurer & Mondloch, 2004; see Baron-Cohen, 1996 for discussion).  In synesthetes, this normal pruning of extra dendritic connections fails to occur.  Kadosh et al. (2009) claim that the fact that synesthesia can be induced in non-synesthetes post-hypnotically demonstrates that a faulty inhibitory mechanism, rather than excessive dendritic connections, is responsible for synesthesia; given the time frame of their study, new cortical connections could not have been established.

The modular breakdown theory may also be able to explain why synesthesia has the appearance of the nine scientifically interesting characteristics that Fodor identifies with mental modules (R. Gray, 2001b).  If this is right, then what reason is there to prefer either the breakdown theory or the extra module theory over the other?  Gray (2001b) situates this problem within the larger debate between computational and biological frameworks in psychology; he argues that the concept of function is central to settling the issue over which account of synesthesia we should prefer.  His strategy is to first determine what the most desirable view of function is.  Based on this, we can then use empirical means to arbitrate between the extra-module theory and the modular breakdown theory.

On the classical view of modularity developed by Fodor, function is elaborated in purely computational terms.  Computers are closed symbol-manipulating devices that perform tasks merely on account of the dispositions of their physical components.  We can describe a module’s performance of a task by appealing to just the local causal properties of the underlying physical mechanisms.  R. Gray thinks it desirable for a functional description to allow for the possibility of a breakdown.  To describe something as having broken down seems to require understanding it as having failed to achieve its proper goal.  The purely computational/causal view of function does not easily accommodate the possibility of a breakdown in processing.

R. Gray promotes an alternative conception of function that he feels better allows for the possibility of breakdown.  Gray’s alternative understanding is compatible with traditional local causal explanations.  But it also considers the role that a trait such as synesthesia would have in facilitating the organism’s ability to thrive in its particular external environment: its fitness utility.  Crucially, Gray finds the elaboration of modules using this theory of function to be compatible with Fodor’s requirement that a science’s kind predicates “are ones whose terms are the bound variables of proper laws” (1974, p. 506).  Assuming such an account, whether synesthesia is the result of an extra module or a breakdown in modularity will ultimately depend on how it contributes to the fitness of individuals.  According to Baron-Cohen, in order to establish that synesthesia results from a breakdown in modularity, it would have to be shown that it detracts from overall fitness.  The problem is that synesthesia has not been shown to compromise the fitness of those who bear the trait.  In contrast, Gray claims that the burden of proof lies with those who propose that synesthesia results from the presence of an extra module to show that synesthesia is useful in a particular environment.  But at present, according to Gray, we have no reason to think that it is.  For instance, one indicator that something has a positive fitness benefit for organisms possessing it is the proliferation of that trait in a population.  But synesthesia is remarkably rare (Gray, 2001b).  Gray admits, however, that whether or not synesthesia has such a utility is an open empirical question.

4. Theories of Color

Visual perception seems to, at the very least, provide us with information about colored shapes existing in various spatial locations.  An account of the visual perception of objects should therefore include some account of the nature of color.  Some theorists working on issues pertaining to the nature of color and color experience draw on evidence from synesthesia.

Theories about the nature of color fall broadly into two categories.  On the one hand, color objectivism is the view that colors are mind-independent properties residing out in the world, for example, in objects, surfaces or the ambient light.  Typically, objectivists identify color with a physical property.  The view that color is a mind-independent physical property of the perceived world is motivated both by commonsense considerations and the phenomenology of color experience.  It is part of our commonsense or folk understanding of color, as reflected in ordinary language, that color is a property of objects.  Moreover, the experience of color is transparent, which is to say that colors appear to the subject as belonging to external perceptual objects; one doesn’t just see red, one sees a red fire hydrant or a yellow umbrella.  Color objectivism vindicates both the commonsense view of color and the phenomenology of color experience.  But some take it to be an unfortunate implication of the theory that colors are physical properties of objects, since it seems to entail that each color will be identical to a very long disjunctive chain of physical properties.  Multiple external physical conditions can all cause the same color experience both within and across individuals.  This means that popular versions of objectivism cannot identify a single unifying property behind all instances of a single color.

Subjectivist views, on the other hand, take colors to be mind-dependent properties of the subject or of his or her experience, rather than properties of the distal causal stimulus.  Subjectivist theories of color include the sense-data theory, adverbialism and certain varieties of representationalism.  The primary motivation for color subjectivism is to accommodate various types of non-veridical color experience where perceivers have the subjective experience of color in the absence of an external distal stimulus to which the color could properly be attributed.  One commonly cited example is the after-image. Some claim that the photisms of synesthetes provide another example of non-veridical non-referring color experiences (Fish, 2010; Lycan, 2006; Revonsuo, 2001).  But others argue that the door is open to regarding at least some cases of synesthesia as veridical perceptual experiences rather than hallucinations since photisms are often:  i) perceptually and cognitively beneficial, ii) subjectively like non-synesthetic experiences, and iii) fitness-enhancing.

Still, synesthesia may pose additional difficulties for objectivism.  Consider the implications for objectivism if color synesthesias were to become the rule rather than the exception.  How then would objectivism account for color photisms in cases where they are caused by externally produced sounds?  Revonsuo (2001) suggests that the view that colors can be identified with the objective disjunctive collections of physical properties that cause color experiences would have to add the changes of air pressure that produce sounds to that disjunctive collection of color properties.  This means that if synesthesia became the rule, despite the fact that nothing else about the world would have changed, physical properties that weren’t previously colored would suddenly become colored.  Revonsuo (2001) takes this to be an undesirable consequence for a theory of color.

Enactivism is a theory of perception that takes active engagement with perceptual objects along with other contextual relations to be highly relevant to perception.  Typically, enactivists take perception to consist in a direct relation between perceivers and objective properties.  Ward uses synesthesia in an argument for enactivism about color, proposing that the enactivist theory of color actually combines elements of both objectivism and subjectivism, and is therefore the only theory of color that can account for various facts about anomalous color experiences like synesthesia.

For instance, Kohler fitted normal perceivers with goggles, each lens of which was vertically bisected, with yellow tinting on one side and blue on the other (Kohler, 1964).  When perceivers first donned the goggles, they reported anomalous color experiences consistent with the lens colors; the world appeared to be tinted yellow and blue.  But after a few weeks of wear, subjects reported that the abnormal tint adapted away.  Ward proposes that synesthetic photisms are somewhat similar to the tinted experiences of Kohler’s goggle wearers.  In both cases, the subject is aware that their anomalous color experiences are not a reliable guide to the actual colors of things around them.  The two cases differ, however, in one important respect.  Whereas goggle wearers’ color experiences adapt to fall in line with what they know to be true about their color experiences, synesthetes’ experiences do not.  This asymmetry calls for explanation, and Ward argues that the enactive theory of color provides an elegant explanation for it.

According to Ward’s enactive view of color, “An object’s color is its property of modifying incident reflected light in a certain way.”  This is an objective property.  But, “we perceive this [objective] property by understanding the way [subjective] color appearances systematically vary with lighting conditions.”  This view explains the asymmetry noted above in the following way.  Kohler’s goggles interfere with regular color perception.  According to the enactive view of color, the tinted goggles introduce, “a complex new set of relationships between apparent colors, viewing conditions and objective color properties.”  So it is necessary for them to adapt away.  As perceivers acclimate to the fact that their color appearances no longer refer to the colors they had previously indicated, their ability to normally perceive color returns.  Ward assumes that synesthetes do not experience their color photisms as attributed to perceived objects, so they do not impact the synesthetes’ ability to veridically perceive color.  Synesthetes’ photisms fail to adapt away because they do not need to.

Another philosophical problem having to do with the nature of color concerns whether or not phenomenal color experiences are intentional.  If they are, we might wonder what sorts of properties they are capable of representing.  A popular view is that color experiences can only represent objects as having specific color or spectral reflectance properties.  Matey draws on synesthesia to support the view that perceptual experiences can represent objects as having high-level properties, such as having a specific semantic value (roughly, representing some property, thing or concept).  This argument from synesthesia for high-level representational contents, it is argued, withstands several objections that can be lodged against other popular arguments, such as arguments from phenomenal contrast.  The basic idea is that a special category of grapheme-color synesthesia depends on high-level properties.  In higher-grapheme-color synesthesia, perceivers mark with a particular color graphemes that share a conceptual significance, such as the property of representing a number.  Matey argues that these high-level properties penetrate color experiences and infect their contents, so that the color experiences of these synesthetes represent the objects they are projected onto as representing certain numbers or letters.  Matey argues that the conclusions of the argument from synesthesia may generalize to the common perceptual experiences of ordinary perceivers as well.

5. An Extraordinary Feature of Color-Grapheme Synesthesia

What the subject says about his or her own phenomenal experience usually carries great weight.  However, in the case of color-grapheme synesthesia, Macpherson urges caution (2007, p.76).  A striking and odd aspect of color-grapheme synesthesia is that it may seem to involve the simultaneous experience of different colors in exactly the same place at exactly the same time.  Consider synesthetes who claim to see both colors simultaneously: What could it be like for someone to see the grapheme 5 printed in black ink, but see it as red as well?  How are we to characterize their experience?  To Macpherson this “extraordinary feature” suggests that synesthetic colors are either radically unlike ordinary experience, or, perhaps more likely, not experiences at all.  A third possibility would be to find an interpretation compatible with ordinary color experience.  For example, perhaps the synesthetic colors are analogous to a colored transparency laid over the ink (as suggested by Kim et al., 2006, p.196; see also Cytowic, 1989, pp.41, 51 and Cytowic & Eagleman, 2009, p.72).  However, this analogy is unsatisfying and gives rise to further puzzlement.

One might expect the colors to interfere with each other; for example, a synesthete should see a darker red when the 5 is printed in black ink, and a lighter red when it is printed in white.  And yet synesthetes tend to insist that the colors do not blend (Ramachandran & Hubbard, 2001b, p.7, n.3), although if the ink is in the “wrong” color this can result in task performance delays analogous to Stroop-test effects and can even induce discomfort (Ramachandran & Hubbard, 2003b, p.50).  Another possibility is that the overlap is imperfect, despite the denials; for example, perhaps splotches of black ink can be distinguished from the red (as proposed by Ramachandran & Hubbard, 2001b, p.7, n.3).  Or maybe there can be a “halo” or edge where the synesthetic and ordinary colors do not overlap.  This might make sense of the claims of some that the synesthetic color is not “on” the number but, as it were, “floating” somewhere between the shape and the subject.  But against these suggestions are other reports that the synesthetic and regular colors match up perfectly (Macpherson, 2007, p.76).

A second analogy from everyday experience is simultaneously seeing what is both ahead of and behind oneself by observing a room’s reflection in a window.  This, however, only recycles the problem.  In seeing a white lamp reflected in a window facing a blue expanse of water, the colors mix (for example, the reflected lamp looks to be a pale blue). Moreover, one does not undergo distinct impressions of the lamp and the region occupied by the waves overlapping with the reflected image (though of course one can alter the presentation by either focusing on the lamp or on the waves).

A third explanation draws on the claim mentioned earlier that the extra qualia can depend on top-down processing, appearing only when the shape is recognized as a letter, or as a number (as in seeing an ambiguous shape in FA5T versus 3456).  There is some reason to think that the synesthetic color can “toggle” on and off depending on whether it is recognized and attended to, as opposed to appearing as a meaningless shape in the subject’s peripheral vision (Ramachandran & Hubbard 2001a, 2001b).  Toggling might also explain reports that emphasize seeing the red, as opposed to (merely?) knowing the ink is black (cf. Ramachandran & Hubbard, 2001b, p.7, n.3).  Along these lines, Kim et al. tentatively suggest that the “dual experience” phenomenon might be explained by rapid switching modulated by changes in attention (2006, p.202).

Cytowic and Eagleman (2009, p.73), in contrast to these ruminations, deny there is anything mysterious or conceptually difficult about the dual presentation of imagined and real objects sharing exactly the same location in physical space.  They contend that the dual experience phenomenon is comparable to visualizing an imaginary apple in the same place as a real coffee cup, “you’ll see there is nothing impossible, or even particularly confusing about two objects, one real and one imagined, sharing the same coordinates.”  This dismissal, however, fails to come to terms with the conundrum.  Instead of an apple, try visualizing a perfect duplicate of the actual coffee cup in precisely the same location (for those who believe they can do this, continue visualizing additional coffee cups until the point becomes obvious).  If Cytowic and Eagleman are to be taken literally this ought to be easy.  The visualization of a contrasting color also meets a conceptual obstacle.  What does it even mean to visualize a red surface in exactly the same place as a real black surface in the absence of alternating presentations (as in binocular rivalry) or blending?

Another perplexing feature of synesthetic color experience is the reports of strange “alien” colors somehow different from ordinary color experience.  These “Martian” colors may or may not indicate a special kind of color qualia inaccessible to non-synesthetes, though given their apparent causal-role differences from ordinary colors with respect to such things as “lighting, viewing geometry and chromatic context” (Noë & Hurley, 2003, p.195), this is unsurprising and even expected by broadly functionalist theories of phenomenal experience.  Ramachandran and Hubbard (2001b, pp.5, 26, 30) offer some discussion and conjectures about the underlying neural processes.

Whether the more bizarre testimony can be explained away along one (or more) of the above suggestions, or has deep implications about synesthesia, self-report, and the nature of color experience, demands further investigation by philosophers and scientists.

6. Wittgenstein’s Philosophical Psychology

Ter Hark (2009) offers a Wittgensteinian analysis of color-grapheme synesthesia, arguing that it fails to fit the contrast between perception and mental imagery, and so calls for a third category bearing only some of the logical marks of experience.  He contends that it is somewhat like a percept in that it depends on looking, has a definite beginning and end, and is affected by shifts in attention.  On the other hand, it is also somewhat like mental imagery in that it is voluntary and non-informative about the external world.

Although ter Hark cites Rich et al. (2005) for support, only 15% of their informants claimed to have full control over synesthetic experience (that is, induced by thought independent of sensory stimulation) and most (76%) characterized it as involuntary.  It would therefore seem that ter Hark’s analysis applies to only a fraction of synesthetes.  The claim that synesthetic percepts seem non-experiential because they fail to represent the world is also contestable.  Visual experience need not always be informative (for example, hallucinations, “seeing stars,” and so forth) and failing to inform us about the world is compatible with aiming to do so but misrepresenting.

7. Individuating the Senses

Synesthesia might be important when it comes to questions about the nature of the senses, how they interact, and how many of them there are.  For example, Keeley (2002) proposes that synesthesia may challenge the assumption that the various senses are, “significantly separate and independent” (p.25, n.37) and so complicate discussions about what distinguishes one sense from another.  A similar point is made by Ross who notes that synesthesia undermines his “modified property condition” (2001, p.502).  The modified property condition is supposed to be necessary for individuating the senses, and states that each sense modality specializes in detecting certain properties (2001, p.500).  As discussed in the section on representationalism, synesthesia might seem to indicate that properties usually deemed proprietary to one sense can be detected by others after all.  Meanwhile, Ross’ proposal that synesthesia be explained away as a memory association seems unpersuasive in light of the preponderance of considerations suggesting it is a genuine sensory phenomenon (see Ramachandran & Hubbard, 2001a, 2001b, 2003b; for further discussion of Ross see Gatzia, 2008).  At present, little seems to have been written by philosophers on the significance of synesthesia as concerns the individuation and interaction of the senses (though see Macpherson, 2007, O’Callaghan 1998, p.325 and R. Gray 2011, p.253, n.17).

8. Aesthetics and “Literary Synesthesia”

The use of “intersense analogy” or sense-related metaphor as a literary technique is long familiar to authors and critics (for example, a sharp taste, a loud shirt) perhaps starting with Aristotle who noticed a “sort of parallelism between what is acute or grave to hearing and what is sharp or blunt to touch” (quoted in O’Malley, 1957, p.391).  Intersense metaphors such as “the sun is silent” (Dante quoted in O’Malley, 1957, p.409) and, more recently, “sound that makes the headphones edible” (from the lyrics of a popular rock band) may be, “a basic feature of language” natural for literature to incorporate (O’Malley, 1957, p.397), and to some “an essential component in the poetic sensibility” (Götlind, 1957, p.329).  Such “literary” synesthesia is therefore an important part of aesthetic criticism, as in Hellman’s (1977, p.287) discussion of musical styles, Masson’s analysis of acoustic associations (1953, p.222) and Ueda’s evaluation of cross-modal analogies in Haiku poetry which draw attention to “strange yet harmonious” combinations (1963, p.428).

Importantly, “the writer’s use of the ‘metaphor of the senses’” (O’Malley, 1957, p.391) is not to be confused with synesthesia as a sensory phenomenon, as repeatedly noted over the years by several philosophical works on poetry and aesthetics including Downey (1912, p.490), Götlind (1957, p.328) and O’Malley (1958, p.178).  Nevertheless, there are speculations about the connection between the two (for example, Smith, 1972, p.28; O’Malley, 1957, pp.395-396) and sensory synesthesia has been put forward as an important creative source in poetry (Downey, 1912, pp.490-491; Rayan, 1969), music and film (Brougher et al., 2005), painting (Tomas, 1969; Cazeaux, 1999; Ione, 2004) and artistic development generally (Donnell & Duignan, 1977).

That not all sensory matches work aesthetically—it seems awkward to speak of a loud smell or a salty color—might be significant in suggesting ties to perceptual synesthesia.  Perhaps they have more in common than is usually suspected (Marks, 1982; Day 1996).

Synesthetic metaphor is a “human universal” found in every culture and may be an expression of our shared nature (Pinker, 2002, p.439).  Maurer and Mondloch (2004) suggest that the fact that the cross-modal pairings in synesthesias tend to be the same as the sensory matches manifest in common metaphors may reveal that non-synesthete adults share cross-modal activations with synesthetes, and that synesthesia is a normal feature of early development.  Matey suggests that this lends credibility to the view that the cross-wiring present in synesthetes and non-synesthetes differs only in degree, so that we may draw conclusions about the types of representational contents possible in normal perceivers’ experiences based on the perceptual contents of synesthetes.

9. Synesthesia and Creativity

Ramachandran and Hubbard, among others, have been developing a number of hypotheses about the explanatory value of synesthesia for creativity, the nature of metaphor, and even the origins of language (2001b, 2003a; see also Mulvenna, 2007; Hunt, 2005).  Like synesthesia, creativity seems to consist in “linking two seemingly unrelated realms in order to highlight a hidden deep similarity” (Ramachandran & Hubbard, 2001b, p.17).  Ramachandran and Hubbard (2001b) conjecture that greater connectivity (or perhaps the absence of inhibitory processes) between functionally discrete brain regions might facilitate creative mappings between concepts, experiences, and behaviors in both artists and synesthetes.  These ideas are controversial, and although there is some evidence that synesthetes are more likely to be artists (for example, Ward et al., 2008; Rothen & Meier, 2010), the links between synesthesia and creativity remain tentative and conjectural.

10. References and Further Reading

  • Alter, T. (2006). Does synesthesia undermine representationalism? Psyche, 12(5).
  • Asher, J.E., Lamb, J., Brocklebank, D., Cazier, J., Maestrini, E., Addis, L., … Monaco, A. (2009). A whole-genome scan and fine-mapping linkage study of auditory-visual synesthesia reveals evidence of linkage to chromosomes. American Journal of Human Genetics, 84, 279-285.
  • Baron-Cohen, S. (1996). Is there a normal phase of synaesthesia in development? Psyche, 2(27).
  • Baron-Cohen, S., Wyke, M.A., & Binnie, C. (1987). Hearing words and seeing colours: An experimental investigation of a case of synaesthesia. Perception, 16(6), 761-767.
  • Baron-Cohen, S., Harrison, J., Goldstein, L., & Wyke, M.A. (1993). Coloured speech perception: Is synaesthesia what happens when modularity breaks down? Perception, 22, 419-426.
  • Beck, J. (1966). Effect of orientation and of shape similarity on perceptual grouping. Perception and Psychophysics, 1, 300-302.
  • Brougher, K., Mattis, O., Strick, J., Wiseman, A., & Zilczer, J. (2005). Visual music: Synaesthesia in art and music since 1900. London: Thames and Hudson.
  • Cazeaux, C. (1999). Synaesthesia and epistemology in abstract painting. British Journal of Aesthetics, 39(3), 241-251.
  • Chalmers, D. (2004). The representational character of experience. In B. Leiter (Ed.), The Future for Philosophy (pp.153-181). Oxford: Clarendon Press.
  • Critchley, E.M.R. (1997). Synaesthesia: Possible mechanisms. In S. Baron-Cohen & J. Harrison (Eds.), Synaesthesia: Classical and contemporary readings (pp.259-268). Cambridge, Massachusetts: Blackwell.
  • Cytowic, R.E. (1989). A union of the senses. New York: Springer-Verlag.
  • Cytowic, R.E. (1997). Synesthesia: Phenomenology and neuropsychology: A review of current knowledge. In S. Baron-Cohen & J. Harrison (Eds.), Synaesthesia: Classical and contemporary readings (pp.17-39). Cambridge, Massachusetts: Blackwell.
  • Cytowic, R.E., & Eagleman, D. (2009). Wednesday is indigo blue: Discovering the brain of synesthesia. Cambridge: The MIT Press.
  • Day, S.A. (1996). Synaesthesia and synaesthetic metaphor. Psyche, 2(32).
  • Day, S.A. (2005). Some demographic and socio-cultural aspects of synesthesia. In L. Robertson & N. Sagiv (Eds.), Synesthesia: Perspectives from cognitive neuroscience (pp.11-33). Oxford: Oxford University Press.
  • Dixon, M.J., Smilek, D., Cudahy, C., & Merikle, P.M. (2000). Five plus two equals yellow. Nature, 406, 365.
  • Dixon, M.J., Smilek, D., & Merikle, P.M. (2004). Not all synaesthetes are created equal: Projector versus associator synaesthetes. Cognitive, Affective & Behavioral Neuroscience, 4(3), 335-343.
  • Dixon, M.J., Smilek, D., Duffy, P.L., Zanna, M.P., & Merikle, P.M. (2006). The role of meaning in grapheme-colour synaesthesia. Cortex, 42(2), 243-252.
  • Donnell, C.A., & Duignan, W. (1977). Synaesthesia and aesthetic education. Journal of Aesthetic Education, 11, 69-85.
  • Downey, J.E. (1912). Literary Synesthesia. The Journal of Philosophy, Psychology and Scientific Methods, 9(18), 490-498.
  • Dretske, F. (1995). Naturalizing the Mind. Cambridge, MA: The MIT Press.
  • Edquist, J., Rich, A.N., Brinkman, C., & Mattingley, J.B. (2006). Do synaesthetic colours act as unique features in visual search? Cortex, 42(2), 222-231.
  • Fish, W. (2010). Philosophy of perception: A contemporary introduction. New York: Routledge.
  • Fodor, J. (1974). Special sciences, or the disunity of science as a working hypothesis. Synthese, 28, 97-115.
  • Fodor, J. (1983). Modularity of mind: An essay on faculty psychology. Cambridge, MA: MIT Press.
  • Galton, F. (1880). Visualized numerals. Nature, 22, 494-495.
  • Galton, F. (1883). Inquiries into human faculty and its development. Dent & Sons: London.
  • Gatzia, D.E. (2008). Martian colours. Philosophical Writings, 37, 3-16.
  • Gray, J.A. (2003). How are qualia coupled to functions? Trends in Cognitive Sciences, 7(5), 192-194.
  • Gray, J.A. (2004). Consciousness: Creeping up on the hard problem. Oxford: Oxford University Press.
  • Gray, J.A., Williams, S.C.R., Nunn, J., & Baron-Cohen, S. (1997). Possible implications of synaesthesia for the question of consciousness. In S. Baron-Cohen & J. Harrison (Eds.), Synaesthesia: Classic and contemporary readings (pp.173-181). Cambridge, MA: Blackwell.
  • Gray, J.A. (1998).  Creeping up on the hard question of consciousness. In S. Hameroff, A. Kaszniak & A. Scott (Eds.), Toward a science of consciousness II: The second Tucson discussions and debates (pp.279-291). Cambridge, MA: The MIT Press.
  • Gray, J.A., Nunn J., & Chopping S. (2002). Implications of synaesthesia for functionalism: Theory and experiments. Journal of Consciousness Studies, 9(12), 5-31.
  • Gray, J.A., Parslow, D.M., Brammer, M.J., Chopping, S.M., Vythelingum, G.N., & Ffytche, D.H. (2006). Evidence against functionalism from neuroimaging of the alien colour effect in synaesthesia. Cortex, 42(2), 309-318.
  • Gray, R. (2001a). Synaesthesia and misrepresentation: A reply to Wager. Philosophical Psychology, 14(3), 339-346.
  • Gray, R. (2001b). Cognitive modules, synaesthesia and the constitution of psychological natural kinds. Philosophical Psychology, 14(1), 65-82.
  • Gray, R. (2004). What synaesthesia really tells us about functionalism. Journal of Consciousness Studies, 11(9), 64-69.
  • Gray, R. (2011). On the nature of the senses. In F. Macpherson (Ed.), The Senses: Classic and contemporary philosophical perspectives, pp.243-260. New York: Oxford University Press.
  • Götlind, E. (1957). The appreciation of poetry: A proposal of certain empirical inquiries. The Journal of Aesthetics and Art Criticism, 15(3), 322-330.
  • Grossenbacher, P.G., & Lovelace, C.T. (2001). Mechanisms of synesthesia: Cognitive and physiological constraints. Trends in Cognitive Sciences, 5(1), 36-42.
  • Hardcastle, V.G. (1997). When a pain is not. The Journal of Philosophy, 94(8), 381-409.
  • Harrison, J.E. (2001). Synaesthesia: The strangest thing. New York: Oxford University Press.
  • Harrison, J.E., & Baron-Cohen, S. (1997). Synaesthesia: A review of psychological theories. In S. Baron-Cohen & J. Harrison (Eds.), Synaesthesia: Classic and contemporary readings (pp.109-122). Cambridge, MA: Blackwell.
  • Hellman, G. (1977). Symbol systems and artistic styles. The Journal of Aesthetics and Art Criticism, 35(3), 279-292.
  • Hunt, H. (2005). Synaesthesia, metaphor, and consciousness: A cognitive-developmental perspective. Journal of Consciousness Studies, 12(12), 26-45.
  • Hurley, S., & Noë, A. (2003a). Neural plasticity and consciousness. Biology and Philosophy, 18, 131-168.
  • Hurley, S., & Noë, A. (2003b). Neural plasticity and consciousness: Reply to Block. Trends in Cognitive Sciences, 7(1), 342.
  • Ione, A. (2004). Klee and Kandinsky: Polyphonic painting, chromatic chords and synaesthesia. Journal of Consciousness Studies, 11(3-4), 148-158.
  • Keeley, B.L. (2002). Making sense of the senses: Individuating modalities in humans and other animals. The Journal of Philosophy, 99(1), 5-28.
  • Kim, C-Y., Blake, R., & Palmeri, T.J. (2006). Perceptual interaction between real and synesthetic colors. Cortex, 42, 195-203.
  • Kadosh R.C., Henik, A., Catena, A., Walsh, V., & Fuentes, L.J. (2009). Induced cross-modal synaesthetic experiences without abnormal neuronal connections. Psychological Science, 20(2), 258-265.
  • Kohler, I. (1964). Formation and transformation of the perceptual world. Psychological Issues 3(4, Monogr. No. 12), 1-173.
  • Lycan, W. (1987). Consciousness. Cambridge, MA: The MIT Press.
  • Lycan, W. (2006). Representational theories of consciousness. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy.
  • Luria, A.R. (1968). The mind of a mnemonist. New York: Basic Books.
  • Macpherson, F. (2007). Synaesthesia, functionalism and phenomenology. In M. de Caro, F. Ferretti & M. Marraffa (Eds.), Cartographies of the mind: Philosophy and psychology in intersection series: Studies in brain and mind (Vol.4, pp.65-80). Dordrecht, The Netherlands: Springer.
  • Marks, L.E. (1982). Synesthetic perception and poetic metaphor. Journal of experimental psychology: Human perception and performance, 8(1): 15-23.
  • Masson, D.I. (1953). Vowel and consonant patterns in poetry. The Journal of Aesthetics and Art Criticism, 12(2), 213-227.
  • Maurer, D. (1993). Neonatal synesthesia: Implications for the processing of speech and faces. In B. de Boysson-Bardies, S. de Schonen, P. Jusczyk, P. Mcneilage & J. Morton (Eds.), Developmental neurocognition: Speech and face processing in the first year of life (pp.109-124). Dordrecht: Kluwer.
  • Maurer, D., & Mondloch, C. (2004). Neonatal synesthesia: A re-evaluation. In L. Robertson & N. Sagiv (Eds.), Attention on Synesthesia: Cognition, Development and Neuroscience, (pp. 193-213). Oxford: Oxford University Press.
  • Meier, B., & Rothen, N. (2009). Training grapheme-colour associations produces a synaesthetic Stroop effect, but not a conditioned synaesthetic response. Neuropsychologia, 47(4), 1208-1211.
  • Mulvenna, C.M. (2007). Synaesthesia, the arts and creativity: A neurological connection. Frontiers of Neurology and Neuroscience, 22, 206-222.
  • Noë, A., & Hurley, S. (2003). The deferential brain in action. Trends in Cognitive Sciences, 7(5), 195-196.
  • O’Callaghan, C. (2008). Seeing what you hear: Cross-modal illusions and perception. Philosophical Issues, 18(1), 316-338.
  • O’Malley, G. (1957). Literary synesthesia. The Journal of Aesthetics and Art Criticism, 15(4), 391-411.
  • O’Malley, G. (1958). Shelley’s “air-prism”: The synesthetic scheme of “Alastor.” Modern Philology, 55(3), 178-187.
  • Paulesu, E., Harrison, J., Baron-Cohen, S., Watson, J.D.G., Goldstein, L., Heather, J., … Frith, C.D. (1995). The physiology of coloured hearing: A PET activation study of colour-word synaesthesia. Brain, 118, 661-676.
  • Pettit, P. (2003). Looks red. Philosophical Issues, 13(1), 221-252.
  • Pinker, S. (2002). The blank slate: The modern denial of human nature. New York: Viking.
  • Proulx, M.J. (2010). Synthetic synaesthesia and sensory substitution. Consciousness and Cognition, 19(1), 501-503.
  • Ramachandran, V.S., & Hubbard, E.M. (2000). Number-color synaesthesia arises from cross-wiring in the fusiform gyrus. Society for Neuroscience Abstracts, 30, 1222.
  • Ramachandran, V.S., & Hubbard, E.M. (2001a). Psychophysical investigations into the neural basis of synaesthesia. Proceedings of the Royal Society of London B, 268, 979-983.
  • Ramachandran, V.S., & Hubbard, E.M. (2001b). Synaesthesia: A window into perception, thought and language. Journal of Consciousness Studies, 8(12), 3-34.
  • Ramachandran, V.S., & Hubbard, E.M. (2003a). Hearing colors, tasting shapes. Scientific American, April, 52-59.
  • Ramachandran, V.S., & Hubbard, E.M. (2003b). The phenomenology of synaesthesia. Journal of Consciousness Studies, 10(8), 49-57.
  • Rang, H.P., & Dale, M.M. (1987). Pharmacology. Edinburgh: Churchill Livingstone.
  • Rayan, K. (1969). Edgar Allan Poe and suggestiveness. The British Journal of Aesthetics, 9, 73-79.
  • Revonsuo, A. (2001). Putting color back where it belongs. Consciousness and Cognition, 10(1), 78-84.
  • Rich, A.N., Bradshaw, J.L., & Mattingley, J.B. (2005). A systematic, large-scale study of synaesthesia: Implications for the role of early experience in lexical-colour associations. Cognition, 98, 53-84.
  • Rosenberg, G. (2004). A place for consciousness: Probing the deep structure of the natural world. Oxford: Oxford University Press.
  • Ross, P.W. (2001). Qualia and the senses. The Philosophical Quarterly, 51(205), 495-511.
  • Rothen, N., & Meier, B. (2010). Higher prevalence of synaesthesia in art students. Perception, 39, 718-720.
  • Segal, G.M.A. (1997). Synaesthesia: Implications for modularity of mind. In S. Baron-Cohen & J. Harrison (Eds.), Synaesthesia: Classic and contemporary readings (pp.211-223). Cambridge, MA: Blackwell.
  • Simner, J., Sagiv, N., Mulvenna, C., Tsakanikos, E., Witherby, S., Fraser, C., … Ward, J. (2006). Synaesthesia: The prevalence of atypical cross-modal experiences. Perception, 35, 1024-1033.
  • Smilek, D., Dixon, M.J., Cudahy, C., & Merikle, P.M. (2001). Synaesthetic photisms influence visual perception. Journal of Cognitive Neuroscience, 13, 930-936.
  • Smilek, D., Dixon, M.J., Cudahy, C., & Merikle, P.M. (2002). Synesthetic color experiences influence memory. Psychological Science, 13(6), 548-552.
  • Smilek, D., Dixon M.J., & Merikle P.M. (2003). Synaesthetic photisms guide attention. Brain & Cognition, 53, 364-367.
  • Ter Hark, M. (2009). Coloured vowels: Wittgenstein on synaesthesia and secondary meaning. Philosophia: Philosophical Quarterly of Israel, 37(4), 589-604.
  • Tomas, V. (1969). Kandinsky’s theory of painting. British Journal of Aesthetics, 9, 19-38.
  • Treisman, A. (1982). Perceptual grouping and attention in visual search for features and for objects. Journal of Experimental Psychology: Human perception and performance, 8(2), 194-214.
  • Tye, M. (1995). Ten problems of consciousness: A representational theory of the phenomenal mind. Cambridge, MA: The MIT Press.
  • Ueda, M. (1963). Basho and the poetics of “Haiku.” The Journal of Aesthetics and Art Criticism, 21(4), 423-431.
  • Wager, A. (1999). The extra qualia problem: Synaesthesia and Representationalism. Philosophical Psychology, 12(3), 263-281.
  • Wager, A. (2001). Synaesthesia misrepresented. Philosophical Psychology, 14(3), 347-351.
  • Ward, J., & Simner, J. (2005). Is synaesthesia an X-linked dominant trait with lethality in males? Perception, 34(5), 611-623.
  • Ward, J., & Sagiv, N. (2007). Synaesthesia for finger counting and dice patterns: A case of higher synaesthesia? Neurocase, 13(2), 86-93.
  • Ward, J., Thompson-Lake, D., Ely, R., & Kaminski, F. (2008). Synaesthesia, creativity and art: What is the link? British Journal of Psychology, 99, 127-141.
  • Wittgenstein, L. (1958/1994). Philosophical investigations. Oxford: Blackwell.


Author Information

Sean Allen-Hermanson
Florida International University
U. S. A.


Jennifer Matey
Florida International University
U. S. A.

René Girard (1923—2015)

René Girard’s thought defies classification. He wrote from the perspective of a wide variety of disciplines: Literary Criticism, Psychology, Anthropology, Sociology, History, Biblical Hermeneutics and Theology. Although he rarely called himself a philosopher, many philosophical implications can be derived from his work. Girard’s work is above all concerned with Philosophical Anthropology (that is, with the question ‘What is it to be human?’), and draws from many disciplinary perspectives. Over the years he developed a mimetic theory. According to this theory, human beings imitate each other, and this eventually gives rise to rivalries and violent conflicts. Such conflicts are partially solved by a scapegoat mechanism, but ultimately, Christianity is the best antidote to violence.

Perhaps Girard’s lack of a specific disciplinary affiliation has contributed to a slight marginalization of his work among contemporary philosophers. Girard does not enjoy the renown of better-known contemporary French philosophers (for example, Derrida, Foucault, Deleuze and Lyotard), but his work is becoming increasingly recognized in the humanities, and his commitment as a Christian thinker has given him prominence among theologians.

Table of Contents

  1. Life
  2. Mimetic Desire
    1. External Mediation
    2. Internal Mediation
    3. Metaphysical Desire
    4. The Oedipus Complex
  3. The Scapegoat Mechanism
    1. The Origins of Culture
    2. Religion
    3. Ritual
    4. Myth
    5. Prohibitions
  4. The Uniqueness of the Bible and Christianity
    1. The Hebrew Bible
    2. The New Testament
    3. Nietzsche’s Criticism of Christianity
    4. Apocalypse and Contemporary Culture
  5. Theological Implications
    1. God
    2. The Incarnation
    3. Satan
    4. Original Sin
    5. Atonement
  6. Criticisms
    1. Mimetic Theory Claims Too Much
    2. The Origins of Culture are Not Verifiable
    3. Girard Exaggerates the Contrast Between Myths and the Bible
    4. Christian Uniqueness Does Not Imply a Divine Origin
    5. Lack of a Precise Scientific Language
  7. References and Further Reading
    1. Primary
    2. Secondary

1. Life

René Girard was born on December 25, 1923, in Avignon, France. The son of a local archivist, he went on to follow in his father’s footsteps: he studied at the École Nationale des Chartes in Paris and specialized in medieval studies. In 1947, Girard took the opportunity to emigrate to the United States, where he pursued a doctorate at Indiana University. His dissertation was on American opinions of France. Although his later work had little to do with his doctoral dissertation, Girard kept alive an interest in French affairs.

After completing his doctorate, Girard began to take an interest in Jean-Paul Sartre’s work. Although on a personal level Girard remained very much interested in Sartre’s philosophy, it had little influence on his thought. Girard settled in the United States and taught at various institutions (Indiana University, the State University of New York at Buffalo, Duke, Johns Hopkins, Bryn Mawr and Stanford) until his retirement in 1995. He died in 2015.

At the beginning of his career as a lecturer, Girard was assigned to teach courses on European literature; by his own admission, he was not at all familiar with the great works of the European novelists. As Girard read these novels in preparation for the course, he became especially engaged with the work of five novelists in particular: Cervantes, Stendhal, Flaubert, Dostoyevsky and Proust.

His first book, Mensonge Romantique et Vérité Romanesque (1961), is a literary study of the works of these great novelists. Until that time, Girard was a self-declared agnostic. As he researched the religious conversions of some of Dostoyevsky’s characters, he felt he had lived through a similar experience, and he converted to Christianity. From then on, Girard was a committed and practicing Roman Catholic.

After the publication of his first book, Girard turned his attention to ancient and contemporary sacrificial rituals, as well as to Greek myth and tragedy. This led to another important book, La Violence et le Sacré (1972), for which he gained much recognition. On a personal level he was a committed Christian, but his Christian views were not publicly expressed until the publication of Des Choses Cachées Depuis la Fondation du Monde (1978), his magnum opus and the best systematization of his thought. Thereafter, Girard wrote books expanding various aspects of his work. In 2005, Girard was elected to the Académie Française, a very important distinction among French intellectuals.

2. Mimetic Desire

Girard’s fundamental concept is ‘mimetic desire’. Ever since Plato, students of human nature have highlighted the great mimetic capacity of human beings; that is, we are the species most adept at imitation. Indeed, imitation is a basic mechanism of learning (we learn insofar as we imitate what our teachers do), and neuroscientists increasingly report that our neural structure supports imitation very proficiently (for example, through ‘mirror neurons’).

However, according to Girard, most thinking devoted to imitation pays little attention to the fact that we also imitate other people’s desires, and, depending on how this happens, imitation may lead to conflicts and rivalries. If people imitate each other’s desires, they may wind up desiring the very same things; and if they desire the same things, they may easily become rivals as they reach for the same objects. Girard usually distinguishes ‘imitation’ from ‘mimesis’. The former term usually denotes the positive aspect of reproducing someone else’s behavior, whereas the latter usually implies the negative aspect of rivalry. And because ‘imitation’ is commonly understood to refer to mere mimicry, Girard prefers ‘mimesis’ for the deeper, instinctive response that human beings have to each other.

a. External Mediation

Girard calls ‘mediation’ the process by which a person influences the desires and preferences of another person. Thus, whenever a person’s desire is imitated by someone else, that person becomes a ‘mediator’ or ‘model’. Girard points out that this is very evident in advertising and marketing techniques: whenever a product is promoted, some celebrity is used to ‘mediate’ consumers’ desires; in a sense, the celebrity invites people to imitate him in his desire for the product. The product is promoted not on the basis of its inherent qualities, but simply because some celebrity desires it.

Girard highlights this type of relationship in his literary studies, as for example in his analysis of Don Quixote. Don Quixote’s desire is mediated by Amadis de Gaula: Don Quixote becomes a knight-errant not because he autonomously desires to, but in order to imitate Amadis. Nevertheless, Amadis and Don Quixote are characters on different planes. They will never meet, and so they never become rivals.

The same can be said of the relation between Sancho and Don Quixote. Sancho desires to be governor of an island mostly because Don Quixote has suggested to him that this is what he should desire. Again, although they interact continuously, Sancho and Don Quixote belong to two different worlds: Don Quixote is a very complex man, Sancho simple in the extreme. Girard calls ‘external mediation’ the situation in which the mediator and the person mediated are on different planes. Don Quixote is an ‘external mediator’ to Sancho, inasmuch as he mediates Sancho’s desires ‘from the outside’; that is, Don Quixote never becomes an obstacle to Sancho’s attempts to satisfy his desires.

External mediation does not carry the risk of rivalry between subjects, because they belong to different worlds. Although the source of Sancho’s desire to be governor of an island is in fact Don Quixote, the two never desire the same object: Don Quixote desires things Sancho does not desire, and vice versa. Hence, they never become rivals. Girard believes external mediation is a frequent feature of the psychology of desire: from our earliest phase as infants, we look up in imitation to our elders, and eventually most of what we desire is mediated by them.

b. Internal Mediation

In ‘internal mediation’, the mediator and the person mediated are no longer abysmally separated and hence no longer belong to different worlds. In fact, they come to resemble each other to the point that they end up desiring the same things. But precisely because they are no longer in different worlds and now reach for the same objects of desire, they become rivals. We are all aware that competition is fiercer when competitors resemble each other.

Thus, in internal mediation the subject imitates the model’s desires but, unlike in external mediation, ultimately falls into rivalry with the model/mediator. Consider this example: a toddler imitates his father in his occupations and desires to pursue his father’s career when he grows up. This will hardly cause any rivalry (although it may account for Freud’s Oedipus Complex; see section 2.d). This is, as we have seen, a case of external mediation. But now consider a PhD candidate who learns a great deal from his supervisor and seeks to imitate every aspect of his work, and even of his life. Eventually, they may become rivals, especially if both are seeking scholarly recognition. Or consider the case of a toddler playing with a toy, and another toddler who, out of imitation, desires that very same toy: they will eventually become rivals for control of the toy. This is ‘internal mediation’; that is, the person is mediated from ‘inside’ his own world, and therefore may easily become his mediator’s rival. This rivalry often has tragic consequences, and Girard considers it a major theme of the modern novel. In Girard’s view, this literary theme is in fact a portrait of human nature: very often, people desire something as a result of imitating others, but eventually this imitation leads to rivalry with the very person imitated in the first place.

c. Metaphysical Desire

In Girard’s view, mimetic desire may grow to such a degree that a person eventually desires to be her mediator. Again, advertising is illustrative: sometimes consumers do not just desire a product for its inherent qualities, but rather desire to be the celebrity who promotes it. Girard considers that a person may desire an object only as part of a larger desire: the desire to be her mediator. Girard calls this desire to be another person ‘metaphysical desire’. Acquisitive desire thus leads to metaphysical desire, and the original object of desire becomes a token representing the “larger” desire of having the being of the model/rival.

Whereas external mediation does not lead to rivalries, internal mediation does. But metaphysical desire leads a person not just to rivalry with her mediator; it leads to total obsession with, and resentment of, the mediator. For the mediator becomes the main obstacle to the satisfaction of the person’s metaphysical desire. Inasmuch as the person desires to be her mediator, that desire can never be satisfied, for nobody can be someone else. Eventually, the person developing a metaphysical desire comes to appreciate that the main obstacle to being the mediator is the mediator himself.

According to Girard, metaphysical desire can be a very destructive force, as it promotes resentment against others. Girard considers the anti-hero of Dostoyevsky’s Notes from the Underground the quintessential victim of metaphysical desire: the unnamed character eventually goes on a crusade against the world, disillusioned with everything around him. Girard believes that the origin of his alienation is his dissatisfaction with himself and his obsession with being someone else, an impossible task.

d. The Oedipus Complex

Girard’s career has been mostly devoted to literary criticism and the analysis of fictional characters. Girard believes that the great modern novelists (such as Stendhal, Flaubert, Proust and Dostoyevsky) understood human psychology better than the modern field of Psychology does. As a complement to his literary criticism, he developed a psychology in which the concept of ‘mimetic desire’ is central. Inasmuch as human beings constantly seek to imitate others, and most desires are in fact borrowed from other people, Girard believes it is crucial to study how personality relates to others.

Departing from the main premise of mimetic desire, Girard has sought to reformulate some of psychology’s long-held assumptions, in particular some of Freud’s concepts. Although Girard has been careful to warn that Freud’s thought may be highly misleading in many ways, he has engaged with Freud’s work on several fronts. Girard admits that Freud and his followers had some good initial intuitions, but he criticizes Freudian psychoanalytic theory on the grounds that it tends to overlook the role that other individuals play in the development of personality. In other words, psychoanalysis tends to assume that human beings are largely autonomous, and hence do not desire in imitation of others.

Girard grants that Freud was a superb observer but not a good interpreter. And, in a sense, Girard accepts that there is such a thing as the Oedipus Complex: the child will eventually come to have an unconscious sexual desire for his mother and a desire to kill his father; and indeed, perhaps this complex endures into adulthood. But Girard considers the Oedipus Complex the result of a mechanism very different from the one outlined by Freud.

According to Freud, the child has an innate sexual desire for the mother and eventually discovers that the father is an obstacle to the satisfaction of that desire. Girard, on the other hand, reinterprets the Oedipus Complex in terms of mimetic desire: the child identifies with his father and imitates him. But inasmuch as he imitates his father, the child imitates his father’s sexual desire for the mother. The father then becomes the child’s model and rival, and that explains the ambivalent feelings so characteristic of the Oedipus Complex.

3. The Scapegoat Mechanism

In Girard’s psychology, internal mediation and metaphysical desire eventually lead to rivalry and violence. Imitation eventually erases the differences among human beings, and inasmuch as people become similar to each other, they desire the same things, which leads to rivalries and a Hobbesian war of all against all. These rivalries soon bear the potential to threaten the very existence of communities. Thus, Girard asks: how is it possible for communities to overcome their internal strife?

Whereas the philosophers of the 18th century would have agreed that communal violence comes to an end through a social contract, Girard believes that, paradoxically, the problem of violence is frequently solved with a lesser dose of violence. When mimetic rivalries accumulate, tensions grow ever greater until they reach a paroxysm. When violence is at the point of threatening the existence of the community, very frequently a bizarre psychosocial mechanism arises: communal violence is all of a sudden projected upon a single individual. Thus, people who were formerly struggling against one another now unite their efforts against someone chosen as a scapegoat. Former enemies become friends as they communally participate in the execution of violence against a specified enemy.

Girard calls this process ‘scapegoating’, an allusion to the ancient religious ritual in which communal sins were metaphorically imposed upon a he-goat, and this beast was eventually abandoned in the desert or sacrificed to the gods (in the Hebrew Bible, this is prescribed especially in Leviticus 16). The person who receives the communal violence is a ‘scapegoat’ in this sense: her death or expulsion serves to regenerate communal peace and restore relationships.

However, Girard considers it crucial that this process be unconscious in order to work. The victim must never be recognized as an innocent scapegoat (indeed, Girard considers that, prior to the rise of Christianity, ‘innocent scapegoat’ was virtually an oxymoron; see section 4.b below); rather, the victim must be thought of as a monstrous creature that transgressed some prohibition and deserved to be punished. In such a manner, the community deceives itself into believing that the victim is the culprit of the communal crisis, and that the elimination of the victim will eventually restore peace.

a. The Origins of Culture

Girard believes that the scapegoat mechanism is the very foundation of cultural life. Natural man became civilized not through some sort of rational deliberation embodied in a social contract (as it was fashionable to think among 18th-century philosophers), but rather through the repetition of the scapegoat mechanism. And, much as many philosophers of the 18th century believed that their descriptions of the state of nature were in fact historical, Girard believes that Paleolithic men continually used the scapegoat mechanism, and that it was precisely this feature that allowed them to lay the foundations of culture and civilization.

In fact, Girard believes that this process goes farther back in the evolution of Homo sapiens: hominids were probably already engaged in scapegoating, and it was precisely scapegoating that allowed a minimum of communal peace among early hominid groups. Hominids could eventually develop their main cultural traits thanks to the efficiency of the scapegoat mechanism: the murder of a victim brought forth communal peace, and this peace promoted the flourishing of the most basic cultural institutions.

Once again, Girard takes deep inspiration from Freud but reinterprets his observations. Freud’s Totem and Taboo presents the thesis that the origins of culture are founded upon the original murder of a father figure by his sons. Girard considers Freud’s observations only partially correct. Freud is right in pointing out that culture is indeed founded upon a murder, but this murder is not due to the oedipal themes Freud was so fond of. Instead, the founding murder is due to the scapegoat mechanism: the horde murdered a victim (not necessarily a father figure) in order to project upon her all the violence that was threatening the very existence of the community.

However, since mimetic desire has been a constant among human beings, scapegoating has never been entirely efficient. Human communities have therefore needed to have recourse to the scapegoat mechanism periodically in order to maintain social peace.

b. Religion

According to Girard, the scapegoat mechanism brings about unexpected peace. This moment is so marvelous that it soon acquires a religious overtone, and the victim is immediately consecrated. Girard stands in the French sociological tradition of Durkheim, who considered that religion essentially accomplishes the function of social integration. In Girard’s view, inasmuch as the deceased victim brings forth communal peace and restores social order and integration, she becomes sacred.

At first, while living, victims are considered monstrous transgressors that deserve to be punished. But once they die, they bring peace to the community. Then they are no longer monsters, but rather gods. Girard highlights that, in most primitive societies, there is a deep ambivalence towards deities: they hold high virtues, but they are also capable of performing monstrous deeds. This is because, according to Girard, primitive gods are sanctified victims.

In such a manner, all cultures are founded upon a religious basis. The function of the sacred is to protect the stability of communal peace and, to do this, it ensures that the scapegoat mechanism continues to produce its effects through the main religious institutions.

c. Ritual

Girard considers ritual the earliest cultural and religious institution. In Girard’s view, ritual is a reenactment of the original scapegoating murder. Although, as anthropologists are quick to point out, rituals are very diverse, Girard considers that the most widespread form of ritual is sacrifice. When a victim is ritually killed, Girard believes, the community is commemorating the original event that brought about peace.

The original victim was most likely a member of the community, and Girard considers that the earliest sacrificial rituals probably employed human victims. Aztec human sacrifice may have shocked Western conquistadors and missionaries upon its discovery, but it was a cultural remnant of a once widespread ancient practice. Eventually, rituals promoted sacrificial substitution, and animals were employed instead. In fact, Girard considers that hunting and the domestication of animals arose out of the need to continually reenact the original murder with substitute animal victims.

d. Myth

Following the old school of European anthropologists, Girard believes that myths are the narrative corollary of ritual. And, inasmuch as rituals are mainly a reenactment of the original murder, myths also recapitulate the scapegoating themes.

Girard’s crucial point about the necessary unconsciousness of scapegoating must be kept in mind here: in order for this mechanism to work, its participants must not recognize it as such. That is to say, the victim must never appear as what it really is: a scapegoat no guiltier of the disturbance than other members of the community.

The way to assure that scapegoats are not recognized as what they really are is by distorting the story of the events that led to their death. This is accomplished by telling the story from the perspective of the scapegoaters. Myths will usually tell a story of someone doing a terrible thing and, thus, deserving to be punished. The victim’s perspective will never be incorporated into the myth, precisely because this would spoil the psychological effect of the scapegoating mechanism. The victim will always be portrayed as a culprit whose deeds brought about social chaos, but whose death or expulsion brought about social peace.

Girard’s most recurrent example of a myth is that of Oedipus. According to the myth, Oedipus was expelled from Thebes because he murdered his father and married his mother. But, according to Girard, the myth should be read as a chronicle written by a community that chose a scapegoat, blamed him for some crime, punished him and, once he was expelled, recovered its peace. Under Girard’s interpretation, the plague in Thebes is suggestive of a social crisis. To solve the crisis, Oedipus is selected as a scapegoat. But he is never presented as such: quite the contrary, he is accused of parricide and incest, and this justifies his persecution. Thus, Oedipus’ perspective as a victim is suppressed from the myth.

Furthermore, Girard believes that, as myths evolve, later versions tend to dissimulate the scapegoating violence (for example, instead of presenting a victim who dies by drowning, the myth will simply claim that the victim went to live at the bottom of the sea), in order to avoid arousing compassion for the victim. Indeed, Girard considers that the evolution of myths may even reach a point where no violence is present at all. But, Girard insists, all myths are founded upon violence, and if no violence is found in a myth, it must be because the community made it disappear.

Myths are typical of archaic societies, but Girard thinks that modern societies have the equivalent of myths: persecution texts. Especially during the witch-hunts and persecution of Jews during the Middle Ages, there were plenty of chronicles written from the perspective of the mobs and witch-hunters. These texts told the story of a crisis that appeared as the consequence of some crime committed by a person or a minority. The author of the chronicle is part of the persecuting mob, as he projects upon the victim all the typical accusations, and justifies the mob’s actions. Modern lynching accounts are another prominent example of such persecutory dynamics.

e. Prohibitions

Inasmuch as, under the scapegoaters’ view, there are no innocent scapegoats, an accusation must be made. In the case of Oedipus, he was accused of parricide and incest, and these are recurrent accusations used to justify persecution (Marie Antoinette, for example, faced similar charges), but many other accusations are found as well (blood libels, witchcraft, and so forth). After the victim is executed, Girard claims, a prohibition falls upon the action allegedly perpetrated by the scapegoat. By instituting it, the scapegoaters believe they restore social order. Thus, along with ritual and myth, prohibitions derive from the scapegoat mechanism.

Girard also considers that, prior to the scapegoat mechanism, communities go through a process he calls a ‘crisis of differences’. Mimetic desire eventually makes every member resemble every other, and this lack of differentiation generates chaos. Traditionally, this indifferentiation is represented through various symbols typically associated with chaos and disorder (plagues, monstrous animals, and so forth). The death of the scapegoat restores order and, by extension, differentiation: everything returns to its place. In such a manner, social differentiation, and order in general, is also derived from the scapegoat mechanism.

4. The Uniqueness of the Bible and Christianity

Girard’s Christian apologetics begins with a comparison of myths and the Bible. According to Girard, whereas myths are caught up in the dynamics of the scapegoat mechanism, telling the foundational stories from the perspective of the scapegoaters, the Bible contains plenty of stories told from the perspective of the victims.

In myths, those who are collectively executed are presented as monstrous culprits that deserve to be punished. In the Bible, those who are collectively executed are presented as innocent victims who are unfairly accused and persecuted. Thus, Girard recapitulates the old Christian apologetic tradition of insisting upon the Bible’s singularity. But, instead of placing emphasis on the Bible’s popularity, its fulfillment of prophecies, or its consistency, Girard thinks the Bible is unique in its defense of victims.

However, according to Girard, this is not merely a shift in narrative perspective. It is in fact something much more profound. Inasmuch as the Bible presents stories from the perspective of the victims, the Biblical authors reveal something not understood by previous mythological traditions. And, by doing so, they make scapegoating inoperative. Once scapegoats are recognized for what they truly are, the scapegoating mechanism no longer works. Thus, the Bible is a remarkably subversive text, inasmuch as it shatters the scapegoating foundations of culture.

a. The Hebrew Bible

Girard thinks that the Hebrew Bible is a text in travail. Plenty of its stories are still told from the perspective of the scapegoaters and, more importantly, it continually presents a wrathful God who sanctions violence. However, Girard appreciates some important shifts in certain of its narratives, especially when they are compared to myths with similar structures.

For example, Girard contrasts the story of Cain and Abel with the myth of Remus and Romulus. In both stories, there is rivalry between the brothers. In both stories, there is a murder. But, in the Roman myth, Romulus is justified in killing Remus, as the latter transgressed the territorial limits they had earlier agreed upon. In the Biblical story, Cain is never justified in killing Abel. And, indeed, the blood of Abel is evoked as the blood of the innocent victims that have been murdered throughout history, and that God will vindicate.

Girard is also fond of comparing the story of Oedipus with the story of Joseph. Oedipus is accused of incest, and the myth accepts this accusation, thereby justifying his expulsion from Thebes. Joseph is also accused of a quasi-incestuous crime (he allegedly attempted to rape Potiphar’s wife, his de facto stepmother). But the Bible never accepts the accusation.

In Girard’s view, the Hebrew Bible is also crucial in its rejection of ritual sacrifice. Some prophets vehemently denounced the grotesque ritual killing of sacrificial victims, although, of course, the requirement of sacrificial ritual permeates much of the Old Testament. Girard understands this prophetic denunciation as a complementary approach to the defense of victims. The prophets promote a new concept of the divinity: God is no longer pleased with ritual violence. This is evocative of God’s plea in Hosea: “I want mercy, not sacrifices”. Thus, the Hebrew Bible effects a twofold reversal of culture’s violent foundation: on the one hand, it begins to present the foundational stories from the perspective of the victims; on the other hand, it begins to present a God who is not satisfied with violence and, therefore, begins to dissociate the sacred from the violent.

b. The New Testament

Under Girard’s interpretation, the New Testament is the completion of the process that the Hebrew Bible had begun. The New Testament fully endorses the victims’ perspective, and satisfactorily dissociates the sacred from the violent.

The Passion story is central to the New Testament, and it is the complete reversal of the traditional myth’s structure. Amidst a huge social crisis, a victim (Jesus) is persecuted, blamed for some fault, and executed. Even the apostles succumb to the collective pressure and abandon Jesus, tacitly becoming part of the scapegoating crowd. This is emblematized in the story of Peter’s denial of Jesus.

Nevertheless, the evangelists never succumb to the collective pressure of the scapegoating mob. The evangelists maintain Jesus’ innocence throughout the whole story. At last, Jesus is recognized as what he really is: an innocent scapegoat, the Lamb of God led to the slaughter although no fault was in him. According to Girard, this is the completion of the slow process begun in the Hebrew Bible. Once and for all, the New Testament reverses the violent psychosocial mechanism upon which human culture has been founded.

Jesus’ ethical message complements this revelation. Under Girard’s interpretation, humanity has achieved social peace by performing violent acts of scapegoating. Jesus’ solution is far more radical and efficient: turn the other cheek, abstain from violent retribution. Scapegoating is not an efficient means of bringing about peace, as it always depends on the periodic repetition of the mechanism. The real solution lies in the total withdrawal from violence, and that is the bulk of Jesus’ message.

c. Nietzsche’s Criticism of Christianity

Girard is bothered by the possibility that his readers may fail to appreciate the uniqueness of the Bible and Christianity. In this sense, Girard is very critical of classical anthropologists such as Sir James Frazer, who saw the relevance of scapegoating in primitive rituals and myths, but, according to Girard, failed to see that the Christian story is fundamentally different from other scapegoating myths.

Indeed, Girard resents the fact that Christianity is usually considered to be merely one among many other religions. However, ironically, Girard seeks help from a powerful opponent of Christianity: Friedrich Nietzsche. Nietzsche criticized Christianity for its ‘slave morality’; that is, its tendency to side with the weak. Nietzsche recognized that, above other religions, Christianity promoted mercy as a virtue. Nietzsche interpreted this as a corruption of the human vital spirit, and advocated a return to the pre-Christian values of power and strength.

Girard is, of course, opposed to the Nietzschean disdain for mercy and antipathy towards the weak and victims. But, Girard considers Nietzsche a genius, inasmuch as the German philosopher saw what, according to Girard, most people (including the majority of Christians) fail to see: Christianity is unique in its defense of victims. Thus, in a sense, Girard claims that his Christian apologetics is for the most part a reversal of Nietzsche: they both agree that Christianity is singular, but whereas Nietzsche believed this singularity corrupted humanity, Girard believes this singularity is the manifestation of a power that reverses the violent foundations of culture.

d. Apocalypse and Contemporary Culture

Girard acknowledges that, on the surface, not everything in the New Testament is about peace and love. Indeed, there are some frightening passages in Jesus’ preaching, perhaps the most emblematic being “I come not to bring peace, but a sword”. This is part of the apocalyptic worldview prevalent in Jesus’ day. But, much more than that, Girard believes that the apocalyptic teachings found in the New Testament are a warning about future human violence.

Girard considers that, inasmuch as the New Testament overturns the old scapegoating practices, humanity can no longer return to the scapegoat mechanism in order to restore peace. Once victims are revealed as innocent, scapegoating loses its efficacy, and in that sense there is now an even greater threat of violence. According to Girard, Jesus brings a sword not in the sense that he himself will execute violence, but in the sense that, through his work and the influence of the Bible, humanity no longer has the traditional violent means of putting an end to violence. The inefficacy of the scapegoat mechanism will bring even more violence. The way to restore peace is not through the scapegoat mechanism, but rather through the total withdrawal from violence.

Thus, Girard believes that, ironically, Christianity has brought about even more violence. But, once again, this violence is not attributable to Christianity itself, but rather, to the stubbornness of human beings who do not want to follow the Christian admonition and insist on putting an end to violence through traditional scapegoating.

Girard believes that the 20th and 21st centuries are, more than ever, an apocalyptic age. And, once again, he acknowledges a 19th-century German figure as a precursor of these views: Carl von Clausewitz. According to Girard, the great Prussian war strategist realized that modern war would no longer be an honorable enterprise, but rather a brutal activity with the potential to destroy all of humanity. Indeed, Girard believes the 20th and 21st centuries are apocalyptic, but not in the fundamentalist sense. The ‘signs’ of apocalypse are not numerical clues such as 666, but rather signs that humanity has not found an efficient way to put an end to violence; unless the Christian message of repentance and withdrawal from violence is embraced, we are headed towards doomsday: not a Final Judgment brought forth by a punishing God, but rather a doomsday brought about by our own human violence.

5. Theological Implications

Girard claims not to be a theologian, but rather, a philosophical anthropologist. But, echoing Simone Weil, he believes that the gospels, inasmuch as they reveal the nature of human beings, also indirectly reveal the nature of God. Thus, Girard’s work has great implications for theologians, and his work has generated new ways to interpret the traditional Christian doctrines.

a. God

Girard is little concerned with the classical theistic attempts to prove the existence of God (those of Aquinas, Plantinga, Craig and Swinburne, for example). But he does seem to assume that the only way to explain the Bible’s uniqueness, its rejection of scapegoating distortion and its refusal to succumb to the mob’s influence, is to propose the intervention of a higher celestial power. So, in a weak sense, Girard’s apologetic works try to prove that the Bible is divinely inspired and, therefore, that God exists.

More importantly, Girard believes that the Bible reveals that the true God is far removed from violence, whereas gods that sanction violence are false gods, that is, idols. By revealing how human violence works, Girard claims, the Bible reveals that this violence does not come from God; rather, God sympathizes with victims and wants nothing to do with victimizers.

b. The Incarnation

Furthermore, the doctrine of the Incarnation becomes especially important under Girard’s interpretation. God himself becomes incarnate in the person of Jesus in order to become a victim. God is so far removed from aggressors and scapegoaters that He himself becomes a victim in order to show humanity that He sides with innocent victims. Thus, the way to overturn the scapegoat mechanism is not only to tell the stories from the perspective of the victim, but also to tell the story in which the victim itself is God incarnate.

c. Satan

Most liberal contemporary Christians pay little attention to Satan, but Girard wishes to preserve Satan’s relevance. Girard has little patience for the literal mythological interpretation of Satan as a red, horned creature. According to Girard, the concept of Satan and the Devil most frequently referred to in the gospels is what the name etymologically expresses: the opponent, the accuser. In this sense, Satan is the scapegoat mechanism itself (or, perhaps more precisely, the accusing process); that is, the psychological process in which human beings are caught up by the lynching mob, eventually succumb to its influence, and participate in the collective violence against the scapegoat.

Likewise, the Holy Spirit in Girard’s interpretation is the reverse of Satan. Again, Girard appeals to etymology: ‘Paraclete’ etymologically refers to the spirit of defense. Satan accuses victims; the Paraclete mercifully defends them. Thus, the Holy Spirit is understood by Girard as the overturning of the old scapegoating practices.

d. Original Sin

In the old Pelagian-Augustinian debate over original sin, Girard’s work clearly sides with Augustine. Under Girard’s interpretation, there is a twofold sense of original sin: 1) human beings are born with the propensity to imitate each other and, eventually, be led to violence; 2) human culture was laid upon the foundations of violence. Thus, human nature is tainted by an original sin, but it can be saved through repentance materialized in the withdrawal from violence.

The complementary aspect of the original sin debate, that is, free will, has not been tackled by Girard. But, since Girard is a Roman Catholic, it is presumable that he would not accept the Calvinist concepts of total depravity, irresistible grace and predestination. He seems to believe that human beings are born with sin, but have the capacity to do something about it through repentance.

e. Atonement

Girard’s vision of Christianity also brings forth a new interpretation of the doctrine of atonement, that is, the doctrine that Christ died for our sins. Anselm’s traditional account (God’s honor was offended by the sins of mankind and reestablished by the death of His own son) and other traditional interpretations (mankind was kidnapped by the Devil and God offered Christ as a ransom; Jesus died so God could show humanity what He is capable of doing if we do not repent; and so forth) are deemed inadequate by Girard. Under Girard’s interpretation, Jesus saved us by becoming a victim and overturning the scapegoat mechanism once and for all. Thanks to Jesus’ salvific mission, human beings now have the capacity to understand what scapegoats really are, and have the golden opportunity to achieve enduring social peace.

6. Criticisms

An important source of criticism of Girard is his apologetic commitment to Christianity. Many critics argue that he has a tendency to twist interpretations of classical texts and myths in order to favor Christian doctrine. Girard has frequently asserted that he was not a Christian for the early part of his life, but that his work as a humanist eventually drove him to Christianity. Girard has also been viewed with contempt by postmodernist critics who, on the whole, are suspicious of objective truth.

a. Mimetic Theory Claims Too Much

The first point of criticism directed at Girard is that he is too ambitious. His initially plausible interpretations of mimetic psychology and anthropology are eventually transformed into a grandiose theoretical system that attempts to explain every aspect of human nature.

Consequently, his methods have been questioned. His theories regarding mimetic desire are derived not from the careful study of subjects and the implementation of tests, but rather from the reading of works of fiction. On this view, the fact that his theory seems to coincide with what many neuroscientists are telling us about mirror neurons is immaterial: his was just a lucky guess.

The same critique may be extended to his work on the origins of culture. Again, his scapegoating thesis may be plausible, inasmuch as it is easy to find many examples of scapegoating processes in human culture. But to claim that all human culture ultimately relies on scapegoating, and that the fundamental cultural institutions (myths, rituals, hunting, the domestication of animals, and so forth) are ultimately derived from an original murder, is perhaps too much.

Thus, in a sense, Girard’s work is subject to the same criticism leveled against many of the great theoretical systems of the human sciences in the 19th century (those of Hegel, Freud, Marx, and so forth): his exclusive concentration on his favorite themes makes him overlook equally plausible alternative explanations for the phenomena he highlights.

b. The Origins of Culture are Not Verifiable

As a corollary of the previous objection, empirically minded philosophers would object that Girard’s theses are not verifiable in any meaningful way. There is little possibility of knowing what may have happened during Paleolithic times, apart from what paleontology and archaeology might tell us.

In some instances, Girard claims that his theses have indeed been verified. There are plenty of archaeological remains that suggest ritual human sacrifice, and plenty of contemporary rituals and myths that suggest scapegoating violence. But, then again, the number of rituals and myths that do not display violence is even greater. Girard does not see this as a great obstacle to his theses, because, according to him, cultures have a tendency to erase the traces of their original violence.

In that case, however, the empirically minded philosopher may argue that Girard’s work is not falsifiable in Popper’s sense. There seems to be no possible counter-example that would refute Girard’s thesis. If a violent myth or ritual is considered, Girard will argue that this piece of evidence confirms his hypotheses. If, on the other hand, a non-violent myth or ritual is considered, Girard will once again argue that this piece of evidence confirms his hypotheses, because it proves that cultures erase the traces of violence in myths and rituals. Thus, Girard is open to the same Popperian objection leveled against Freud: both sexual and non-sexual dreams confirm psychoanalytic theory; therefore, there is no possible way to refute it, and in such a manner it becomes a meaningless theory.

c. Girard Exaggerates the Contrast Between Myths and the Bible

Girard is also open to criticism inasmuch as his Christian apologetics seems to rely on an already biased comparison of myths and the Bible. It has been objected that he does not apply the same standards fairly when contrasting the Bible and myths. Girard’s hermeneutic goes to great lengths to highlight violence in rituals where, in fact, it is not at all evident. He may be accused of being predisposed to find sanctioned violence in myths and, on the basis of that predisposition, of interpreting as sanctioned violence mythical elements that, under another interpretative lens, would not be violent at all. Metaphorically speaking, when studying many myths, Girard is just seeing faces in the clouds, projecting onto myths elements that are far from clear.

In the same manner, one may object that Girard’s treatment of the Bible, and especially the New Testament, is too benevolent. Most secular historians would agree that there are some hints of persecution against the Jews in the gospels (for example, an exaggeration of Jewish guilt in the arrest and execution of Jesus), and that the historical Jesus’ apocalyptic preaching is not just a warning of future human violence, but rather, an announcement of imminent divine wrath.

d. Christian Uniqueness Does Not Imply a Divine Origin

Even if Girard’s thesis about the uniqueness of Christianity were accepted, it need not prove a divine origin. Perhaps Christianity is unique because of a set of historical and sociological circumstances that drove the biblical authors to sympathize with victims (indeed, Max Weber’s explanation runs along these lines: the Bible’s authors sympathized with victims because they were themselves victims, as subjects of the great empires of the Near East). Girard may thus be accused of committing an ad ignorantiam fallacy: the fact that we cannot currently explain a given phenomenon does not imply that its origins are supernatural.

e. Lack of a Precise Scientific Language

Even if one were to accept that the Bible reveals something profound about human nature, scientifically minded philosophers would object that Girard’s language is too obscure and too religiously laden for scientific purposes. Perhaps the Bible does reveal some interesting insights about the nature of scapegoating. But to name such a process ‘Satan’, or to name the human tendency to fall into rivalries ‘sin’, carries a great potential for confusion. Whenever most readers encounter the word ‘Satan’, they are prone to imagine the nasty horned and tailed creature, not some abstract psychological mechanism that gives rise to scapegoating violence. So, even if Girard’s use of those terms is metaphorical, they easily invite confusion and perhaps should be abandoned.

7. References and Further Reading

a. Primary

  • Deceit, Desire, and the Novel: Self and Other in Literary Structure. Baltimore: The Johns Hopkins University Press, 1965.
  • Resurrection from the Underground: Feodor Dostoevsky. New York: Crossroad, 1997.
  • Violence and the Sacred. Baltimore: The Johns Hopkins University Press, 1977.
  • Things Hidden since the Foundation of the World. Research undertaken in collaboration with Jean-Michel Oughourlian and Guy Lefort. Stanford, CA: Stanford University Press, 1987.
  • "To Double Business Bound": Essays on Literature, Mimesis, and Anthropology. Baltimore: The Johns Hopkins University Press, 1978.
  • The Scapegoat. Baltimore: The Johns Hopkins University Press, 1986.
  • Job: The Victim of His People. Stanford, CA: Stanford University Press, 1987.
  • A Theater of Envy: William Shakespeare. St. Augustine's Press, 2004.
  • Quand ces choses commenceront...Entretiens avec Michel Treguer. Paris: Arléa, 1994.
  • The Girard Reader. Edited by James G. Williams. New York: Crossroad, 1996.
  • I See Satan Fall like Lightning. Maryknoll, NY: Orbis Books, 2001.
  • Celui par qui le scandale arrive: Entretiens avec Maria Stella Barberi. Paris: Brouwer, 2001.
  • Oedipus Unbound: Selected Writings on Rivalry and Desire. Edited by Mark Anspach. Stanford, CA: Stanford University Press, 2004.
  • Evolution and Conversion: Dialogues on the Origins of Culture. With Pierpaolo Antonello and Joao Cezar de Castro Rocha. London: T&T Clark/Continuum, 2007.
  • Christianity, Truth, and Weakening Faith: A Dialogue. René Girard and Gianni Vattimo. Edited by Pierpaolo Antonello and translated by William McCuaig. New York: Columbia University Press, 2010.
  • Battling to the End: Conversations with Benoît Chantre. East Lansing, MI: Michigan State University Press, 2010.
  • Anorexie et désir mimétique. Paris: L’Herne, 2008.

b. Secondary

  • ALBERG, Jeremiah. A Reinterpretation of Rousseau: A Religious System. Foreword by René Girard. Palgrave Macmillan, 2007.
  • ALISON, James. Broken Hearts and New Creations: Intimations of a Great Reversal. New York: Continuum, 2010.
  • ALISON, James. Faith Beyond Resentment: Fragments Catholic and Gay. New York: Crossroad, 2001.
  • ALISON, James. The Joy of Being Wrong: Original Sin Through Easter Eyes. New York: Crossroad, 1998.
  • ANDRADE, Gabriel. René Girard: Um retrato intelectual. É Realizações, 2011.
  • ASTELL, Ann W. Joan of Arc and Sacrificial Authorship. South Bend, IN: University of Notre Dame Press, 2003.
  • BAILIE, Gil. Violence Unveiled: Humanity at the Crossroads. New York: Crossroad, 1995.
  • BANDERA, Cesáreo. The Humble Story of Don Quixote: Reflections on the Birth of the Modern Novel. Catholic University of America Press, 2006.
  • BANDERA, Cesáreo. The Sacred Game. Penn State Press. 2004.
  • BARTLETT, Anthony. Cross Purposes: The Violent Grammar of Christian Atonement. Valley Forge, PA: Trinity Press International, 2001.
  • BELLINGER, Charles K. The Genealogy of Violence: Reflections on Creation, Freedom, and Evil. Oxford University Press, 2001.
  • DALY, Robert J., S. J. Sacrifice Unveiled: The True Meaning of Christian Sacrifice. London: T&T Clark / New York: Continuum, 2009.
  • DUMOUCHEL, Paul, ed. Violence and Truth: On the Work of René Girard. Stanford, CA: Stanford University Press, 1988.
  • FINAMORE, Stephen. God, Order, and Chaos: René Girard and the Apocalypse. Eugene, OR: Wipf & Stock, 2009.
  • FLEMING, Chris. René Girard: Violence and Mimesis. Cambridge, Eng.: Polity Press, 2004.
  • FREUD, Sigmund. Totem and Taboo. Create Space. 2011.
  • GOLSAN, Richard J. René Girard and Myth: An Introduction. New York: Routledge, 2001.
  • GOODHART, Sandor; Jorgensen, Jorgen; Ryba, Thomas; Williams, James G.; eds. For René Girard: Essays in Friendship and in Truth. East Lansing, MI: Michigan State University Press, 2009.
  • GOODHART, Sandor. Sacrificing Commentary: Reading the End of Literature. Baltimore: The Johns Hopkins University Press, 1996.
  • GRANDE, Per Bjørnar. Mimesis and Desire: An Analysis of the Religious Nature of Mimesis and Desire in the Work of René Girard. LAP Lambert Academic Publishing, 2009.
  • GROTE, Jim and McGeeney, John. Clever as Serpents: Business Ethics and Office Politics. Collegeville, MN: The Liturgical Press, 1997.
  • HAMERTON-KELLY, Robert G, ed. Politics & Apocalypse. East Lansing, MI: Michigan State University Press, 2007.
  • HAMERTON-KELLY, Robert G, ed. Sacred Violence: Paul's Hermeneutic of the Cross. Minneapolis: Fortress Press, 1992 .
  • HAMERTON-KELLY, Robert G, ed. The Gospel and the Sacred: Poetics of Violence in Mark. Minneapolis: Fortress Press, 1994
  • of the victim."
  • HOBBES, Thomas. Leviathan. Oxford UP. 2009.
  • KIRK-DUGGAN, Cheryl A. Refiner's Fire: A Religious Engagement with Violence. Minneapolis: Fortress Press, 2001
  • KIRWAN, Michael. Discovering Girard. Cambridge, MA: Cowley Publications, 2005.
  • LEFEBURE, Leo D. Revelation, the Religions, and Violence. Maryknoll, NY: Orbis Books, 2000.
  • MCKENNA, Andrew J. Violence and Difference: Girard, Derrida, and Deconstruction. Chicago: University of Illinois Press, 1992.
  • OUGHOURLIAN, Jean-Michel. The Genesis of Desire. E. Lansing, MI: Michigan State University Press, 2010.
  • Chicago: University of Illinois Press, 1992.
  • OUGHOURLIAN, Jean-Michel. The Puppet of Desire: The Psychology of Hysteria, Possession, and Hypnosis. Stanford, CA: Stanford University Press, 1991.
  • SCHWAGGER, Raymund, S.J. Banished from Eden: Original Sin and Evolutionary Theory in the Drama of Salvation. Gracewing, 2006.
  • SWARTLEY, Willard M., editor. Violence Renounced: René Girard, Biblical Studies, and Peacemaking. Response by René Girard and Foreward by Diana M. Culbertson. Telford, PA: Cascadia Publishing House, 2000.
  • WILLIAMS, James G. The Bible, Violence, and the Sacred: Liberation from the Myth of Sanctioned Violence. Foreword by René Girard. Eugene, OR: Wipf & Stock, 2007.


Author Information

Gabriel Andrade
University of Zulia

Platonism and Theism

This article explores the compatibility of, and relationship between, the Platonic and Theistic metaphysical visions. According to Platonism, there is a realm of necessarily existing abstract objects comprising a framework of reality beyond the material world. Platonism holds that these abstract objects do not originate in any creative divine activity. Traditional Theism, by contrast, contends that God is primarily the creator and that God is the source of existence for all realities beyond himself, including the realm of abstract objects.

A primary obstacle between these two perspectives centers upon the origin, nature, and existence of abstract objects. The Platonist contends that abstract objects exist as part of the framework of reality and that they are, by nature, necessary, eternal, and uncreated. These qualities challenge the Traditional Theist attempting to reconcile his or her metaphysic with that of Platonism, since Traditional Theism contends that God is uniquely necessary, eternal, and uncaused, and is the cause of everything else that exists. The question therefore emerges whether these two metaphysical visions are reconcilable; if not, why not; and if so, how such a reconciliation might be accomplished.

Despite these differences, some Traditional Theists have found Platonism to be a helpful framework for conveying their conclusions regarding the nature of God and of ultimate reality. Others pursue reconciliation between Theism and Platonism by proposing what has been called a modalized Platonism, according to which necessarily existing abstract objects nevertheless originate in the creative activity of God. Still others reject any consideration of Theism in relationship to Platonism.

Table of Contents

  1. The Problem
  2. Platonism and Abstract Objects
    1. Abstract Objects and Necessary Existence
    2. Abstract Objects as Uncreated
    3. Abstract Objects as Eternal
  3. Traditional Theism
    1. God as Creator
    2. Creatio ex Nihilo
    3. Divine Freedom
  4. Emerging Tensions
    1. God as the Origin of Abstract Objects
    2. Abstract Objects as Uncreated
  5. Selected Proposals
    1. James Ross: A Critical Rejection of Platonism
    2. Nicholas Wolterstorff: A Restrictive Idea of Creation
    3. Morris and Menzel: Theistic Activism
    4. Bergmann and Brower: Truthmaker Theory
    5. Plantinga: Christian Platonism
  6. References and Further Reading
    1. Books
    2. Articles

1. The Problem

Is the Platonic metaphysical vision compatible with that of Traditional Theism? Some contend that the two are compatible, while others argue to the contrary. Platonists hold that at least some, if not all, abstract objects are uncreated and exist necessarily and eternally, whereas Traditional Theism asserts that God exists as the uncreated creator of all reality existing beyond himself.

But can this central conclusion of Traditional Theism be reconciled with the Platonic understanding of abstract objects as uncreated, necessarily extant, and eternal? Furthermore, if it is possible to reconcile these worldviews, how might one do so?  Put differently, is there anything, other than himself, that God has not created? Or are we to understand the conclusion that God has created everything in a qualified or restricted sense? Are there things which are not to be included in the Theistic tenet of faith that God is the creator of all things? If so, what things do not result from God’s creative activity?

2. Platonism and Abstract Objects

Contemporary Platonism argues for the existence of abstract objects. Although it derives from the teachings of Plato, contemporary Platonism does not directly reproduce those teachings. Abstract objects do not exist in space or time and are entirely non-physical and non-mental. They are non-physical in that they do not exist in the physical world and are not composed of matter; they are non-mental in that they are not minds or ideas in minds, nor are they disembodied souls or gods. Further, abstract objects are said to be causally inert. In short, Platonism contends that abstract objects exist necessarily, are eternal, and cannot enter into cause-and-effect relationships with other objects.

Platonists argue for the existence of abstract objects because it makes sense to believe, for instance, that numbers exist, and the only legitimate view of such things is that they are abstract objects. For Platonists, there are several categories of things: physical things, mental things, spiritual things, and a problematic fourth category that includes universals (the wisdom of Socrates, the redness of an apple), relations (for example, loving, betweenness), propositions (such as 7 + 5 = 12, God is just), and mathematical objects such as numbers and sets. (Menzel, 2001, 3)

As we shall see below, the existence of abstract objects represents a significant challenge for the Christian in particular and for Traditional Theists in general, since it is central to these worldviews that God is the creator of everything other than God himself. Abstract objects, however, are generally considered to be like God in that they are said to have always existed and to exist always in the future. Consequently, there is no point at which God could be said to have brought them into being. (Menzel, 2001, 1-5)

But why would the Platonist conclude that God has not created all abstract objects, or has created only some of them? The response to this question moves us to a consideration of the nature of abstract objects as necessarily existent, uncreated, and eternal, and to a brief account of why God's creation of abstract objects is questionable.

a. Abstract Objects and Necessary Existence

What is meant by the phrase necessary existence? A thing is said to possess necessary existence if it would have existed no matter what or if it would have existed under any possible circumstances. A thing has necessary existence if its non-existence is impossible. For instance, if x is a necessary being, then the non-existence of x is as impossible as a round square or a liquid wine bottle. Human beings are said not to exist necessarily since we would never have existed if our parents had never met and this is a possible circumstance. (Van Inwagen, 1993, 118)

For the Platonist, God's creation of abstract objects is questionable since they are understood to exist necessarily; as such, abstract objects cannot have failed to exist. This raises the question of whether God can create something that exists necessarily. Put differently, does the assertion "x exists necessarily" entail that "x is uncreated"? If this entailment holds, the Platonic understanding of abstract objects as necessarily existent excludes the creation of these objects by God or any other external source.
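The worry can be stated a little more formally. The following is an illustrative sketch in quantified modal logic; the notation, and the rendering of "x exists necessarily" as a box claim, are our additions rather than anything found in the authors cited:

```latex
% Necessary existence, rendered in quantified modal logic
% (an illustrative symbolization, not the cited authors' own):
\[
\mathrm{NE}(x) \;\equiv\; \Box\, \exists y \,(y = x)
\]
% The questioned entailment from Section 2a is then:
\[
\Box\, \exists y \,(y = x) \;\stackrel{?}{\Longrightarrow}\; \neg\,\mathrm{Created}(x)
\]
```

If the second schema is valid, then the necessity of abstract objects alone settles the question against their creation; if it is not, the Theist retains room to hold that something can be both necessary and created.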

b. Abstract Objects as Uncreated

Second, for the Platonist, God's creation of abstract objects is questionable since the creative event in Traditional Theism is understood to be a causal event, while Platonism understands abstract objects as uncreated and as incapable of entering into causal relations. If abstract objects are uncreated, then it seems that God is just one more entity existing in the universe, and God cannot be the maker of all things, both visible and invisible. (Menzel, 1986)

c. Abstract Objects as Eternal

Third, for the Platonist, God's creation of abstract objects is questionable because they are eternal. There is no point at which God could be said to have brought abstract objects into being, and it is therefore difficult to think of them as creatures. If an abstract object has no beginning in time, there could not have been a time at which God first created it. (Menzel, 2001, 4-6) Moreover, if abstract objects are eternal, then they possess a character paralleling that of God, since according to Traditional Theism God is eternal.

These Platonic affirmations regarding the nature of abstract objects as eternal, necessary, and uncreated pose significant challenges to any effort to merge the worldviews of Platonism and Traditional Theism. With this understanding of abstract objects in place, we now turn to a definition of Traditional Theism.

3. Traditional Theism

What are the central tenets of Traditional Theism? First, Traditional Theism and Classical Theism (hereafter referred to as Traditional Theism) are regarded as synonymous. Traditional Theism is supported in the writings of authors such as Moses Maimonides (1135-1204), the Islamic author Avicenna (980-1037), and the Christian author Thomas Aquinas (1224-74), and it constitutes what Jews, Christians, and Muslims officially endorsed for many centuries. In addition, Traditional Theists strongly endorse the aseity-sovereignty doctrine, according to which God is the uncreated creator of all things, and all things other than God depend upon God, while God depends on nothing whatsoever. (Davies, 2004, 1) Numerous philosophers have assumed that God is as defenders of Traditional Theism consider him to be: the source of all reality external to himself. From the period of St. Augustine of Hippo (354-430) to the time of G. W. Leibniz (1646-1716), philosophers proceeded on the assumption that belief in God is belief in Traditional Theism. This understanding has been endorsed by many theologians and is represented in the tenets of the Roman Catholic Church. These beliefs were also endorsed and propagated by many of the major Protestant Reformers, and later by figures such as the eighteenth-century American Puritan Jonathan Edwards.

It is to the definition of Traditional Theism that we turn, since it is these tenets of faith that represent the primary obstacles to reconciling the Theistic and Platonic metaphysical perspectives. These tenets include: God as creator, creation ex nihilo, and divine freedom.

a. God as Creator

Traditional Theism understands God to be the creative source of his own existence, as well as of the existence of all reality outside himself. First, as regards God's being the source of his own existence: if something else created God, and God then created the universe, it would seem to most that this other thing was the real and ultimate source of the universe and that God was nothing more than an intermediary. (Leftow, 1990, 584) Therefore, according to Traditional Theism, the regress of explanations for what exists can go no further back than God.

Second, Traditional Theism endorses not only the belief that God is responsible for his own existence, but also that God is the creator of all extant reality beyond himself. God is thus what accounts for the existence of anything beyond God; put differently, God is responsible for there being something rather than nothing. For Traditional Theism, this notion entails not only that God is responsible for the fact that the universe began to exist, but also that God's work is responsible for the continued existence of the cosmos. (Davies, 2004, 3)

b. Creatio ex Nihilo

Is there anything that can pre-exist the creative activity of God? Traditional Theists respond to this question with a resounding "No." Aquinas writes,

We must consider not only the emanation of a particular being from a particular agent, but also the emanation of all being from the universal cause, which is God; and this emanation we designate by the name of creation. Now what proceeds by particular emanation is not presupposed to that emanation; as when a man is generated, he was not before, but man is made from not-man, and white from not-white. Hence, if the emanation of the whole universal being from the first principle be considered, it is impossible that any being should be presupposed before this emanation. For nothing is the same as no being. Therefore, as the generation of a man is from the not-being which is not-man, so creation, which is the emanation of all being, is from the not-being which is nothing. (Thomas Aquinas, 1948, Ia, 45, 1.)

Traditional Theism, therefore, understands God as the one who creates ex nihilo, or from nothing. The phrase denotes not that God, in the creative act, worked with something called "nothing," but that God creates what is external to himself without anything existing prior to his creative act except himself. The challenging implication of this tenet for the Platonic notion of abstract objects is obvious. Traditional Theists counter the Platonic claim that abstract objects are uncreated by contending that if God did not create non-substance items, such as abstract objects, creation would not truly be ex nihilo, since these entities would have accompanied God from all eternity and would become aspects of God's creation only derivatively, for example, by being instantiated. (Leftow, 1990, 583-84)

c. Divine Freedom

Traditional Theists also argue that God's choices to act are always carried out in divine freedom, signifying that God is not constrained by anything beyond the laws of logic and his own nature. This is so, the Traditional Theist maintains, because God has established these laws and can alter them if he chooses. Further, God cannot be compelled to choose. If God makes choices in response to human action, so says the Traditional Theist, it is always in his power to prevent those actions by any method he chooses.

In short, God always responds to the actions he permits; consequently, God is always ultimately in control, even with respect to actions that we initiate. If, then, God carried out his creative activity in complete divine freedom, and if God cannot be compelled to act creatively by any external source, how can God's freedom be reconciled with the Platonic notion of abstract objects as existing necessarily? For if abstract objects exist necessarily by God's creative act, then it seems God had no choice but to create them. Again, the tension between the worldviews of Traditional Theism and Platonism becomes apparent.

As this examination of the central tenets of Traditional Theism demonstrates, a challenge exists in the effort to integrate the worldviews of Traditional Theism and Platonism. In summary, Platonists contend that abstract objects are uncreated, whereas Traditional Theists argue that God created all reality beyond himself; Platonists believe that abstract objects exist necessarily, whereas Traditional Theists contend that God alone is necessarily existent; Platonists propose that abstract objects are eternal, whereas Traditional Theists believe that God alone is eternal. With these contrasts in mind, we turn now to consider specific problems said to emerge from them.

4. Emerging Tensions

As has been observed in this article, the apparent conflict between Platonism and Traditional Theism emerges from the central notion of Traditional Theism, that God is the absolute creator of everything existing distinct from himself; and the central claim of contemporary Platonism, that there exists a realm of necessarily existent abstract objects that could not fail to exist. In considering the tension between abstract objects and Traditional Theism, Gould writes,

To see what the problem is, consider the following three jointly inconsistent claims: (a) there is an infinite realm of abstract objects which are (i) necessary independent beings and are thus (ii) uncreated; (b) only God exists as a necessary independent being; (c) God creates all of reality distinct from him, i.e. only God is uncreated. Statement (a) represents a common understanding of Platonism. Statements (b) and (c) follow from the common theistic claim that to qualify for the title “God,” someone must exist entirely from himself (a se), whereas everything else must be somehow dependent upon him. (Gould, 2010, 2)

A contradiction emerges from the first and third claims. Proposal (a) posits the existence of abstract objects that are necessary, independent, and uncreated. Proposal (c) posits that all reality existing separately from God has its origin in divine creative activity. These two proposals appear to be mutually exclusive, and as a result an impasse emerges between Platonism and Traditional Theism. Traditional Theism asserts that the existence of all things outside of God is rooted in divine activity, while Platonism argues that there are strong reasons for recognizing in our ontology the existence of a realm of necessarily existent abstract objects. For the Traditional Theist, the realm of necessity as well as that of contingency lies within the province of divine creation: God is, in some fashion, responsible for the existence of all necessarily existent entities, as well as for contingent objects such as stars, planets, electrons, and so forth. (Morris and Menzel, 1986, 153)
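The inconsistency Gould describes can be made explicit with a modest formalization. The following first-order sketch is illustrative only; the symbolization, and the auxiliary premise that no abstract object is identical with God, are our additions rather than Gould's:

```latex
% Gould's jointly inconsistent triad, sketched in first-order notation
% (A = abstract object, N = necessary independent being, C = created,
%  g = God; the symbolization is illustrative, not Gould's own):
\begin{align*}
\text{(a)}\quad & \exists x\,\bigl(A(x) \wedge N(x) \wedge \neg C(x)\bigr) \\
\text{(b)}\quad & \forall x\,\bigl(N(x) \rightarrow x = g\bigr) \\
\text{(c)}\quad & \forall x\,\bigl(x \neq g \rightarrow C(x)\bigr)
\end{align*}
% Given the auxiliary premise that no abstract object is identical with
% God, a witness a for (a) satisfies A(a), hence a is distinct from g;
% (c) then yields C(a), contradicting the conjunct "not C(a)" in (a);
% and (b) yields a = g, contradicting the same auxiliary premise.
```

Any two of the three claims can be held consistently; the proposals surveyed in Section 5 can be read as different decisions about which claim to give up or reinterpret.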

But what are the specific problems associated with the effort to merge Platonism and Traditional Theism? Menzel clarifies,

On the [P]latonist conception, most, if not all, abstract objects are thought to exist necessarily. One can either locate these entities outside the scope of God’s creative activity or not. If the former, then it seems the believer must compromise his view of God: rather than the sovereign creator and lord of all things visible and invisible, God turns out to be just one more entity among many in a vast constellation of necessary beings existing independently of his creative power. If the latter, the believer is faced with the problem of what it could possibly mean for God to create an object that is both necessary and abstract. (Menzel, 1987, 1)

Therefore, both horns of this dilemma lead to challenges. To contend that God created abstract objects has been said to generate a problem of coherence and to call divine freedom into question. To contend that God did not create abstract objects has been understood to generate problems regarding the sovereignty of God, as well as the uniqueness of God. It is to these matters that we now turn.

a. God as the Origin of Abstract Objects

Consider the conclusion that God created abstract objects. Two objections arise from this proposal.

First, the coherence problem contends that it makes no sense to discuss the origin of things considered to exist necessarily, that is, things that could not have failed to exist, such as abstract objects. (Leftow, 1990, 584) Supposing that at least some abstract objects exist necessarily, does this entail that God has not created them?

Second, the freedom problem has its origin in the contention of Traditional Theism that God always acts in total freedom. If abstract objects exist necessarily, however, then God had no choice in the matter of their creation. God would therefore be constrained by something other than himself, a conclusion that calls into question God's omnipotence and complete freedom. Traditional Theists are quick to affirm that God's intentions and choices are not constrained by any entity other than God, and that no chain of true explanations extends beyond a divine intention or choice, or beyond God's having his nature and whatever beliefs he has logically prior to creating, which may explain certain of God's intentions and choices. For if nothing other than God forces God to act as he does, the real explanation of God's actions always lies within God himself. (Leftow, 1990, 584-585)

b. Abstract Objects as Uncreated

Suppose, on the other hand, that God did not create abstract objects. Problems still emerge. First, if God did not create abstract objects, and if abstract objects are eternal, necessary, and uncreated, then these realities are sovereign in just the way God is, since God too is eternal, necessary, and uncreated according to the Traditional Theist. God then becomes merely one more object in the vast array of items in the universe, an array that also includes abstract objects. This dilemma has been designated the sovereignty problem. (Leftow, 1990, 584)

Further, a necessary object is said to constitute its own reason for existence; it exists of and from itself, so that no further explanation of its existence is needed, a feature known as aseity. Aseity, however, has been associated uniquely with God. If abstract objects exist a se, then God is not unique: he exists alongside abstract objects as one being among many that exist by their own nature. This problem has been designated the uniqueness problem.

In consideration of the relationship between Platonism and Traditional Theism, these problems force the Theist either to revise, in some fashion, his understanding of the nature of God, to reject Platonism altogether, or to seek some manner of reconciling the two. We now turn to a consideration of some of the efforts made by Traditional Theists to merge or reconcile these two major metaphysical perspectives.

5. Selected Proposals

Can the worldviews of Traditional Theism and Platonism be merged in a manner that does not compromise the core tenets of these seemingly divergent metaphysical perspectives? Proposals range from those which reject altogether the notion of compatibility to those that use the Augustinian notion of abstract ideas as products of the intellectual activity of God. The present section considers five prominent proposals.

a. James Ross: A Critical Rejection of Platonism

Ross’ approach represents a rejection of the integration of the Platonic and Theistic metaphysical perspectives. Ross presents a highly critical analysis of Platonism. He denies the Platonic notion of a world of eternal forms, opting instead for a thoroughgoing Aristotelianism that posits the existence of inherent explanatory structures throughout reality, which he understands as “forms.” According to Ross, if the independent necessary beings of Platonic Theism are other than God, both the simplicity and the independence of God are compromised. Ross further posits that by directing our attention to the Platonic abstractions, which all existing things are supposed to exemplify, we are distracted from the things or objects themselves. (Ross, 1989, 3)

Ross presents a further set of objections to Platonic metaphysics. He points out that the whole set of abstract entities, which all physical objects are supposed to instantiate, is held to comprise eternal and changeless realities. Within a Theistic point of view, according to Ross, two options exist regarding these abstract entities: first, that abstract entities are co-eternal with God because they are in fact one with God; and second, that abstract objects are in some other sense ideas in the mind of God and therefore co-eternal with him.

Ross objects that the first possibility is incompatible with an attribute traditionally ascribed to God, namely divine simplicity, and that the second compromises the Traditional Theist's understanding of God as the source of all extant realities beyond himself. Ross further counters that divine creation seems to involve little creativity or choice if it consists entirely of God instantiating beings that had already existed for all eternity, thereby compromising God's freedom. Moreover, the whole sense of creatio ex nihilo is eliminated if we conceive of God not as making things up but only as granting physical existence to what already shared abstract existence co-eternally with him. (Ross, 1989, 3-5)

Ross concludes that Platonism and Traditional Theism are inherently incompatible. Incorporating the Platonic worldview, which entails the existence of abstract objects that are eternal, necessary, and uncaused, forces the Traditional Theist to compromise his understanding of the nature of God in some fashion, and thereby to depart from what is regarded as an orthodox understanding of God.

b. Nicholas Wolterstorff: A Restrictive Idea of Creation

Nicholas Wolterstorff seeks a mediating position between the Platonic and Theistic worldviews. He does so, however, by adopting a non-Traditional Theistic perspective, which according to some is an unavoidable consequence of endorsing Platonism. Wolterstorff proposes that necessarily existing abstract objects are in fact not dependent upon God (Wolterstorff, 1970), and he promotes the view that some properties, specifically those exemplified by God, are to be excluded from God’s creative activity. (Gould, 2010, 134) Wolterstorff goes so far as to propose that God in his nature has properties that he did not bring about. (Wolterstorff, 1970, 292) He writes:

[Consider] the fact that propositions have the property of being either true or false. This property is not a property of God. . . . For the propositions “God exists” and “God is able to create” exemplify being true or false wholly apart from any creative activity on God’s part; in fact, creative ability on his part presupposes that these propositions are true, and thus presupposes that there exists such a property as being either true or false. (Wolterstorff, 1970, 292; Gould, 2010, 135)

As such, Wolterstorff presents what may be termed a restrictive understanding of the creative activity of God. (Wolterstorff, 1970, 292) Wolterstorff, a Christian, argues that the biblical writers simply did not endorse a wide-scope reading of the doctrine of creation. He posits that it cannot legitimately be maintained that the biblical writers actually had universals in view when speaking of God as the Creator of all things. In addition, he points out that the creator/creature distinction is invoked in Scripture for religious, not theoretical or metaphysical, reasons.

Wolterstorff’s approach thus exemplifies what those who reject any merging of the two worldviews take to be an inevitable result of endorsing Platonism: because he endorses Platonism, Wolterstorff is said to have compromised the understanding of Traditional Theism, in that God ceases to be the creator of various dimensions of his own identity, as well as of objects existing beyond himself.

c. Morris and Menzel: Theistic Activism

Christopher Menzel and Thomas Morris acknowledge a tension between Theism and Platonism, but seek to reconcile the divergent metaphysical perspectives by means of what they call Theistic Activism. Morris and Menzel ask whether God can be responsible not only for the creation of all contingent reality, but also, intelligibly and coherently, for necessary existence and necessary truth. They proceed to demonstrate what they term the extraordinary compatibility of core elements of the Platonic and Theistic metaphysical visions. (Morris and Menzel, 1986, 361) Menzel writes,

The model that will be adopted . . . is simply an updated and refined version of Augustine’s doctrine of divine ideas, a view I will call theistic activism, or just activism, for short. Very briefly, the idea is this. On this model, abstract objects are taken to be contents of a certain kind of divine intellective activity in which God is essentially engaged; roughly, they are God’s thoughts, concepts, and perhaps certain other products of God’s mental life. This divine activity is. . . causally efficacious: the abstract objects that exist at any given moment, as products of God’s mental life, exist because God is thinking them; which is just to say that he creates them. (Menzel, 1986)

The authors therefore attempt to provide a Theistic ontology which places God at the center and which views everything else as standing in a relation of creaturely dependence on God. They agree that Platonism has historically been viewed as incompatible with Western Theism, but they propose that this perceived incompatibility is not insurmountable and that Theistic Activism can overcome it. Menzel and Morris have two objectives: first, to eliminate the apparent inconsistency between Platonism and Theism; second, to preserve the Platonic view of abstract objects, such as properties, as necessary, eternal, and uncreated beings.

Morris and Menzel resolve the tension posed by abstract objects existing alongside God by concluding that God must, in some fashion, be creatively responsible for abstract objects. They therefore advance Theistic Activism, according to which the framework of reality that includes abstract objects originates in divine intellectual activity.

First, they argue that a Theistic Activist will hold God creatively responsible for the entire modal economy: for what is possible as well as what is necessary, and even for what is impossible. As stated above, the authors draw on the Augustinian divine-ideas tradition, which holds that the Platonic framework of reality arises out of the creatively efficacious intellective activity of God. The entire Platonic realm is therefore to be understood as deriving from God. (Morris and Menzel, 1986, 356)

Second, Morris and Menzel propose a continuous model of creation, according to which God always plays a direct causal role in the existence of his creatures: his creative activity is essential to a creature's being at all times, throughout its spatio-temporal existence, regardless of whether God initially caused the created entity to exist. This conclusion is essential to their proposal in that it provides a framework in which it can coherently be argued that God creates absolutely all objects, be they necessary or contingent. (Menzel, 1982, 2)

Third, for the Theistic Activist, God is understood to create the framework of reality necessarily. Morris and Menzel acknowledge that this contention is potentially problematic with regard to God’s activity as a free creator. To resolve the dilemma posed by God’s necessarily creating and God’s freedom, the authors argue that divine freedom must be understood in a radically different fashion from human freedom. Divine freedom is shaped by God’s moral nature; therefore, God could not have done morally otherwise than he did in the act of creation.

Fourth, Morris and Menzel also address the problem of God’s own nature in relation to this creative activity, considering whether the various dimensions of God’s own nature are themselves part of the created framework. They respond in two steps. They reject the proposal that God is to be understood as pure being and therefore devoid of determinate attributes such as omnipotence or omniscience. Morris and Menzel instead opt for the solution that God has a nature and that God creates his own nature (Morris, 1989).

The writers conclude:

On the view of absolute creation, God is indeed a determinate, existent individual, but one whose status is clearly not just that of one more item in the inventory of reality. He is rather the source of absolutely everything there is: to use Tillich’s own characterization, he is in the deepest sense possible the ground of all-being. (Morris and Menzel, 1986, 360)

d. Bergmann and Brower: Truthmaker Theory

Bergmann and Brower conclude that Platonism is inconsistent with the central thesis of Traditional Theism, the aseity-dependence doctrine, which holds that God is an absolutely independent being who exists entirely from himself, or a se. This central thesis led both philosophers and theologians of the Middle Ages to endorse the doctrine of divine simplicity, according to which God is an absolutely simple being, completely devoid of any metaphysical complexity. On this doctrine, God lacks not only the complexity associated with material or temporal composition, but even the minimal form of complexity associated with the exemplification of properties.

The inconsistency is most apparent in the tension between Platonism and divine simplicity. Platonism requires all true predications to be explained in terms of properties. Divine simplicity requires God to be identical with each of the things that can be predicated of him. If both are true, then God is identical with each of his properties and is therefore himself a property. This conclusion conflicts with the Traditional Theist’s understanding of God as a person and with the conclusion that persons cannot be exemplified. Bergmann and Brower therefore contend that Platonism is inconsistent with the aseity-dependence doctrine itself, and they argue that rejecting divine simplicity fails to remove this tension. In their view, contemporary philosophers of religion have lost sight of this significant tension between Traditional Theism and Platonism, wrongly concluding that the two are perfectly compatible.
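The alleged inconsistency can be set out as a short deductive argument. The following is a schematic reconstruction in our own notation, not the authors’ own formalism:

```latex
\begin{enumerate}
  \item For any true predication ``$a$ is $F$,'' $a$ exemplifies the property
        $F$-ness. \hfill (Platonism)
  \item God is identical with each thing truly predicated of him.
        \hfill (Divine simplicity)
  \item ``God is divine'' is a true predication. \hfill (Theism)
  \item Therefore, God $=$ divinity, and so God is a property. \hfill (1--3)
  \item But God is a person, and no person is a property.
        \hfill (Traditional Theism)
  \item Contradiction. \hfill (4, 5)
\end{enumerate}
```

Since premises (3) and (5) are non-negotiable for the Traditional Theist, the pressure falls on (1) or (2), which is why the authors go on to examine whether dropping either Platonic predication or divine simplicity dissolves the tension.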

Bergmann and Brower describe Platonism as characterized by two components. First, Platonism entails the view that a unified account of predication can be provided in terms of properties or exemplifiables. Second, Platonism entails the view that exemplifiables are best conceived of as abstract objects. Bergmann and Brower observe that Traditional Theism has typically addressed the second of these views, and they propose that the distinctive aspect of their own argument targets the first. For them this distinction is all-important, since it is often concluded that the inconsistency of Platonism and Traditional Theism can be avoided merely by rejecting the Platonic view of properties in favor of another, such as the Augustinian view that properties are ideas in the mind of God. They write,

Traditional Theists who are Platonists, therefore, cannot avoid the inconsistency merely by dropping the Platonic conception of properties and replacing it with another – whether it be an Aristotelian conception (according to which there are no unexemplified universals), some form of immanent realism (according to which universals are concrete constituents of things that exemplify them), a nominalistic theory of tropes (according to which properties are concrete individuals), or even the Augustinian account (according to which all exemplifiables are divine concepts). (Bergmann and Brower, 2006, 3-4)

Bergmann and Brower contend, however, that the inconsistency between the two metaphysical perspectives remains as long as the Traditional Theist continues to endorse the first of the two components of Platonism cited above, the unified account of predication in terms of exemplifiables. They further argue that the inconsistency can be resolved in only one of two ways: either one rejects Traditional Theism, thereby becoming a non-Theist or a non-Traditional Theist, or one rejects any unified account of predication in terms of exemplifiables. Those who desire to maintain Traditional Theism are nonetheless naturally inclined toward some unified account of predication, and it is at this point that Bergmann and Brower propose the alternative of Truthmaker Theory (Bergmann and Brower, 2006, 4).

But what is meant by the designation “Truthmaker”? The authors point out that the term is not to be understood in causal terms, or literally in terms of a “maker”; rather, it is to be understood in terms of what they regard as broadly logical entailment. Bergmann and Brower begin their defense of Truthmaker Theory with a defense of the Truthmaker account of predication. Twenty-first century philosophers typically speak of Truthmakers as entities whose existence entails the truth of certain statements or predications, that is, the truths expressed by them. For instance:

TM: If an entity E is a Truthmaker for a predication P, then “E exists” entails the truth expressed by P.

As a result, Socrates may be regarded as the Truthmaker for the statement “Socrates is human,” and God may be regarded as the Truthmaker for the statement “God is divine.” If Traditional Theists wish to explain the truth of such predications in terms of something other than properties or exemplifiables, they can do so in terms of Truthmakers: given that “God is divine” is a case of essential predication and that God necessitates its truth, God is a plausible candidate for its Truthmaker (Bergmann and Brower, 2006, 25-27).
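The entailment built into TM is standardly read as necessitation. In modal notation, with $\Box$ expressing broadly logical necessity, the schema and its two instances from the text can be rendered as follows (a reconstruction of the schema, not the authors’ own symbolism):

```latex
\[
\textbf{TM:}\quad \Box\,\bigl(E \text{ exists} \rightarrow p\bigr)
\]
\[
\textit{Instances:}\quad
\Box\,(\text{Socrates exists} \rightarrow \text{Socrates is human}),
\qquad
\Box\,(\text{God exists} \rightarrow \text{God is divine})
\]
```

On this reading, truthmaking is a matter of broadly logical necessitation rather than causation, matching the authors’ remark that “maker” is not to be taken literally; the instances hold because both are cases of essential predication.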

Not only do Bergmann and Brower defend a Truthmaker account of predication, but they also attempt to demonstrate that Truthmaker Theory yields an understanding of the doctrine of divine simplicity that rescues it from the standard contemporary objection leveled against it, its alleged incoherence. From the fact that God is simple, the medievals infer that God lacks any accidental or contingent properties, and therefore that all true predications of the form “God is F” are cases of essential predication. From the truth “God is divine,” it can then be inferred that God is identical with his nature or divinity. From the truth “God is good,” it can be inferred that he is identical with his goodness. The same holds for every other predication of this nature. Furthermore, just as God is identical with each of these qualities, so each of these qualities is identical with each of the others, a further component of the doctrine of divine simplicity.

e. Plantinga: Christian Platonism

Alvin Plantinga has been described as a Platonist par excellence (Gould, 2010, 108). If Platonism is defined as the metaphysical perspective that there are innumerably many necessarily existing abstract entities, then Plantinga’s Does God Have A Nature? represents a thorough defense of Christian Platonism (Freddoso, 145-53). Plantinga acknowledges that most Christians believe that God is the uncreated creator of all things, that all things depend on him, and that he depends upon nothing at all. The created universe presents no problem for this doctrine: God’s creation is dependent on him in a variety of ways, and God is in no way dependent upon it. What does present a problem is the entire realm of Platonic universals, properties, kinds, propositions, numbers, sets, states of affairs, and possible worlds. These things are everlasting, having no beginning or end. Abstract objects are also said to exist necessarily; their non-existence is impossible. But how then are these abstract objects related to God? Plantinga frames the problem:

According to Augustine, God created everything distinct from him; did he then create these things? Presumably not; they have no beginnings. Are they dependent on him? But how could a thing whose non-existence is impossible . . . depend upon anything for its existence? And what about the characteristics and properties these things display? Does God just find them constituted the way they are? Must he simply put up with their being thus constituted? Are these things, their existence and their character, outside his control?  (Plantinga, 1980, 3-4)

Plantinga identifies two conflicting intuitions regarding God and attempts to reconcile them. On the one hand, it is held that God has control over all things (sovereignty) and that God is uncreated, that is, that God exists a se. On the other hand, it is held that certain abstract objects and necessary truths are independent of God, and that certain of these, such as omniscience, omnipotence, and omni-benevolence, constitute God’s nature. These two conclusions, however, are logically contradictory: how can God have sovereign control over all things while abstract objects exist independently of him?

Either the first or the second of these intuitions must be false, and the entirety of Does God Have A Nature? is dedicated to resolving this dilemma. Plantinga first discusses the proposal of Kant, who resolved the conflict by denying that God has a nature, a conclusion that Plantinga rejects. Plantinga then considers the proposed solution of Thomas Aquinas. Aquinas argues on behalf of the doctrine of divine simplicity, which holds that God has a nature but is identical with his nature. Plantinga finds this proposal inadequate as well, since the doctrine of divine simplicity seems to lead to the denial of the personhood of God, reducing him to an abstract object. Plantinga then turns to nominalism, which contends that abstract objects, such as properties, do not exist in any real sense: they are nothing more than designations and do not refer to any objects. Nominalism fails, in Plantinga’s opinion, because it is irrelevant to the real issue, the preservation of God’s absolute control. Finally, in light of the failure of these approaches, one might deny the intuition that abstract objects are necessary or eternal. This position is designated universal possibilism, since it implies that everything is possible for God; Plantinga rejects it as well, because the conclusion simply seems absurd.

However, for Plantinga the bifurcation between the Theistic notion of God as the uncreated creator of all that exists outside himself and the Platonic notion of abstract objects that exist necessarily and eternally is not insurmountable. Plantinga endorses a form of Platonic realism, espousing a conception of properties according to which these abstract objects are a specific type of abstract entity, namely universals. Plantinga proposes the following solution to the dilemma:

Augustine saw in Plato a vir sapientissimus et eruditissimus (Contra Academicos III, 17); yet he felt obliged to transform Plato’s theory of ideas in such a way that these abstract objects become . . . part of God, perhaps identical with his intellect. It is easy to see why Augustine took such a course, and easy to see why most later medieval thinkers adopted similar views. For the alternative seems to limit God in an important way; the existence and necessity of these things distinct from him seems incompatible with his sovereignty. (Plantinga, 1980, 5)

Plantinga, therefore, concludes that there may be some sense of dependence between God and abstract objects, that these abstract objects depend on God asymmetrically, and that they are the result of God’s intellective activity.

From the preceding overview we see that there exists a tension between the central notion of Traditional Theism, that God is the uncreated creator and that all objects existing beyond God have the source of their being in his creative activity, and the central notion of Platonism, that there exists a realm of abstract objects which are uncreated and exist necessarily and eternally. Furthermore, we have seen that proposals range from those that reject altogether the notion that these two worldviews are reconcilable to those that argue on behalf of their compatibility (Freddoso, 1983).

6. References and Further Reading

a. Books

  • Aquinas, T. (1948). Summa Theologiae, trans. Fathers of the English Dominican Province. U.S.A.: Christian Classics.
  • Brown, C. (1968). Philosophy and the Christian Faith. Illinois: Intervarsity Press.
    • Provides an examination of the historical interaction of philosophical thought and Christian theology.
  • Campbell, K. (1990). Abstract Particulars. Basil Blackwell Ltd.
    • Provides an in-depth analysis of Abstract Particulars.
  • Davies, B. (2004). An Introduction to the Philosophy of Religion (3rd ed.). New York: Oxford University Press.
    • An excellent introduction to the basic issues in Philosophy of Religion.
  • Gerson, L. P. (1990). Plotinus: The Arguments of the Philosophers. New York: Routledge.
    • Provides an analysis of the development of Platonic philosophy and its incorporation into Christian Theology.
  • Morris, T. (1989). Anselmian Explorations: Essays in Philosophical Theology. Notre Dame: University of Notre Dame Press.
  • Plantinga, A. (1980). Does God Have a Nature? Milwaukee, Wisconsin: Marquette University Press.
    • Discusses the relationship of God to abstract objects.
  • Plantinga, A. (2000). Warranted Christian Belief. New York: Oxford University Press.
    • Explores the intellectual validity of Christian faith.
  • Van Inwagen, P. (1993). Metaphysics. Westview Press.
    • An in-depth exploration of the dimensions of metaphysics.
  • Wolterstorff, N. (1970). On Universals: An Essay in Ontology. Chicago: University of Chicago Press.
    • Explores the nature of Platonic thought and the tenets of Traditional Theism.

b. Articles

  • Bergmann, M., Brower, J. E. (2006). “A Theistic Argument against Platonism.” Oxford Studies in Metaphysics, 2, 357-386.
    • Discusses the logical inconsistency between Theism and Platonism by virtue of the aseity-dependence doctrine.
  • Bergmann, M., Brower, J. E. “Making Sense of Divine Simplicity.” Unpublished.
    • Presents an in-depth analysis of the nature of divine simplicity.
  • Freddoso, A. (1983). “Review of Plantinga’s ‘Does God Have a Nature?’.” Christian Scholars Review, 12, 78-83.
    • An excellent and helpful review of Plantinga’s most significant work.
  • Gould, P. (2010). “A Defense of Platonic Theism.” Dissertation, Purdue University.
    • A defense of Platonic Theism, which seeks to remain faithful to the Theistic tradition.
  • Leftow, B. (1990). “Is God an Abstract Object?” Noûs, 24, 581-598.
    • Strives to demonstrate that the Identity Thesis follows from a basic Theistic belief.
  • Menzel, C. (2001). “God and Mathematical Objects.” Bradley, J., Howell, R. (Eds.). Mathematics in a Postmodern Age: A Christian Perspective. Eerdman’s.
  • Menzel, C. (1987). “Theism, Platonism, and the Metaphysics of Mathematics.” Faith and Philosophy, 4(4), 1-22.
  • Morris, T., Menzel, C. (1986). “Absolute Creation.” American Philosophical Quarterly, 23, 353-362.
    • Seeks to reconcile the divergent metaphysical perspectives utilizing the concept of Theistic Activism.
  • Plantinga, A. (1982). “How to be an Anti-Realist.” Proceedings and Addresses of the American Philosophical Association, 56 (1), 47-70.
    • An insightful and helpful discussion of Plantinga’s rejection of contemporary anti-realism and unbridled realism.
  • Ross, J. (1989). “The Crash of Modal Metaphysics.” Review of Metaphysics, 43, 251-79.
    • Addresses Quantified Modal Logic as at one time promising for metaphysics.
  • Ross, J. (1983). “Creation II.” In Freddoso, A. J. (Ed.), The Existence and Nature of God. Notre Dame: University of Notre Dame Press.
  • Van Inwagen, P. (2009). “God and Other Uncreated Things.” Timpe, K. (Ed). Metaphysics and God: Essays in Honor of Eleonore Stump, 3-20.
    • Addresses the question regarding whether there is anything other than himself that God has not created.
  • Van Inwagen, P. (2004). “A Theory of Properties.” Oxford Studies in Metaphysics, 1, 107-138.
    • Explores the rationality of belief in abstract objects in general and properties in particular.


Author Information

Eddy Carder
Prairie View A & M University
U. S. A.