Jules Lequyer (Lequier) (1814–1862)

Like Kierkegaard, Jules Lequyer (Luh-key-eh) resisted, with every philosophical and literary tool at his disposal, the monistic philosophies that attempt to weave human choice into the seamless cloth of the absolute. Although haunted by the suspicion that freedom is an illusion fostered by an ignorance of the causes working within us, he maintained that in whatever ways we are made—by God, the forces of nature, or the conventions of society—there remain frayed strands in the fabric of human existence where self-making adds to the process. Declaring this freedom “the first truth” required by all genuine inquiry into truth, he also challenged traditional doctrines of divine creativity, eternity, and omniscience, and he developed his own alternative based on what he saw as the implications of a true metaphysics of freedom.

Lequyer was a reclusive Breton who died in relative obscurity without having published anything. He held no important academic post and most of his literary and philosophical work remained unfinished. Despite these disadvantages, his influence on philosophy was much greater than the general ignorance of his name and thought would suggest. Charles Renouvier and William James adopted many of his ideas about the meaning of human freedom, its reality, and how it is known. Echoes of Lequyer’s ideas, and sometimes the very phrases he used, are found in French existentialism and American process philosophy. A man of deep religious conviction but also of increasingly melancholy temperament, Lequyer expressed his philosophy in a variety of literary styles. As a consequence, he has been called “the French Kierkegaard,” although he and his more famous Danish contemporary knew nothing of each other.

Table of Contents

  1. Biography
  2. Philosophy of Freedom
  3. Theological Applications
  4. Philosophical Legacy
  5. Conclusion
  6. References and Further Reading
    1. Primary Sources
    2. English Translations
    3. Secondary Sources in French and English

1. Biography

Joseph-Louis-Jules Lequyer, born January 29, 1814 in the village of Quintin, France, was an only child. His father, Joseph Lequyer (1779-1837), was a respected physician, and his mother, Céleste-Reine-Marie-Eusèbe Digaultray (1772-1844), cared for the poor and sick in the Quintin hospital. The family name was subject to a variety of spellings, most notably, “Lequier” and “Lequyer” (occasionally with an accent aigu over the first e). Lequyer’s birth certificate had “Lequier” but in 1834 his father had the spelling legally fixed as “Lequyer” [Grenier, La Philosophie de Jules Lequier, 257-58]. Lequyer was not consistent in the way he spelled his name and the orthographic confusion persists in the scholarly literature. “Lequyer” is the spelling on the plaque marking his birthplace in Quintin and on his tombstone in Plérin.

Lequyer’s parents relocated from Quintin to the nearby town of St.-Brieuc along the north coast of Brittany, where their son was educated at a minor seminary. By the age of thirteen, he excelled in Greek and Latin. A pious Catholic upbringing, combined with his friendship with Louis Épivent (1805-1876), who himself became a cleric, nurtured Lequyer’s interests in philosophy and theology, especially the perennial question of human free will. The family spent vacations just north of St.-Brieuc near Plérin at an isolated cottage known as Plermont (a contraction of “Plérin” and “mont”) within walking distance of the coast. In this rural setting Lequyer spent many happy hours with his closest friend, Mathurin Le Gal La Salle (1814-1904). Another important attachment of his early years was Anne Deszille (1818-1909), also known as “Nanine.” Lequyer never married, although he twice proposed to Deszille (in 1851 and in 1861) and, to his great disappointment, she twice refused.

In 1834 Lequyer entered the École Polytechnique in Paris. The school regimen required students to rise at dawn, eat a meager breakfast, then study scientific subjects—mathematics, physics, and chemistry—until lunchtime. After lunch, there were military exercises, fencing, and horse riding, as well as lessons in dance and music. After supper, students retired to their studies until nightfall. The rigid schedule did not suit Lequyer’s contemplative habits so he was at cross purposes with some of his superiors. His troubles were exacerbated by the unexpected death of his father in 1837. The following year he failed the exam that would have qualified him to become a lieutenant. Viewing an offer to enter the infantry as an insult, he made a dramatic exit. He announced his resignation to the examining officer with these words: “My general, there are two types of justice, mine and yours” [Hémon, 145]. Of some interest is Lequyer’s physical description from his matriculation card: he stood just under five and a half feet, had blond hair, brown eyes, a straight nose, a small mouth, an oval face, a round chin, and scars under his left eye and on the right side of his chin [Brimmer 1975, Appendix III]. The scar on his chin was from a riding accident at the school which, in later years, he covered by wearing a beard.

The course of study in Paris introduced Lequyer to the determinism of Pierre-Simon Laplace (1749-1827). As the school’s military schedule had conflicted with his temperament, so the idea that every event is necessitated by its causes was in tension with his cherished religious ideas, in particular, the conviction of free will. By happy coincidence, he found in his new friend and classmate Charles Renouvier (1815-1903) a sounding board for his quandaries about freedom and necessity. Renouvier saw in Lequyer a strange combination of religious naïveté and philosophical profundity. Indeed, Renouvier never failed to acknowledge Lequyer’s genius and to refer to him—literally, to his dying days—as his “master” on the subject of free will [Derniers entretiens, 64]. Lequyer, chronically unable to complete most of what he wrote, benefited from Renouvier’s industry. Renouvier eventually published a small library of books, in some of which he included excerpts from Lequyer’s writing. Three years after his friend’s death Renouvier published, at his own expense, one hundred and twenty copies of a handsome edition of his selection of Lequyer’s writings, which he distributed free of charge to any interested party.

Upon leaving the École Polytechnique, Lequyer used the inheritance from his father to retire to Plermont where he lived with his mother and the family servant, Marianne Feuillet (probably born in 1792). Lequyer never had a head for finances, so his money was soon exhausted, although there remained properties in St.-Brieuc that his father had owned. In 1843, the three moved to Paris where Lequyer acquired a position teaching French composition to Egyptian nationals at the École Égyptienne. He had the misfortune of teaching at the school during its decline. Nevertheless, he worked to redesign its curriculum after the model of the École Polytechnique, but centered more on literature, poetry, and even opera. Lequyer’s mother died the year following the move to Paris. Worried over the state of her son’s mind, she entrusted him to the care of Feuillet with these words: “Oh, Marianne, keep watch over my poor Jules. He has in his heart a passion which, I greatly fear, will be the cause of his death” [Hémon, 172]. The exact object of his mother’s concern is unknown but in the fullness of time her words became prophetic.

On August 15, 1846, the day of celebration of the Assumption of Mary, Lequyer underwent a mystical experience that was occasioned by his meditations on the Passion of Christ. He wrote down his experience, alternating between French and Latin, which invites a comparison with Pascal’s Memorial. Lequyer’s indignation at those who caused Christ’s suffering is transformed, first, into a profound sense of repentance as he realizes that he too had “added some burden to the cross” by his sins, and, second, into gratitude for the love of God in being forgiven his sins. On August 19th, the religious ecstasy recurred, this time as he took communion at the church of St.-Sulpice. Again, the theme of the suffering of Christ is paramount, but now giving way to a determination to share in those sufferings to such an extent that the Virgin Mary would be unable to distinguish him from her own son. Lequyer’s first biographer, Prosper Hémon (1846-1918), spoke of the philosopher’s “bizarre religiosity” [Hémon, 184], but there can be no question that, despite his shortcomings and misfortunes, his mystical experiences found outlet in acts of devotion and charity for the remainder of his life.

Lequyer returned to Plermont with Feuillet in 1848, after the February revolution in Paris. Full of zeal for a rejuvenated Republic, he announced, with Renouvier’s help, his candidacy for a seat in the parliament of the Côtes-du-Nord as a “Catholic Republican” [Hémon, 188]. His published platform identifies freedom as the basis of rights and duties and it explicitly mentions the freedoms of the press, of association, of education, and of religion [Le Brech, 56-57]. Of note is that Lequyer received a glowing recommendation for political office from one of his former teachers at the École Polytechnique, Barthélémy Saint-Hilaire. However, like many in more rural areas who identified, or seemed to identify, with the Parisian revolutionaries, Lequyer was not elected. He came in twentieth on the list of candidates, receiving far too few votes to be among those who won a seat in the parliament.

After the election, which was in April 1848, Lequyer retired to Plermont and spent his days in study and meditation, which included long walks along the coast; sometimes he would stay out overnight. There was, however, the persistent problem of finances. Hémon reports that Lequyer would throw change wrapped in paper from his second-floor study to the occasional beggar who passed by. From March 30, 1850 into 1851, he sold the family property in St.-Brieuc, leaving him only Plermont. When his aunt Digaultray died on March 31, 1850, he was hopeful of an inheritance of 10,000 francs. As luck would have it, the aunt’s will directed that the sum be doubled, but only on the condition that it be used to pay a debt of 20,000 francs that Lequyer owed to his first cousin, Palasme de Champeaux! The cousin died in August of the same year, so the inheritance went to his estate [Hémon, 245].

Lequyer’s letters to Renouvier indicate a heightened level of creativity in which he made major progress on his philosophical work. In a November 1850 letter, he claimed that he was writing “something unheard of,” namely that the first and most certain of truths is the declaration of one’s own freedom. This movement of thought ends with the idea that one is one’s own work, responsible to oneself, and “to God, who created me creator of myself” (Lequyer had written “creature of myself” but later changed it to “creator of myself”) [OC 70, 538]. Philosophical insights, however, were not enough to save Lequyer from the weight of his failed projects and his destitution which, arguably, contributed to a mental breakdown. On February 28, 1851, a neighbor found Lequyer wandering about with an axe with which he intended to cut his own arm; Lequyer was taken to the hospital in St.-Brieuc for observation. The doctors determined that he was a danger to himself and should be transferred to a mental institution. On March 3rd, Le Gal La Salle and the Abbot Cocheril took Lequyer to the asylum near Dinan, using subterfuge to lure him there. On April 12th, with the help of Paul Michelot (1817-1885) and some other friends, Lequyer was taken to Passy, near Paris, to the celebrated hospital-resort of Dr. Esprit Blanche, the well-known physician who specialized in mental disorders.

Lequyer was discharged from Passy on April 29th, improved but not completely recovered, according to the doctors. He returned to Plermont, there to be welcomed by the faithful Feuillet and to renew contact with an elderly neighbor, Madame Agathe Lando de Kervélégan (born 1790). Relations with others, however, were broken or became strained. Never accepting that his confinement was justified, he severed ties with Le Gal La Salle, whom he regarded as the one who had orchestrated it. In the book that he planned, a major section was labeled “Episode: Dinan.” Since the book was never completed, we cannot know Lequyer’s exact thoughts about his two months under medical supervision. That his perceptions were cloudy is indicated by the fact that, only a few months after his confinement, he proposed marriage to Nanine, believing she would accept. Her family, with a view to Lequyer’s mental and financial instability, encouraged her to refuse. This she did in a most forceful way by returning all of his letters and by instructing him to burn her letters to him. This he did, but not before making copies of certain excerpts.

For two years after the events of 1851 Lequyer’s whereabouts are unknown. His letters to Renouvier in the closing months of 1855 indicate that two years earlier he had gone to Besançon as a professor of mathematics at the Collège Saint-François Xavier. By Easter of 1854, however, relations with the head of the college, a Monsieur Besson, had gone sour. The details of the problem are unknown, but it seems that Besson scolded Lequyer for not coming to him to ask for something. According to Lequyer, Besson boasted that men of influence as great as the archbishop “crawl at my feet” [OC 546]. Lequyer related this conversation to the Cardinal and Besson was demoted. One of Lequyer’s friends, Henri Deville, had written a well-intentioned letter to the Cardinal requesting that he find Lequyer another place in his diocese. The Cardinal, perhaps misinterpreting the request, turned against Lequyer. As a result, Lequyer was entangled in lawsuits with both Besson and the Cardinal over indemnities. Lequyer’s lawyer told him “all was lost” when he decided to act with dignity and not crawl at Besson’s feet [OC 549]. An interesting aspect of Lequyer’s sketchy account is that he says he was inspired by the memory of Dinan, imitating the man he had been there by controlling his anger in spite of the wrongs he perceived to have been done to him. Furthermore, he recognized Deville’s good intentions and, though he thought his intervention inappropriate, did not blame him for it.

By the close of 1855 Lequyer had returned to Plermont, never to leave again. Many of the most touching stories about Lequyer come from the last six years of his life. Though his relations with his friends were often strained, he inspired in them a seemingly unconditional loyalty. It was they, after all, who underwrote the considerable cost of staying at Passy. In his final years, his friends—including Le Gal La Salle, whom he had disowned—came to his aid more than once. For example, Lequyer frequented a restaurant in St.-Brieuc but would order embarrassingly meager portions. When the owner of the establishment told his friends, they instructed him to give Lequyer full meals and they would pay the difference. When the owner wondered whether Lequyer would notice the charity, the reply was, “Non, il est dans le ciel” [Hémon, 205]—his head is in the clouds—an apt metaphor for his impracticality and his philosophical preoccupations.

In 1858, on the recommendation of Madame Lando, Lequyer became the tutor of Jean-Louis Ollivier, the thirteen-year-old son of a customs officer of the same name who admired Lequyer’s rhetorical skills; the father once described Lequyer as “a magician of words” [Hémon, 191]. Lequyer taught young Ollivier but also employed him in transcribing his own writing into a more legible script. Ollivier studied with Lequyer for two years, but at the close of 1860, having passed the exam that gave him the chance to train as an administrator of the state, the boy left. A few months earlier (in April) Lequyer had the misfortune of losing a chance to become chief archivist for the Côtes-du-Nord because of a delay in mail service. With this opportunity missed and Ollivier gone, Lequyer was without his student and unemployed. Jean-Louis Le Hesnan, a man of twenty who was too frail to work in the fields, took Ollivier’s place as Lequyer’s secretary. This partnership, however, was not enough to lift the weight of loneliness.

In the year that followed, Lequyer’s condition deteriorated. His neighbors reported that he would lose track of time and come calling at late hours with no explanation. His hair and beard, no longer cared for, grew prematurely white. His gaze took on a lost and vacant stare. Lequyer’s quixotic hopes of marriage to Nanine were rekindled when, on December 28, 1861, her father died—he believed her father was the main obstacle to the marriage. He again proposed marriage; sometime in the first week of February he learned of her refusal, which she made clear was final. Lequyer’s behavior became frenzied and erratic. He was subject to bizarre hallucinations and he spoke of putting an end to his misery. On Tuesday, February 11, 1862, Lequyer went to the beach with Le Hesnan, shed his clothes, threw water on his chest, and jumped into the bay. He swam to the limits of his strength until he was visible only as a dot among the waves and he cried out. According to Le Hesnan, Lequyer’s last words were not a cry of distress but a farewell to Deszille—“Adieu Nanine” [Hémon, 232]. At nine o’clock in the evening, Lequyer’s body washed ashore. Feuillet, whom Lando described as Lequyer’s “second mother,” was waiting at Plermont to receive the body.

The official police report mentioned Lequyer’s “disturbed spirit” but ruled his death accidental. Nevertheless, a controversy erupted when a newspaper published a poem, “Les Adieux de Jules Lequyer” [The Farewells of Jules Lequyer], which was written in Lequyer’s voice and which suggested that he had committed suicide [Grenier, La Philosophie, 272]. Madame Lando eventually revealed herself as the author of the poem; she explained that she was saying Lequyer’s farewells for him in a way that he would have wished. The most propitious result of the controversy is that Charles Le Maoût, writing for Le Publicateur des Côtes-du-Nord (March 1, 1862), published an article titled “Derniers Moments de Jules Lequyer” [Last Moments of Jules Lequyer]. The article includes reports of Lequyer’s friends and neighbors about his final days, thereby providing insight into the disoriented and melancholy condition into which the philosopher had fallen. In November 1949, Dr. Yves Longuet, a psychiatrist at Nantes, gave his professional opinion based on the available evidence. He concluded that Lequyer suffered from a “clear cyclothymia,” that is to say, a manic-depressive personality [Grenier 1951, 37].

2. Philosophy of Freedom

Renouvier’s edition of Lequyer’s work, noted above, bore the title La Recherche d’une première vérité [The Search for a First Truth]. The book is divided into three sections. The first, titled Comment trouver, comment chercher une première vérité? [How to Find, How to Search for a First Truth?], is prefaced by a brief quasi-autobiographical meditation, “La Feuille de charmille” [The Hornbeam Leaf]. The second and third sections are, respectively, Probus ou le principe de la science: Dialogue [Probus or the Principle of Knowledge: Dialogue] and Abel et Abel—Esaü et Jacob: Récit biblique [Abel and Abel—Esau and Jacob: Biblical Narrative]. Collections edited by Jean Grenier in 1936 and 1952 brought together most of Lequyer’s extant work, including excerpts from his correspondence. Curiously absent from Grenier’s editions is a meditation on love and the Trinity; longer and shorter versions of this were published in subsequent collections (Abel et Abel 1991, pp. 101-08; La Recherche 1993, pp. 319-22). An unfinished short story from Lequyer’s earlier years titled La Fourche et la quenouille [The Fork and the Distaff], edited by Goulven Le Brech, was published in 2010. Other collections have been published, but these form the corpus of Lequyer’s work.

“The Hornbeam Leaf” is Lequyer’s best known work. It was the one thing he wrote that he considered complete enough to distribute to his friends. It addresses, in the form of a childhood experience, the meaning and reality of freedom. Lequyer intended it to be the introduction to his work. It exhibits the best qualities of Lequyer’s writing in its dramatic setting, its poetic language, and its philosophical originality. Lequyer recalls one of his earliest memories as he played in his father’s garden. He is about to pluck a leaf from a hornbeam when he considers that he is the master of his action. Insignificant as it seems, the decision whether or not to pluck the leaf is in his power. He marvels at the idea that his act will initiate a chain of events that will make the world forever thereafter different than it might have been. As he reaches for the leaf, a bird in the foliage is startled. It takes flight only to be seized by a sparrow hawk. Recovering from the shock of this unintended consequence of his act, the child reflects on whether any other outcome was really possible. Perhaps the decision to reach for the leaf was one in a series of events in which each cause was itself the inevitable effect of a prior cause. Perhaps the belief that he could have chosen otherwise, that the course of events might have been different, is an illusion fostered by an ignorance of the antecedent factors bearing on the decision. The child is mesmerized by the thought that he might be unknowingly tangled in a web of necessity, but he recovers his faith in freedom by a triumphant affirmation of it.

Renouvier remarked that “The Hornbeam Leaf” recorded the point of departure of Lequyer’s philosophical effort [OC 3]. More than this, it illustrates the salient characteristics of freedom as Lequyer conceived them. For Lequyer, at a minimum, freedom involves the twin ideas that an agent’s decision is not a mere conduit through which the causal forces of nature operate and that it is itself the initiator of a chain of causes. Prior to the decision, the future opens onto alternate possibilities. The agent’s decision closes some of these possibilities while it opens others. After the decision is made, the feeling persists that one could have decided differently, and that the past would have been different because of the decision one might have made. Because the course of events is at least partially determined by the agent’s decision, Lequyer maintains that it creates something that, prior to the decision, existed only as a possibility. If one is free in this sense, then one is part creator of the world, and also of others. The child’s gesture leads to the bird’s death. Lequyer draws the corollary that the smallest of beginnings can have the greatest of effects that are unforeseen by the one who initiated the causal chain, a thought that makes even the least of decisions potentially momentous [OC 14, compare OC 201]. This is Lequyer’s version of what Edward Lorenz much later, and in a different context, dubbed “the butterfly effect”—a butterfly flaps its wings in Brazil which leads to a tornado in Texas.

For Lequyer, one’s decisions not only create something in the world, they double back on oneself. If one is free then, in some respects, one is self-creative. These ideas are expressed cryptically in Lequyer’s maxim which occurs in the closing pages of How to Find, How to Search for a First Truth?: “TO MAKE, not to become, but to make, and, in making, TO MAKE ONESELF” [OC 71]. When Lequyer denies that making is a form of becoming he is saying that the free act is not a law-like consequence of prior conditions. This is not to say that making or self-making is wholly independent of prior conditions. Lequyer borrows the language of Johann Fichte and speaks of the human person as a “dependent independence” [OC 70; compare OC 441]. Lequyer is clear that one is not responsible for having come to exist nor for all the factors of nature and nurture that brought one to the point of being capable of thinking for oneself and making one’s own decisions. All of these are aspects of one’s dependence and Lequyer often underscores their importance. On the other hand, one’s independence, as fragile and seemingly insignificant as it may be, is the measure of one’s freedom. This freedom, moreover, is the essential factor in one’s self-making. For Lequyer, it makes sense not only to speak of one’s decisions as being expressions of one’s character as so far formed, but also to speak of one’s character as an expression of one’s decisions as so far made.

Lequyer considers the objection that his view of freedom involves “a sort of madness of the will” [OC 54; compare OC 381]; by claiming that the free act, like a roll of dice, could go one way or another, Lequyer seems to imply that freedom is only randomness, a “liberty of indifference” undisciplined by reason. Lequyer replies that arbitrariness is indeed not the idea of freedom, but he claims that it is its foundation. In Lequyer’s view, one is oneself the author of the chance event and that event is one’s very decision. His meaning seems to be that indeterminism—the idea that, in some instances, a single set of causal factors is compatible with more than one possible effect—is a necessary but not a sufficient condition of acts for which we hold a person accountable. In the process of deliberation, motives are noticed and reasons are weighed until one decides for one course of action over another. The will is manifested in the sphere of one’s thought when one causes one idea to prevail over others and one’s hesitation is brought to an end. The act resulting in a decision may be characterized in any number of ways—capricious, selfish, reasonable, moral—but it is in no sense a product of mere brute force. The entire process of deliberation, Lequyer says, is animated by the self-determination of the will. Should an explanation be demanded, appealing to antecedent conditions for exactly why the decision was made one way rather than another, Lequyer replies that the demand is question-begging, for it presupposes determinism [OC 47]. The free act is not a mere link in a causal chain; it is at the origin of such chains. In Lequyer’s words, “To act is to begin” [OC 43].

It is clear that Lequyer did not believe that freedom and determinism can both be true. He acknowledged that we often act, without coercion, in accordance with our desires. Lequyer says that “the inner feeling”—presumably, introspectively discerned—guarantees it [OC 50]. Some philosophers look no further than this for a definition of freedom. For Lequyer, however, this is not enough, for non-human animals often act without constraint [OC 334, 484]. To speak of free will one must also include the idea that one is the ultimate author of one’s decisions. He counsels not to confuse the lack of a feeling of dependence upon causal conditions that would necessitate one’s decision with the feeling of independence of such conditions. The confusion of these ideas, Lequyer claims, leads us to believe that we have more freedom than we actually have. All that we are allowed to say, based on introspection, is that we sometimes do not feel necessitated by past events. An analogous argument for determinism is likewise inconclusive. When we come to believe through a careful examination of a past decision that causes were at work of which we were unaware and which strongly suggest that the decision was inevitable, we are not warranted in generalizing to all of our decisions, supposing that none of them are free [OC 50].

In the dramatic finale of “The Hornbeam Leaf” the child affirms his own freedom. This affirmation is not based on an argument in the sense of inferring a conclusion from premises that are more evident than freedom itself. Lequyer reaches a theoretical impasse—an aporia—on the question of freedom and necessity. Somewhat anticipating Freud, he never tires of emphasizing the depth of our ignorance about the ultimate causes of our decisions. Indeed, the final sentence of How to Find, How to Search for a First Truth? cautions that we never know whether a given act is free [OC 75]. Moreover, he denies that we experience freedom [OC 52; compare OC 349, 353]. He argues that this would involve the impossibility of living through the same choice twice over and experiencing the decision being made first in one way and then being made in the contrary way. The memory of the first choice—or at least the mere fact of its having taken place—would intrude on the second and thus it would not be the same choice in identical circumstances. Lequyer speaks, rather, of a “presentiment” of freedom, the stubbornly persistent sense that we have that, in a given circumstance, we could have chosen differently [OC 52]. Yet, Lequyer maintains, such is the extent of our ignorance—our lack of self-knowledge—that it is often easier to believe that one is free when one is not than to believe that one is free when one really is [OC 53].

Notwithstanding Lequyer’s many caveats about the limitations on freedom and even on knowing whether free will exists, he is above all a champion of human liberty. What remains to be explained is the ground of this affirmation. Despite the fragmentary nature of his literary remains, the general outline of his thinking is clear. How to Find, How to Search for a First Truth? begins as a Cartesian search for an indubitable first truth but it diverges from Descartes’ project in being more than a theoretical exercise. Lequyer speaks of the “formidable difficulty” that stands in the way of inquiry: if one seeks truth without prejudice one runs the risk of changing one’s most cherished convictions [OC 32]. He uses a Pascalian image to illustrate the attempt to seek truth without risk of losing one’s convictions. He says that it would be like walking along a road imagining a precipice on either side; something would be missing from the experience, “the precipice and the vertigo.” Lequyer continues in Pascal’s vein by raising the possibility that honest investigation may not support one’s faith. The heart can place itself above reason, but what one most desires is that faith and reason be in harmony [OC 33]. There is, finally, the difficulty that sincere doubt is “both impossible and necessary from different points of view” [OC 30]. It is impossible because doubting what is evident (for example, that there is a world independent of one’s mind) is merely feigned doubt; it is necessary because one cannot assume that what is evident is true (for example, even necessary truths may seem false and people have genuine disagreements about what they firmly believe); otherwise, the search for truth would never begin.

Lequyer’s differences with Descartes are also apparent in his treatment of the skeptical argument from dreaming: because dreams can feel as real as waking life, one cannot be certain that one is awake. Lequyer notes that the search for a first truth requires a sustained effort of concentration in which one actively directs one’s thoughts. In dreams, impressions come pell-mell and one is more a spectator of fantastic worlds than an actor sustaining one’s own thoughts. Lequyer concedes that he cannot be certain that he is awake, but he can be certain that he does not inhabit any ordinary dream. If one sleeps, it is one’s thoughts that one doubts; if one is awake, it is one’s memory that one doubts [OC 36]. Lequyer avers that the former is a less feigned doubt than the latter. Pressed further by the radical skeptic to justify one’s belief in the external world, Lequyer prefers the answer of the child: “Just because” [OC 37]. His discussion takes a decidedly existential detour as he reflects on the solitude implicit in the impossibility of directly knowing the thoughts of another. Lequyer’s is not Descartes’ academic worry of how we know that another person is not a mere automaton; it is rather the sense of isolation in contemplating the gulf between two minds even when there is a sincere desire on both parts to communicate [OC 37].

It is Lequyer’s treatment of the cogito (“I think”) that takes one to the heart of his philosophy of freedom. He acknowledges the certainty of Descartes’ “I think therefore I am” but criticizes his predecessor for leaving the insight obscure and therefore failing to make proper use of it [OC 329]. The obscurity, Lequyer says, is in the concept of a self-identical thinking substance—sum res cogitans. The cogito is precisely the activity of a thinking subject having itself as an object of thought. In the language of the phenomenologists, Lequyer is puzzled by the intentionality within self-consciousness—the mind representing itself to itself [compare OC 362]. He argues that there is an essentially temporal structure to this relation; the “self” of which one is aware in self-awareness is a previous state of oneself. Lequyer goes so far as to call consciousness “nascent memory” [OC 339-40]. This is a significant departure from Descartes, who does not even include memory in his list of characteristics of thought. Descartes says that by “thought” he means understanding, willing, sensing, feeling, and imagining (abstaining by methodical doubt, to be sure, from any judgment about the reality of the object of one’s thought). The omission of “remembering” is curious; “I (seem to) remember, therefore I am” is an instance of the cogito, and memory is not obviously reducible to any of the other characteristics of thought. Although Lequyer does not claim that self-memory is perfect, he maintains that each aspect of self-consciousness—as subject and as object—requires the other. Their unity, he maintains, is nothing other than the activity of unifying subject and object. Furthermore, the ongoing sequence of events that is consciousness requires that each emergent “me” become an object remembered by a subsequent “me.” The “Hornbeam Leaf” is itself the report of such an act of remembering.

For Lequyer, the analysis of the “I think” reveals a more fundamental fact, to wit, “I make.” The making, moreover, is a self-making, for one is continually unifying the dual and interdependent aspects of oneself as subject and as object [OC 329]. Because this process of self-formation is not deterministic, it is open-ended. Lequyer characterizes the relation of cause and effect in a free act as asymmetrical. He labels the relation from effect (subject) to cause (object) as “the necessary” because the subject would not be what it is apart from the object that it incorporates into self-awareness; however, he labels the relation from cause (object) to effect (subject) as “the possible” in the sense that the object remains what it is independent of the subject incorporating it. Lequyer says that “the effect is the movement by which the cause determines itself” [OC 473]. Lequyer’s asymmetrical view of causation, at least where the free act is concerned, diverges from that of the determinist. In deterministic thinking, necessity flows symmetrically from cause to effect and from effect to cause; “the possible,” for determinism, is only a product of our ignorance of the causal matrix that produces an effect. Lequyer agrees that ignorance is a factor in our talk of possibility. He notes that the hand that opens a letter containing happy or fatal news still trembles, hoping for the best and fearing the worst, entertaining each “possibility,” even though one knows that one of the imagined outcomes is already impossible [OC 60]. Lequyer’s indeterminism, on the other hand, allows that possibilities outrun necessities, that the future is sometimes open whether or not we are ignorant of causes.

Lequyer writes that “it is an act of freedom which affirms freedom” [OC 67]. As already noted, for Lequyer, free will is not deduced from premises whose truth is more certain than the conclusion. We have also seen that he denies that free will can be known directly in experience [OC 353]. The logical possibility remains—entertained by the child in “The Hornbeam Leaf” and spelled out in greater detail in the fourth part of How to Find, How to Search for a First Truth?—that free will is an illusion, that one’s every thought and act is necessitated by the already completed course of events reaching into the past before one’s birth. Lequyer addresses the impasse between free will and determinism with the following reasoning (Renouvier called this Lequyer’s double dilemma). Either free will or determinism is true, but which one is true is not evident. Lequyer says that one must choose one or the other by means of one or the other. This yields a four-fold array: (1) one chooses freedom freely; (2) one chooses freedom necessarily; (3) one chooses necessity freely; (4) one chooses necessity necessarily [OC 398; compare Renouvier’s summary, OC 64-65]. One’s affirmation should at least be consistent with the truth, which means that the array reduces to the first and last options. Of course, the determinist believes that the second option characterizes the advocate of free will; by parity of reasoning, the free willist believes that the third option characterizes the determinist. Again, there is stalemate.

Inspired by the example of mathematics, Lequyer proposes to break the deadlock by considering “a maximum and a minimum at the same time, the least expense of belief for the greatest result” [OC 64, 368]. He compares the hypotheses of free will and determinism as postulates for how they might make sense of or fail to make sense of human decisions. Lequyer, it should be noted, conceives the non-human world of nature as deterministic, so his discussion of free will is limited to the human realm and, in his theology, to that of the divine [OC 475]. It is in considering the two postulates, according to Lequyer, that the specter of determinism casts its darkest shadow. First, with Kant, Lequyer accepts that free will is a necessary postulate to make sense of the moral life [OC 345; compare OC 484-85]. If no one could have chosen otherwise than they chose, there is no basis for claiming that they should have chosen otherwise; judgments of praise and blame, especially of past actions, are groundless if determinism is true. Second, Lequyer goes beyond Kant by claiming that free will is necessary for making sense of the search for truth [OC 398-400]. Lequyer’s reasoning is not as clear as one would like, but the argument seems to be as follows. The search for truth presupposes that the mind can evaluate the reasons for and against a given proposition. The mechanisms of determinism are not, however, sensitive to reasons; indeed, no remotely plausible deterministic laws have been found or proposed for understanding intellectual inquiry. Renouvier elaborated this point by saying that, as the freedom of indifference involves (as Lequyer says) an active indifference to reasons, so determinism involves a passive indifference to reasons. Thus, determinism, by positing necessity as the explanation for our reasoned judgments, undermines the mind’s sensitivity to reasons and therefore allows no way clear of skepticism.

Lequyer’s reasoning, even if it is sound, does not decide the issue in favor of free will. Nor does Lequyer claim that it does. Determinism may yet be true and, if Lequyer is correct, the consequences are that morality is founded on a fiction and that we can have no more trust in our judgments of truth and falsity than we can have in a random assignment of truth values to propositions. In the final analysis, the truth that Lequyer seeks is less a truth that is discovered than a truth that is made. The free act affirms itself, but because the act is self-creative, it is also a case of the act creating a new truth, namely, that such and such an individual affirmed freedom. If freedom is true, and if Lequyer’s reasoning is correct, then the one who creates this fact has the virtue of being able to live a life consistent with moral ideals and of having some hope of discovering truth.

3. Theological Applications

Renouvier deemphasized the theological dimensions of Lequyer’s thought. He said he was bored by Lequyer’s views on the Trinity. He suggested demythologizing Lequyer’s religious ideas so as to salvage philosophical kernels from the theological husk in which they were encased. Obviously, Lequyer did not agree with this approach. Indeed, he devoted approximately twice as much space in his work to topics in philosophy of religion and Christian theology as he did to strictly non-religious philosophizing. Grenier convincingly argued that Lequyer’s design was a renewal of Christian philosophy [OC 326]. One may, however, sympathize with Renouvier’s concerns, for a few of Lequyer’s ruminations are now dated. He seemed to have no knowledge of the sciences that, in his own day, were revealing the astounding age of the earth and the universe. To his mind, Adam and Eve were historical figures, and he speculated that Christ would return within a few years because of the symmetry between the supposed two-thousand-year interval from the moment of creation until the time of Christ and the fact that nearly two thousand more years had elapsed since Jesus walked the earth [OC 439-40]. Despite these limitations, Lequyer’s treatment of religious themes is not, for the most part, dependent on outdated science. His views prefigure developments in philosophical theology in the century and a half since his death, giving his thought a surprisingly contemporary flavor.

Lequyer’s more explicitly theological works are as notable for their literary qualities as for their philosophical arguments. Probus or the Principle of Knowledge, also known as the Dialogue of the Predestinate and the Reprobate, is a nearly complete work in three parts. The first section is a dialogue between two clerics who have been made privy to the future by means of a tableau that pictures for them the contents of divine foreknowledge. Neither character is named, but one is sincerely faithful while the other exhibits only a superficial piety. They see in the tableau that the hypocritical cleric will repent and enter heaven but the pious cleric will backslide and live with the demons. When “the reprobate” begins to despair, “the predestinate” tries to offer him hope of going to heaven. Hope comes in the form of arguments from medieval theologians that are designed to show the compatibility of God’s foreknowledge and human freedom. In the style of Scholastic quaestiones disputatae, the clerics debate the classical arguments. The pious cleric criticizes and is unconvinced by each argument. In the second part, the impious cleric appeals to the tableau for events occurring twenty years in the future. The pious cleric has become a master in a monastery and, ironically, has become a partisan of the very arguments that he had earlier criticized. In the future scene, the master monitors and eventually enters a Socratic discussion between Probus, a young divine, and Caliste, a child. Probus defends the idea that God faces a partially open future precisely because God is perfect and must know, and therefore be affected by, what the creatures do. The scene closes as the master counters these arguments with the claim that the future is indeterminate for human perception but determinate for God. The final and shortest section returns to the clerics. 
The reprobate’s closing speech answers through bitter parodies the ideas that he has just heard uttered by his future self, the master. The speech reveals that the clerics are having dreams that will be mostly forgotten when they awake. The drama closes when they wake up, each remembering only the end of his dream: one singing with the angels, the other in agony with the demons. Satan, who appears for the first time, has the final word. He will lie in wait for one of the men to stumble.

The dialogue is operatic in its intricacy and drama; its philosophical argument is complex and rigorous. The intertwining of its literary and philosophical aspects is evident in the final pages when the clerics are made to forget the content of their shared dream. They must forget their dream in order for the revelation of the dream to come to pass without interference from the revelation itself. Likewise, Satan is not privy to the content of the dreams, so he must lie in wait, not knowing whether he will catch his prey. It is clear both from the tone of the dialogue and from other things that Lequyer wrote that the reprobate in the first and third parts and Probus in the second part are his spokespersons. The overall message of the dialogue is that the position on divine knowledge and human freedom that had been mapped out by Church theologians is nightmarish. Reform is in order, both in the meaning of freedom and in the ideas about God that follow from it. In short, the dialogue is a good example of Lequyer’s attempt to renew Christian philosophy. It should be said, however, that specifically Christian (and Jewish) ideas are used primarily by way of illustration; thus, it is less Christian philosophy than philosophical theology that is under consideration.

Lequyer was conversant with what most of the great theologians said about the foreknowledge puzzle—from Augustine and Boethius to Albert the Great, Thomas Aquinas, and John Duns Scotus. The concluding fragments of How to Find, How to Search for a First Truth? make clear that he rejected the Thomistic claim that the creatures can have no effect on God. The relation from the creatures to God, says Lequyer, is as real as the relation from God to the creatures [OC 73]. This rejection of Thomism follows from his analysis of freedom as a creative act that initiates causal chains. One’s free acts make the world, other persons, and even oneself, different than they otherwise would have been. Lequyer never doubted that God is the author of the universe, but the universe, he emphasized, includes free creatures. Thus, he speaks of “God, who created me creator of myself” [OC 70]. Aquinas explained that, in the proper sense of the word, creativity belongs to God alone; the creatures cannot create. For Lequyer, on the other hand, God has created creatures that are lesser creators. That they are God’s creation entails that they are dependent upon God, but since they are also creative they are in some measure independent of God. Because the acts of a free creature produce novel realities, they also create novel realities for God. In a striking turn of phrase, Lequyer says that the free acts of the creatures “make a spot in the absolute, which destroys the absolute” [OC 74].

Lequyer never doubts the omniscience of God. What is in doubt is what there is for God to know and how God comes by this knowledge. The dominant answers to these questions, expressed most thoroughly by Aquinas, were that God has detailed knowledge of the entire sweep of events in space and time—all that has been, is, and will be—and this knowledge is grounded in the fact that God created the universe. The deity has perfect self-knowledge and, as the cause of the world, knows the world as its effect. God’s creativity, according to the classical theory, has no temporal location, nor is omniscience hampered by time. Divine eternity, in the seminal statement of Boethius, is the whole, complete, simultaneous possession of endless life [compare OC 423]. Lequyer’s theory of free will challenges Aquinas’ view of the mechanics of omniscience. On Lequyer’s view, God cannot know human creative acts by virtue of creating them. To be sure, the ability to perform such acts is granted by God, but the acts themselves are products of the humans that make them and are not God’s doing. These lesser creative acts are the necessary condition of God’s knowledge of them; they create something in God that God could not know apart from their creativity. Their creative choices, moreover, are not re-enactments in time of what God decided for them in eternity, nor do they exist in eternity [OC 212]. It follows that they cannot be present to God in eternity. If it is a question of the free act of a creature, what is present to God is that such and such a person is undecided between courses of action and that both are equally possible. God too faces an open future precisely because more than one future is open to a creature to help create. In Lequyer’s words, “A frightful prodigy: man deliberates, and God waits!” [OC 71].

It is tempting to say that Lequyer offers a view of divine knowledge as limited. Lequyer demurs. As Probus explains, it is no more a limitation on God’s knowledge not to be able to know a future free act than it is a limitation on God’s power not to be able to create a square circle—the one is as impossible as the other [OC 171]. A future free act is, by its nature, indeterminate and must be known as such, even by God. Lequyer counsels that his view of divine knowledge only seems to be a limitation on God because we have an incorrect view of creativity. Prefiguring Henri Bergson, he speaks of the “magic in the view of accomplished deeds” that makes them appear, in retrospect, as though they were going to happen all along [OC 280; compare OC 419]. Lequyer—through Probus—speaks of divine self-limitation, but this is arguably an infelicitous way for him to make his case [OC 171]. It is not as though God could remove blinders or exert a little more power and achieve the knowledge of an as yet to be enacted free decision. Prior to the free decision, there is nothing more to be known than possibilities (and probabilities); by exerting more power, God could deprive the decision of its freedom, but it would, by the nature of the case, no longer be a free decision that God was foreseeing. Lequyer argues, however, that one may freely set in motion a series of events that make it impossible for one’s future self to accomplish some desired end. In that case, it would have been impossible for God to foreknow the original free decision, but God would infallibly know the result once the decision had been made.

Lequyer does not tire of stressing that if God is omniscient, then God must know the extent to which the future is open at any given juncture [OC 205]. Recall that Lequyer is mindful of how easily we fool ourselves into thinking we are free when we are not. We mistake merely imagined possibilities for real possibilities. God is not subject to this limitation. For these reasons, his view of divine creativity and knowledge allows for a significant degree of providential control, although there can be no absolute guarantees that everything God might wish to occur will occur. Risk remains. Lequyer disparages the idea that every detail of the world is willed by God; this view of divine power, he says, yields “imitations of life” that make of the work of God something frivolous [OC 212]. Even if creatures are ignorant of the extent of their freedom, free will is nonetheless real and so the world is no puppet show. When it comes to the question of prophecy, Lequyer emphasizes how often biblical prophecies are warnings rather than predictions. Those involving predictions, especially of free acts (for example, Peter’s denials of Christ and Judas’ betrayal), can be accounted for, he avers, by highlighting human ignorance and pride in comparison with divine knowledge of the extent to which the future is open [compare OC 206-07]. God is able to see into the heart of a person to know perfectly what is still open for the person not to do and what is certain that he or she will do. On Lequyer’s view, a deed for which a person is held accountable must be free in its origin but not necessarily in its consequences. One may freely make decisions that deprive one’s future self of freedom, but this does not relieve the person of moral accountability [OC 211].

A peculiarity of Lequyer’s theory as it appears in Probus is that he denies the law of excluded middle where future contingents are concerned. In this, he follows what he understood (and what some commentators understand) to be Aristotle’s views. Lequyer claims that it is true to say of things past or present that they either are or they are not. On the other hand, for future contingents (like free decisions that might go one way or another), Lequyer says that both are false; where A is a future contingent, both A-will-be and A-will-not-be are false [OC 194]. Doubtless this is the least plausible aspect of Lequyer’s views, since abandoning the law of excluded middle is an extremely heavy price to pay for an open future. It is interesting to speculate, however, on what he would have thought of Charles Hartshorne’s view that the contradictory of A-will-be is A-may-not-be and the contradictory of A-will-not-be is A-may-be. This makes A-will-be and A-will-not-be contraries rather than contradictories. As in Aristotle’s square, contraries may both be false; in this way, Lequyer could have secured a doctrine of an open future at no cost to elementary logic. He certainly leaned in this direction in the closing pages of How to Find, How to Search for a First Truth? There, he declares that it is contradictory to say that a thing will be and that it is entirely possible that it may not be [OC 75].
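Hartshorne’s pairings can be displayed on a traditional square of opposition. The schema below is offered only as an illustrative reconstruction; the layout and labels are not Lequyer’s or Hartshorne’s own notation:

```latex
% Square of opposition for a future contingent A (illustrative reconstruction).
% The diagonals are contradictories, so exactly one member of each pair is true:
%   "A will be"     vs. "A may not be"
%   "A will not be" vs. "A may be"
\[
\begin{array}{lcl}
\text{A will be} & \text{--- contraries ---}    & \text{A will not be}\\[1ex]
\text{A may be}  & \text{--- subcontraries ---} & \text{A may not be}
\end{array}
\]
% When the future is genuinely open, both "will" statements are false and
% both "may" statements are true; neither contradictory pair is violated.
```

On this scheme, the joint falsity of the two “will” statements, which Lequyer asserts for future contingents, costs nothing logically, since contraries may both be false.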

Besides Probus, the curiously titled Abel and Abel—Esau and Jacob: Biblical Narrative is Lequyer’s other major work that addresses specifically religious themes. As the title suggests, it is closely tied to biblical motifs. Although it is yet another exploration of the idea of freedom, the examination of philosophical arguments is replaced by a fiction informed by philosophical ideas. Lequyer imagines an old man of Judea, living a little after the time of Christ, who has quoted St. Paul to his grandson that God preferred Jacob to Esau before their birth (Romans 9.11). The child is astonished and saddened by the statement, because it seems to place God’s goodness in doubt. The old man tells a story to the child that is designed to help explain the enigma. The tale, set some generations after Jacob and Esau, concerns the identical twin sons—identical even in their names, “Abel”—of a widowed patriarch, Aram. Before telling this story, however, he recounts the biblical episode of Abraham’s attempted sacrifice of Isaac (Gen. 22). He explains that he wishes the grandson to be reminded of Isaac under Abraham’s knife when he tells the story of the Abels, saying, “Faith is a victory; for a great victory, there must be a great conflict” [OC 235]. In the epilogue, the wizened grandfather gives what amounts to a Christian midrash on the story of Jacob and Esau with special attention to Jacob’s wrestling with the angel (Gen. 32.24-32). Thus, the story of the Abel twins is intercalated between two biblical stories. The theme uniting the three stories is God’s tests and the possible responses to them.

The Abel twins are as alike as twins could be, sharing thoughts and sometimes even dreams, but always in bonds of love for one another. They are introduced to an apparent injustice that saddens them when two brothers, slaves of their father, commit a theft and Aram pardons one but punishes the other. The seeming unfairness of the slave’s punishment reminds the twins of Esau’s complaint that he had been cheated when his brother Jacob stole their father’s blessing from him (Gen. 27). The Abels come close to passing judgment on their own father for treating the guilty slaves unequally. They resist the thought and then are told by Eliezer, the senior servant in the household, that Aram recognized the slave he condemned as having led his companions into some misdeed prior to having committed the theft. The boys are relieved to hear their father vindicated. His judgment of the slaves only seemed unjust to the twins because they lacked information that their father possessed. The episode of the unequally treated thieves serves as a parable counseling faith in the justice of God even when God seems to act in morally arbitrary ways.

The twins themselves must also face the test of being treated unequally. Aram shows them an elaborately decorated cedar ark. He explains that the day will come when one of the twins will be favored over the other to open the ark and discover inside the name which God reserves for him and his brother. Mysteriously, the name will apply to both of them but it will separate them as well. The dreams of the twins are disturbed by this favor that will separate them. Aram leaves, perhaps never to return, giving charge of his sons to Eliezer. After a time, Eliezer brings the boys again to the cedar ark and there explains to them the decree of Aram. The favored son will be given a ring to denote that he is the chosen of God. The other son may either submit to his brother or depart from the country with a third of Aram’s inheritance, leaving the other two-thirds of the wealth for the chosen Abel. Their father’s possessions are great, so a third of the inheritance is a significant sum. Nevertheless, the fact remains that the twins, equal in every way, will have been treated unequally by Aram’s decree.

It is not given to the child who is being told the story of the Abel twins (or to the reader) to know the outcome of their trial. Instead, he is told of three mutually exclusive ways in which the story could go, depending on how the brothers respond to their unequal treatment. In the first scenario, the favored Abel succumbs to pride and his brother shows resentment. Calling to mind the name of the first murderer in the Bible, Lequyer writes, “And, behind the sons of Aram, Satan who was promising himself two Cains from these two Abels, was laughing” [OC 265]. In the second scenario, the favored brother refuses the gift out of a generous feeling for his brother. In that case, Lequyer says that the favored Abel can be called “the Invincible.” In the third scenario, the favored brother, in great sorrow for what his brother has not received, accepts the ring while the other Abel, out of love for his twin, rejoices in his brother’s gift and helps him to open the gilded cedar chest. Lequyer says that, in this case, the other Abel can be called “the Victorious.” Lequyer presents the three scenarios in the order in which he believes they ought to be valued, from the least (the first scenario) to the greatest (the third scenario). When the ark is opened the mystery is revealed of the single name that is given to the brothers that nevertheless distinguishes them. Written within are the words: YOUR NAME IS: THAT WHICH YOU WERE IN THE TEST [OC 276]. The test was to see how the twins would respond to the apparent injustice of one being favored over the other. In effect, God’s predestined name for the brothers is like a mathematical variable whose value will be determined by the choices that the brothers make in response to the test.

Lequyer is clear that the lesson of Abel and Abel is not simply that God respects the free will of the twins. One also learns that God’s richer gifts may be more in what is denied than in what is given [OC 271]. Put somewhat differently, the denial of a gift may itself be a gift of an opportunity to exercise one’s freedom in the best possible way. To be sure, the favored Abel has his own opportunities. By accepting the ring, graciously and without pride, he is a noble figure. He is greater still (“the Invincible”) if he refuses the ring out of love for his brother. It is open to the other Abel, however, to win an incomparable victory (signified by the name, “the Victorious”) should his brother accept the ring. He is victorious over the apparent injustice done to him and over the resentment and envy he might have felt. He has been given a great opportunity to exhibit a higher virtue and he has taken it. In Lequyer’s words, “It is sweet to be loved . . . but it is far sweeter to love” [OC 272]; he argues that one can be loved without finding pleasure in it, although this may be a fault, but one cannot love without feeling joy. It should also be noted that by becoming “the Victorious” the other Abel in no way diminishes the virtue or the reward open to his twin. In this way, Lequyer avers, one may go far in vindicating God’s justice as well as God’s magnificence (that is, giving more to a person than is strictly merited by their deeds). This is a long way from a complete theodicy but Lequyer surely meant these reflections to be an important contribution to a renewal of Christian philosophy.

In the epilogue Lequyer reemphasizes the importance of accepting the will of God even when it seems harsh. The grandfather returns to the story of Jacob and Esau whose unequal treatment so saddened the grandson in the first place. According to the grandfather’s imaginative retelling, Jacob was tested by God when he wrestled with the angel. As Jacob anxiously awaits the arrival of Esau who had vowed to kill him (Gen. 27.41), he is filled with terror contemplating “the stubbornness of the Lord’s goodwill” in allowing him to buy Esau’s birthright (Gen. 25.29-33) and to steal Isaac’s blessing [OC 296]. Perhaps he fears that Esau will finally exact God’s judgment against him. A stranger approaches Jacob from the shadows and demands to know whether he will bless the name of God even if God should strike him. Jacob promises to bless God. He is shown several terrifying episodes in his future, from the rape of his daughter Dinah (Gen. 34.1-5) to the presumed death of his son Joseph (Gen. 37.33). In the final vision, a perfectly righteous man he does not recognize suffers an ignominious death on a cross. After each vision, Jacob “wrestles” with the temptation to impiety but instead blesses God’s name. Jacob is thus found worthy of the favors bestowed upon him. As the stranger leaves, Jacob sees his face and recognizes it as the face of the man on the cross. When morning comes, Esau arrives and greets his brother with kisses of fraternal love (Gen. 33.4).

Probus and Abel and Abel address different problems and in very different styles. Yet, in some sense they are a diptych, to borrow the apt metaphor of André Clair. Each work deals with a different kind of necessity. The necessity in Probus (also in How to Find, How to Search for a First Truth?) is that of deterministic causes resulting inevitably in certain effects, including, among the latter, one’s supposedly free decisions. The necessity in Abel and Abel is the inalterability of the past, especially as it pertains to Aram’s decree. The decree sets the conditions of the test but does not determine its outcome. This is very different from the decree of damnation of the unhappy cleric. The tableau of God’s foreknowledge includes every detail of how the cleric will act in the future. In the dialogue, there is no equivalent of the “name” that is written in the cedar ark, no variable whose value can be decided by one’s free choice. Indeed, Probus can be read as an extended reductio against traditional teachings about foreknowledge and predestination. The predestinate fails to console the reprobate. There can be no hope for the reprobate, for he knows with certainty that he will be damned. The dialogue, however, offers hope for the reader, the hope of breaking free of a nightmarish theology by rethinking the concepts of freedom and the nature of God along the lines that the character of Probus suggests—after all, Probus is the name of the dialogue. Abel and Abel reinforces the idea that God faces a relatively open future. The story does not tell which of the three options is chosen, nor does it suggest that one of them is predestined to occur.

The story of the Abel twins goes beyond the dialogue, however, by returning to the question raised in How to Find, How to Search for a First Truth? of how self-identity is constructed. Clair argues convincingly that Lequyer means to generalize from the Abel twins to all human beings. The twins represent the fact that one’s self-identity is not merely a question of not being someone else. They are different from each other but neither acquires a new “name”—that is, a distinctive identity—apart from exercising their freedom in response to the test. This is consistent with Lequyer’s theme of the self as a product of self-creative acts, although the self-creativity of the twins most clearly manifests itself in relation to other persons. In Abel and Abel, there is a shift in the question of self-creativity from metaphysics to axiology. The fulfillment of self-creativity, which is to say its highest manifestation, is in love. The “I” of self-creativity becomes inseparable from the “we”. Lequyer appropriates this idea for theology in his reflections on the Trinity. He says that a Divine Love that cannot say “You” to one that is equal to itself would be inconsolable by the eternal absence of its object [Abel et Abel 1991, 101]. If God is love, as Christianity maintains (I John 4.8), then the unity of God requires a plurality within the Godhead.

4. Philosophical Legacy

Renouvier once said that he saved Lequyer’s work from sinking [Esquisse d’une classification systématique, v. 2, 382]. In view of Lequyer’s drowning, it is a fitting if somewhat macabre metaphor. Renouvier often quoted his friend’s work at length in his own books. His edition of The Search for a First Truth, limited though it was to one hundred twenty copies, ensured that Lequyer’s philosophy was presented in something like a form of which he would have approved. Renouvier included a brief “Editor’s Preface” but his name appears nowhere in the book. In publishing the book, it was his friend’s contribution to philosophy that he intended to preserve and celebrate, not his own. More widely available editions of the book were published in 1924 and 1993. Another indication of Renouvier’s respect is the marker he was instrumental in erecting over Lequyer’s grave. The inscription reads in part, “to the memory of an unhappy friend and a man of great genius.” Throughout his career he called Lequyer his “master” on the subject of free will and he took meticulous care in attributing to Lequyer the ideas that he borrowed from him. In Renouvier’s last conversations, as recorded by his disciple Louis Prat, he quoted Lequyer’s maxim, “TO MAKE . . . and, in making, TO MAKE ONESELF” as a summary of his own philosophy of personalism [Derniers entretiens, 64].

Others did not take as much care as Renouvier in giving Lequyer the credit that he was due. William James learned of Lequyer from reading Renouvier’s works and wrote to him in 1872 inquiring about The Search for a First Truth which he had not been able to locate through a bookstore. Renouvier sent him a copy which he read, at least in part, and which he donated to the Harvard Library. The essential elements of James’s mature views on free will and determinism closely parallel those of Lequyer—freedom is not merely acting in accordance with the will, the impossibility of experiencing freedom, the importance of effort of attention in the phenomenon of will, the reality of chance, the theoretical impasse between freedom and necessity, and the idea that freedom rightly affirms its own reality. James’s Oxford Street/Divinity Avenue thought experiment in his essay “The Dilemma of Determinism” could be interpreted as an application of a similar passage in the third section of How to Find, How to Search for a First Truth? [OC 52]. There are, to be sure, profound differences between James and Lequyer on many subjects, but where it is a question of free will and determinism the similarities are uncanny.

James always credited Renouvier for framing the issue of free will in terms of “the ambiguity of futures,” but it is clear that Renouvier was a conduit for the ideas of Lequyer. This is nowhere more evident than in James’s 1876 review of two books, by Alexander Bain and Renouvier, published in the Nation. He praises Renouvier’s ideas about freedom, but the views he highlights are the very ideas that Renouvier attributed to Lequyer. In one instance, he attributes a quotation from Lequyer to Renouvier himself; the unwary reader, like James, assumes that it is Renouvier speaking. In his personal letters James mentions Lequyer by name, but not in any of his works written for publication. It is clear, however, that he thought highly of him. In The Principles of Psychology (1890), James mentions “a French philosopher of genius” and quotes a phrase from the concluding section of How to Find, How to Search for a First Truth? He cites the same phrase, slightly altered, in Some Problems of Philosophy, but again without revealing the name of its author [For references, see Viney 1997/2009].

Another famous philosopher who quoted Lequyer without mentioning his name is Jean-Paul Sartre. Sartre may have learned of Lequyer in 1935 when he sat on the board of editors for the Nouvelle Revue Française. The board was considering whether to publish Grenier’s doctoral thesis, La Philosophie de Jules Lequier. The decision was against publication, but not without Sartre objecting that there was still interest among readers in freedom. In 1944, Sartre responded to critics of existentialism and affirmed as his own the saying, “to make and in making to make oneself and to be nothing except what one has made of oneself.” This is a nearly direct quote from Lequyer. Jean Wahl, who edited a selection of Lequyer’s writings, maintained that Sartre borrowed the principal idea of L’existentialisme est un humanisme (1945) from Lequyer, to wit, that in making our own choices, we are our own creators. Lequyer is not quoted in that presentation. Seven years later, however, in a discussion of Stéphane Mallarmé’s poetry, Sartre again mentions Lequyer’s maxim, placing it in quotation marks, but without reference to the name of the Breton. If one may speak of Lequyer’s anonymous influence on James, one may perhaps speak of Lequyer’s anonymous shadow in the work of Sartre [For references see Viney 2010, 13-14].

The irony in Sartre’s quotations of Lequyer’s maxim is that he uses it not only to express a belief in freedom but also to express his atheism. Sartre rejected the idea that God creates creatures in accordance with a detailed conception of what they will be. This is what Sartre would characterize as essence preceding existence. The formula of Sartre’s existentialism is that existence precedes essence. In Sartre’s words, it is not the case that “the individual man is the realization of a certain concept in the divine understanding” [Existentialisme est un humanisme, 28]. Of course, Lequyer agrees, but rather than adopting atheism he opted for revising the concept of God as one capable of creating other, lesser, creators. Grenier outlined Lequyer’s theology in his dissertation (just mentioned) but there is no indication—unless his silence says something—of what Sartre thought of it. Other philosophers, however, did not remain silent on Lequyer’s suggestions for revising traditional ideas about God.

After Renouvier, Grenier, and Wahl, the philosopher who made most explicit use of Lequyer’s ideas and promoted their importance was the American Charles Hartshorne. Hartshorne learned of Lequyer from Wahl in Paris in 1948. By that time, Hartshorne was far along in his career with well-developed views of his own in what is known as process philosophy and theology. Nevertheless, he thereafter consistently promoted Lequyer’s significance as a forerunner of process thought. He often quoted the Lequyerian phrase, “God created me creator of myself” and cited Lequyer as the first philosopher to clearly affirm a bilateral influence between God and the creatures. With Hartshorne, Lequyer ceased being, as in James and Sartre, the anonymously cited philosopher. Hartshorne included the first English-language excerpt from Lequyer’s writings in his anthology, edited with William L. Reese, Philosophers Speak of God (1953).

Harvey H. Brimmer II (1934-1990), one of Hartshorne’s students, wrote a dissertation titled Jules Lequier and Process Philosophy (1975), which included as appendices translations of How to Find, How to Search for a First Truth? and Probus. This was the first book-length study of Lequyer in English. Brimmer argued, among other things, that the distinction for which Hartshorne is known between the existence/essence of God and the actuality of God is implicit in Lequyer’s thought. According to this idea, God’s essential nature (including the divine existence) is immutable and necessary but God is ever open to new experiences as the particular objects of God’s power, knowledge, and goodness, which are contingent, come to be. For example, it is God’s nature to know whatever exists, but the existence of this particular bird singing is contingent, and so God’s knowledge of it is contingent. Brimmer seems to be on firm footing, for Lequyer says both that God is unchanging and that there can be a change in God [OC 74, compare OC 243].

Hartshorne’s admiration for Lequyer introduced, if unintentionally, its own distortion, as though the only things that matter about Lequyer were the ways in which he anticipated process thought. It may be more accurate, for example, to interpret Lequyer as a forerunner of an evangelical “open theism”—at least a Catholic version—than of process philosophy’s version of divine openness. Lequyer and the evangelical open theists, for instance, affirm, whereas Hartshorne denies, the divine inspiration of the Bible and the doctrine of creation ex nihilo. We may, nevertheless, accentuate the positive by noting that many of Lequyer’s central ideas are incarnated in each variety of open theism. Also noteworthy is that some of those evangelicals who identify themselves as open theists—William Hasker, Richard Rice, and Gregory Boyd—were influenced to a greater or lesser extent by Hartshorne. That Lequyer is an important, if not the most important, pioneer of an open view of God cannot be doubted. Moreover, the combination of literary imagination and philosophical rigor that he brought to the exploration of an open view of God, especially in Probus and Abel and Abel, is unmatched.

The philosopher to whom Lequyer is most often compared is Kierkegaard. Each philosopher endeavored, in the words of Clair, to “think the singular” [Title of Clair 1993]. They would not allow, after the manner of Hegel, a dialectical aufheben in which, they believed, the individual is swallowed by the absolute [OC 347]. Choice and responsibility are central themes for both philosophers. The same can be said of the subject of faith and the “audacity and passion” (Lequyer) that it requires [OC 501]. Both men blurred the line between literature and philosophy, as often happens in superior spirits. Perhaps the best example of this is that they developed what might be called the art of Christian midrash, amending biblical narratives from their own imaginations to shed new light on the text. As Lequyer said in a Kierkegaardian tone, the Scriptures have “extraordinary silences” [OC 231]. Lequyer’s treatment of the story of Abraham and Isaac bears some similarities with what one finds in Kierkegaard’s Fear and Trembling. Both philosophers warn against reading the story in reverse as though Abraham knew all along that God would not allow Isaac to die. Lequyer says that Abraham faced a terrifying reversal of all things human and divine.

If there is a common idea that unites Lequyer and Kierkegaard it is the revitalization of Christianity. Yet, this commonality begins to dissolve under a multitude of qualifications. Kierkegaard’s criticisms of the established church in Denmark were in the truest spirit of Protestantism. Except for an early period of emotional detachment from the church, Lequyer was loyal to Catholicism. The renewal of Christianity meant something different for each philosopher. Kierkegaard spoke of reintroducing Christianity into Christendom and he maintained that the thought behind his whole work was what it means to become a Christian. A distant analogy in Lequyer’s polemic to what Kierkegaard calls “Christendom” is the reasoning of the doctors of the church. Lequyer says that the reasoning of the doctors never had any power over him, even as a child [OC 13]. Whereas Kierkegaard launched an assault on the idea of identifying an institution with Christianity, Lequyer targets the theologians whose theories he believes undercut belief in the freedom of God and of the creatures. Lequyer’s willingness to engage medieval theology on its own terms, matching argument with argument in an attempt to develop a more adequate, logically consistent, and coherent concept of God, stands in contrast to Kierkegaard’s negative dialectic that leads to faith embracing paradox.

5. Conclusion

Lequyer wrote to Renouvier in 1850 that he was writing “something unheard of” [OC 538]. The way in which his ideas and his words have sometimes been invoked without mention of his name makes this sadly ironic. Too often he has been heard from but without himself being heard of. Until recently, the unavailability of his writings in translation tended to confine detailed knowledge of his work to francophones. To make matters more difficult, as Grenier noted, he is something of an ἅπαξ (hapax)—one of a kind. His philosophy does not readily fit any classification or historical development of ideas. Grenier wryly commented on those eager to classify philosophical schools and movements: “Meteors do not have a right to exist because they enter under no nomenclature” [Grenier 1951, 33]. The same metaphor, used more positively, is invoked by Wahl in his edition of Lequyer’s writings. Lequyer, he remarked, left mostly fragments of philosophy, but he left “brief and vivid trails” in the philosophical firmament.

Lequyer worked outside the philosophical mainstream. Yet, he can be regarded, in the expression of Xavier Tilliette, as a scout or a precursor of such diverse movements as personalism, pragmatism, existentialism, and openness theologies. Of course, it is an honor to be considered in such a light. On the other hand, like a point on the horizon on which lines converge, the distinctiveness and integrity of Lequyer’s own point-of-view is in danger of being lost by such a multitude of comparisons. It does not help matters that Lequyer failed to complete his life’s work. What remains is often reminiscent of Pascal’s Pensées: nuggets of insight and suggestions for argument are scattered throughout the drafts that he made of his thought. In any event, Goulven Le Brech’s assessment seems secure: “The fragmentary and unfinished work of Jules Lequier is far from having given up all its secrets” [Cahiers Jules Lequier, v. 1, 5].

6. References and Further Reading

  • The abbreviation “OC” refers to Œuvres complètes, Jean Grenier’s edition of Lequyer’s works published in 1952. “Hémon” refers to Prosper Hémon’s biography of Lequyer published in Abel et Abel (1991).
  • The Fonds Jules Lequier [Jules Lequier Archives] are at the University of Rennes. Beginning in 2010, Les amis de Jules Lequier has published annually, under the editorship of Le Brech, Cahiers Jules Lequier [Jules Lequier Notebooks], which includes articles, archival material, and previously published but difficult to find material.

a. Primary Sources

  • Lequier, Jules. 1865. La Recherche d’une première vérité, fragments posthumes [The Search for a First Truth, Posthumous Fragments]. Edited by Charles Renouvier. (Saint-Cloud, Impr. de Mme Vve Belin).
  • Lequier, Jules. 1924. La Recherche d’une première vérité, fragments posthumes, recueillis par Charles Renouvier. Notice biographique, par Ludovic Dugas. Paris: Librairie Armand Colin. Dugas’s 58-page introductory essay, titled “La Vie, l’Œuvre et le Génie de Lequier” [The Life, Work, and Genius of Lequier], draws heavily on Hémon’s biography (see Lequier 1991).
  • Lequier, Jules. 1936. La Liberté [Freedom]. Textes inédits présentes par Jean Grenier. Paris: Librairie Philosophique J. Vrin.
  • Lequier, Jules. 1948. Jules Lequier. Textes présentes par Jean Wahl. Les Classiques de la Liberté. Genève et Paris: Editions des Trois Collines.
  • Lequier, Jules. 1952. Œuvres complètes [Complete Works]. Édition de Jean Grenier. Neuchâtel, Suisse: Éditions de la Baconnière.
  • Lequier, Jules. 1985. Comment trouver, comment chercher une première vérité? Suivi de “Le Murmure de Lequier (vie imaginaire)” par Michel Valensi [How to find, how to search for a first truth? Followed by “The Murmur of Lequier (imaginary life)”]. Préface de Claude Morali. Paris: Éditions de l’éclat.
  • Lequier, Jules. 1991. Abel et Abel, suivi d’une “Notice Biographique de Jules Lequyer” [Abel and Abel followed by “A Biographical Notice of Jules Lequyer”] par Prosper Hémon. Édition de G. Pyguillem. Combas: Éditions de l’Éclat. Hémon’s biography, though incomplete, is the first and most extensively researched biography of the philosopher. It was written at the end of the nineteenth century.
  • Lequier, Jules. 1993. La Recherche d’une première vérité et autres textes, édition établie et présenté par André Clair. Paris: Presses Universitaires de France.
  • Lequier, Jules. 2010. La Fourche et la quenouille [The Fork and the Distaff], préface et notes par Goulven Le Brech. Bédée: Éditions Folle Avoine.

b. English Translations

  • Brimmer, Harvey H. [with Jacqueline Delobel]. 1974. “Jules Lequier’s ‘The Hornbeam Leaf.’” Philosophy in Context, 3: 94-100.
  • Brimmer, Harvey H. and Jacqueline Delobel. 1975. Translations of The Problem of Knowledge (which includes “The Hornbeam Leaf”) (pp. 291-354) and Probus, or the Principle of Knowledge (pp. 362-467). The translations are included as an appendix to Brimmer’s Jules Lequier and Process Philosophy (Doctoral Dissertation, Emory University, 1975), Dissertation Abstracts International, 36, 2892A.
  • Hartshorne, Charles and William L. Reese, editors. 1953. Philosophers Speak of God. University of Chicago Press: 227-230. Contains brief selections from Probus.
  • Viney, Donald W. 1998. Translation of Works of Jules Lequyer: The Hornbeam Leaf, The Dialogue of the Predestinate and the Reprobate, Eugene and Theophilus. Foreword by Robert Kane. Lewiston, New York: The Edwin Mellen Press.
  • West, Mark. 1999. Jules Lequyer’s “Abel and Abel” Followed by “Incidents in the Life and Death of Jules Lequyer.” Translation by Mark West; Biography by Donald Wayne Viney. Foreword by William L. Reese. Lewiston, New York: The Edwin Mellen Press.

c. Secondary Sources in French and English

  • Brimmer, Harvey H. 1967. “Lequier (Joseph Louis) Jules.” The Encyclopedia of Philosophy. Edited by Paul Edwards. Volume 4: 438-439. New York: Macmillan.
  • Clair, André. 2000. Métaphysique et existence: essai sur la philosophie de Jules Lequier. Bibliothèque d’histoire de la philosophie, Nouvelle série. Paris: J. Vrin.
  • Grenier, Jean. 1936. La Philosophie de Jules Lequier. Paris: Presses Universitaires de France.
  • Grenier, Jean. 1951. “Un grand philosophe inconnu et méconnu: Jules Lequier” [A great philosopher unknown and unrecognized]. Rencontre, no. 11. Lausanne (novembre): 31-39.
  • Le Brech, Goulven. 2007. Jules Lequier. Rennes: La Part Commune.
  • Pyguillem, Gérard. 1985. “Renouvier et sa publication des fragments posthumes de J. Lequier,” [Renouvier and the publication of the posthumous fragments of J. Lequier]. Archives de Philosophie, 48: 653-668.
  • Séailles, Gabriel. 1898. “Un philosophe inconnu, Jules Lequier.” [An unknown philosopher, Jules Lequier]. Revue Philosophique de la France et de L’Etranger. Tome XLV: 120-150.
  • Tilliette, Xavier. 1964. Jules Lequier ou le tourment de la liberté. [Jules Lequier or the torment of freedom]. Paris: Desclée de Brouwer.
  • Viney, Donald W. 1987. “Faith as a Creative Act: Kierkegaard and Lequier on the Relation of Faith and Reason.” Faith & Creativity: Essays in Honor of Eugene H. Peters. Edited by George Nordgulen and George W. Shields. St. Louis, Missouri: CBP Press: 165-177.
  • Viney, Donald W. 1997. “William James on Free Will: The French Connection.” History of Philosophy Quarterly, 14/1 (October): 29-52. Republished in The Reception of Pragmatism in France & the Rise of Roman Catholic Modernism, 1890-1914, edited by David G. Schultenover, S. J. (Washington, D. C.: The Catholic University of America Press, 2009): 93-121.
  • Viney, Donald W. 1997. “Jules Lequyer and the Openness of God.” Faith and Philosophy, 14/2 (April): 1-24.
  • Viney, Donald W. 1999. “The Nightmare of Necessity: Jules Lequyer’s Dialogue of the Predestinate and the Reprobate.” Journal of the Association of the Interdisciplinary Study of the Arts 5/1 (Autumn): 17-30.
  • Vinson, Alain. 1992. “L’Idée d’éternité chez Jules Lequier.” [The Idea of Eternity According to Jules Lequier]. Les Études Philosophiques, numéro 2 (Avril-Juin) (Philosophie française): 179-193.

Author Information

Donald Wayne Viney
Email: dviney@pittstate.edu
Pittsburg State University
U. S. A.

Metaphor and Phenomenology

 The term “contemporary phenomenology” refers to a wide area of 20th and 21st century philosophy in which the study of the structures of consciousness occupies center stage. Since the appearance of Kant’s Critique of Pure Reason and subsequent developments in phenomenology and hermeneutics after Husserl, it has no longer been possible to view consciousness as a simple scientific object of study. It is, in fact, the precondition for any sort of meaningful experience, even the simple apprehension of objects in the world. While the basic features of phenomenological consciousness – intentionality, self-awareness, embodiment, and so forth—have been the focus of analysis, Continental philosophers such as Paul Ricoeur and Jacques Derrida go further in adding a linguistically creative dimension. They argue that metaphor and symbol act as the primary interpreters of reality, generating richer layers of perception, expression, and meaning in speculative thought. The interplay of metaphor and phenomenology introduces serious challenges and ambiguities within long-standing assumptions in the history of Western philosophy, largely with respect to the strict divide between the literal and figurative modes of reality based in the correspondence theory of truth. Since the end of the 20th century, the role of metaphor in the production of cognitive structures has been taken up and extended in new productive directions, including “naturalized phenomenology” and straightforward cognitive science, notably in the work of G. Lakoff and M. Johnson, M. Turner, D. Zahavi, and S. Gallagher.

Table of Contents

  1. Overview
    1. The Conventional View: Aristotle’s Contribution to Substitution Model
    2. The Philosophical Issues
    3. Nietzsche’s Role in Development of Phenomenological Theories of Metaphor
  2. The Phenomenological Theory in Continental Philosophy
    1. Phenomenological Method: Husserl
    2. Heidegger’s Contribution
  3. Existential Phenomenology: Paul Ricoeur, Hermeneutics, and Metaphor
    1. The Mechanics of Conceptual Blending
    2. The Role of Kant’s Schematism in Conceptual Blending
  4. Jacques Derrida: Metaphor as Metaphysics
    1. The Dispute between Ricoeur and Derrida
  5. Anglo-American Philosophy: Interactionist Theories
  6. Metaphor, Phenomenology, and Cognitive Science
    1. The Embodied Mind
    2. The Literary Mind
  7. Conclusion
  8. References and Further Reading

1. Overview

This article highlights the definitive points in the ongoing philosophical conversation about metaphorical language and its centrality in phenomenology. The phenomenological interpretation of metaphor, at times presented as a critique, is a radical alternative to the conventional analysis of metaphor. The conventional view, largely inherited from Aristotle, is also known as the “substitution model.” In the traditional, or standard, approach, the uses and applications of metaphor (along with other related symbolic phenomena/tropes) have been restricted to the realms of rhetoric and poetics. In this view, metaphor is none other than a kind of categorical mistake, a deviance of sense produced in order to create a lively effect.

While somewhat contested, the standard substitution theory, also referred to as the “similarity theory,” generally defines metaphor as a stylistic literary device involving a deviant and dyadic movement which shifts meaning from one word to another. This view, first and most thoroughly articulated by Aristotle, reinforces the epistemic primacy of the literal, where metaphor can only operate as a secondary device, one which is dependent on the prior level of ordinary descriptive language, where the first-order language in itself contains nothing metaphorical. In most cases, the relation between two orders, literal and figurative, has been interpreted as an implicit simile, which expresses a “this is that” structure. For example, Aristotle mentions, in Poetics: 

When the poet says of Achilles that he “Leapt on the foe as a lion,” this is a simile; when he says of him, “the lion leapt” it is a metaphor—here, since both are courageous, [Homer] has transferred to Achilles the name of “lion.” (1406b 20-3)

In purely conventional terms, poetic language can only be said to refer to itself; that is, it can accomplish imaginative description through metaphorical attribution, but the description does not refer to any reality outside of itself. For the purposes of traditional rhetoric and poetics in the Aristotelian mode, metaphor may serve many purposes; it can be clever, creative, or eloquent, but never true in terms of referring to new propositional content. This is due to the restriction of comparison to substitution, such that the cognitive impact of the metaphoric transfer of meaning is produced by assuming similarities between literal and figurative domains of objects and the descriptive predicates attributed to them.

The phenomenological interpretation of metaphor, however, not only challenges the substitution model, it advances the role of metaphor far beyond the limits of traditional rhetoric. In the Continental philosophical tradition, the most extensive developments of metaphor’s place in phenomenology are found in the work of Martin Heidegger, Paul Ricoeur and Jacques Derrida. They all, in slightly different ways, see figurative language as the primary vehicle for the disclosure and creation of new forms of meaning which emerge from an ontological, rather than purely epistemic or objectifying engagement with the world.

a. The Conventional View: Aristotle’s Contribution to Substitution Model

Metaphor consists in giving the thing a name that belongs to something else; the transference being either from species to genus, or from genus to species, or from species to species, on the grounds of analogy. (Poetics 1457b 6-9)

While his philosophical predecessor Plato condemns the use of figurative speech for its role in rhetorike, “the art of persuasion,” Aristotle recognizes its stylistic merits and provides us with the first systematic analysis of metaphor and its place in literature and the mimetic arts. His briefer descriptions of how metaphors are to be used can be found in Rhetoric and Poetics, while his extended analysis of how metaphor operates within the context of language as a whole can be inferred by reading On Interpretation together with Metaphysics. The descriptive use of metaphor can be understood as an extension of its meaning; the term derives from the Greek metaphora, from metaphero, meaning “to transfer or carry over.” Thus, the figurative trope emerges from a movement of substitution, involving the transference of a word to a new sense, one which compares or juxtaposes seemingly unrelated subjects. For example, in Shakespeare’s Sonnet 73:

In me thou seest the glowing of such fire,
That on the ashes of his youth doth lie…

The narrator directly transfers and applies the “dying ember” image in a new “foreign” sense: his own awareness of his waning youth.

This is Aristotle’s contribution to the standard substitution model of metaphor. It is to be understood as a linguistic device, widely applied but remaining within the confines of rhetoric and poetry. Though it does play a central role in social persuasion, metaphor, restricted by the mechanics of similarity and substitution, does not carry with it any speculative or philosophical importance. Metaphors may point out underlying similarities between objects and their descriptive categories, and may instruct through adding liveliness and elegance to speech, but they do not refer, in the strong sense, to a form of propositional knowledge.

The formal structure of substitution operates in the following manner: the first subject or entity under description in one context is characterized as equivalent in some way to the second entity derived from another context; it is either implied or stated that the first entity “is” the second entity in some way. The metaphorical attribution occurs when certain select properties from the second entity are imposed on the first in order to characterize it in some distinctive way. Metaphor relies on pre-existing categories which classify objects and their properties; these categories guide the ascription of predicates to objects, and since metaphor may entail a kind of violation of this order, it cannot itself refer to a “real” class of existing objects or the relations between them. Similarly, in poetry, metaphor serves not as a foundation for knowledge, but as a tool for mimesis or artistic imitation, representing the actions in epic tragedy or mythos in order to move and instruct the emotions of the audience for the purpose of catharsis.

Aristotle’s theory and its significance for philosophy can only be fully understood in terms of the wider context of denotation and reference which supports the classical realist epistemology. Metaphor is found within his taxonomy of speech forms; additionally, simile is subordinate to metaphor and both are figures of speech falling under the rubric of lexis/diction, which itself is composed of individual linguistic units or noun-names and verbs. Lexis operates within the unity of logos, meaning that the uses of various forms of speech must conform to the overall unity of language and reason, held together by categorical structures of being found in Aristotle’s metaphysics.

As a result of Aristotle’s combined thinking in these works, it turns out that the ostensive function of naming individual objects (“this” name standing for “this object” or property) allows for the clear demarcation between the literal and figurative meanings for names. Thus, the noun-name can work as a signifier of meaning in two domains, the literal and the non-literal. However, there remains an unresolved problem: the categorical nature of the boundary between literal and figurative domains will be a point of contention for many contemporary critiques of the theory coming from phenomenological philosophy.

Furthermore, the denotative theory has served in support of the referential function of language, one which assumes a system of methodological connections between language, sense perceptions, mental states, and the external world. The referential relation between language and its objects serves the correspondence theory of truth, in that the truth-bearing capacity of language corresponds to valid perception and cognition of the external world. The theory assumes that these sets of correspondences allow for the consistent and reliable relation of reference between words, images, and objects.

Aristotle accounts for this kind of correspondence in the following way: sense perception’s pathemata give rise to the psychological states in which object representations are formed. These states are actually likenesses (isomorphisms) of the external objects. Thus, names for things refer to the things themselves, to mental representations of those things, and to their class-based meanings.

If, as Aristotle assumes, the meaning of metaphor rests on the level of the noun-name, its distinguishing feature lies in its deviation, a “something which happens” to the noun/name by virtue of a transfer (epiphora) of meaning. Here, Aristotle creates a metaphor (based on physical movement) in order to explain metaphor. The term “phora” refers to a change in location from one place to another, to which is added the prefix “epi”; epiphora then refers to the transfer of the common proper name of the thing to the new, unfamiliar, alien (allotrios) place or object. Furthermore, the transference (or substitution), borrowing as it does the alien name for the thing, does not disrupt the overall unity of meaning or logical order of correspondence within the denotative system; all such movement remains within the classifications of genus and species.

The metaphoric transfer of meaning will become a significant point of debate and speculation in later philosophical discussions. Although Aristotle himself does not explore the latent philosophical questions in his own theory, subsequent philosophers of language have over the years recast these issues, exploring the challenges to meaning, reference, and correspondence that present themselves in the substitution theory. What happens, on these various levels, when we transfer one object or descriptor of a “natural kind” to a foreign object domain? It may be the case that metaphorical transference calls into question the limits of all meaning-bearing categories, and in turn, the manner in which words can be said to “refer” to specific objects and their attributes. By virtue of the epiphoric movement, species and genus attributes of disparate objects fall into relations of kinship, opposition, or deviation among the various ontological categories. These relations allow for the metaphoric novelty which will subsequently fuel the development of alternative theories, those which view metaphor as fundamental to our cognitive or conceptual processes. At this point the analysis of metaphor opens up the philosophical space for further debate and interpretation.

b. The Philosophical Issues

In any theory of metaphor, there are significant philosophical implications for the transfer of meaning from one object-domain or context of associations to another. The metaphor, unlike its sister-trope the analogy, creates a new form of predication, suggesting that one category or class of objects (with certain characteristics) can be projected onto another separate class of entities; this projection may require a blurring of the ontological and epistemological distinctions between the kinds of objects that can be said to exist, either in the mind or in the external world. Returning to the Shakespearean metaphor above, what are the criteria that we use to determine whether a dying ember aptly fits the state of the narrator’s consciousness? What are the perceptual and ontological connections between fire and human existence? The first problem lies in how we are to explain the initial “fit” between any predicate category and its objects. Another problem comes to the forefront when we try to account for how metaphors enable us to think in new ways. If we are to move beyond the standard substitution model, we are compelled to investigate the specific mental operations that enable us to create metaphoric representations; we need to elaborate upon the processes which connect particular external objects (and their properties) given to sensory experience to linguistic signs “referring” to a new kind of object, knowledge context, or domain of experience.

According to the standard model, a metaphor’s ability to signify is restricted by ordinary denotation. The metaphor, understood as a new name, is conceived as a function of individual terms, rather than sentences or wider forms of discourse (narratives, texts). As Continental phenomenology develops in the late 19th and 20th centuries, we are presented with radically alternative theories which obscure strict boundaries between the literal and the figurative, disrupting the connections between perception, language, and thought. Namely, the phenomenological, interactionist, and cognitive treatments of metaphor defend the view that metaphorical language and symbol serve as indirect routes to novel ways of knowing and describing human experience. In their own ways, these theories will call into question the validity and usefulness of correspondence and reference, especially in theoretical disciplines such as philosophy, theology, literature, and science.

Although this article largely focuses on explicating phenomenological theories of metaphor, it should be noted that in all three theories mentioned above, metaphor is displaced from its formerly secondary position in substitution theory to occupying the front and center of our cognitive capabilities. Understood as the product of intentional structures in the mind, metaphor now becomes conceptual, rather than merely ornamental, acting as a conduit through which we take apart and re-assemble the concepts we use to describe the varieties and nuances of experience. All three theories share the assumption that metaphors suggest, posit, or disclose similarities between objects and domains of experience (where there seem to be none), without explicitly recognizing that a comparison is being made between two sometimes very different kinds of things or events. These theories, when applied to our original metaphor (“in me thou seest…”), contend that at times there need not be any explicit similarity between states of awareness or existence and “fire” or “ashes”.

c. Nietzsche’s Role in the Development of Phenomenological Theories of Metaphor

In Nietzsche’s thought we see an early turning away from the substitution theory and its reliance on the correspondence theory of truth, denotation, and reference. His description of metaphor takes us back to its primordial “precognitive” or ontological origins; Nietzsche acts here as a precursor to later developments, yet in itself his analysis offers a compelling account of the power of metaphor. Though his remarks on metaphor are somewhat scattered, they can be found in the early writings of 1872-74, Nachgelassene Fragmente, and “On Truth and Lie in an Extra-Moral Sense” (see W. Kaufmann’s translation in The Portable Nietzsche). Together with the “Rhetorik” lectures, these writings argue for a genealogical explanation of the conceptual, displacing traditional philosophical categories into the metaphorical realm. In doing so, he deconstructs our conventional reliance on the idea that meaningful language must reflect a system of logical correspondences.

With correspondence, we can only assume we are in possession of the truth when our representations or ideas about the world “match up” with external states of affairs. We have already seen how Aristotle’s system of first-order predication supports correspondence, as it is enabled through the denotative ascription of predicates or categorical features to objects. But Nietzsche boldly suggests that we are, from the outset, already in metaphor, and he works from this starting point. The concepts and judgments we use to describe reality do not flatly reflect pre-existing similarities or causal relationships between themselves and our physical intuitions about reality; they are themselves metaphorical constructions, that is, creative forms of differentiation emerging out of a deeper undifferentiated primordiality of being. The truth of the world is more closely reflected in the Dionysian level of pure aesthetic immersion into an “undecipherable” innermost essence of things.

Even in his early work, The Birth of Tragedy, Nietzsche rejects the long-held assumption that truth is an ordering of concepts expressed through rigid linguistic categories, putting forth the alternative view which gives primacy to symbol as the purest, most elemental form of representation. That which is and must be expressed is produced organically out of the flux of nature, yielding a “becoming” rather than a being.

In the Dionysian dithyramb man is incited to the greatest exaltation of all his symbolic faculties; something never before experienced struggles for utterance—the annihilation of the veil of maya, … oneness as the soul of the race and of nature itself. The essence of nature is now to be expressed symbolically; we need a new world of symbols.… (BOT Ch. 2)

Here, following Schopenhauer, he reverses Aristotelian transference of concept-categories from the literal to the figurative, and makes the figurative the original mode for representation of experience. The class terms “species” and “genus”, based in Aristotle and so important in classical and medieval epistemology, only appear to originate and validate themselves in “dialectics and through scientific reflection.” For Nietzsche, the categories hide their real nature, abiding as frozen metaphors which reflect previously experienced levels of natural experience metaphorically represented in our consciousness. They emerge through construction indirectly based in vague images or names for things, willed into being out of the unnamed flowing elements of biological existence. Even Thales the pre-Socratic, we are reminded, in his attempt to give identity to the underlying unity of all things, falls back on a conceptualization of it as water without realizing he is using a metaphor.

Once we construct and begin to apply our concepts, their metaphorical origins are forgotten or concealed from ordinary awareness. This theoretical process is but another attempt to restore “the also-forgotten” original unity of being. The layering of metaphors, the archeological ancestors of concepts, is specifically linked to our immediate experiential capacity to transcend the proper and the individual levels of experience and linguistic signs. We cannot, argues Nietzsche, construct metaphors without breaking out of the confines of singularity, thus we must reject the artificiality of designating separate names for separate things. To assume that an individual name would completely and transparently describe its referent (in perception) is to also assume that language and external experience mirror one another in some perfect way. It is rather the case that language transfers meaning from place to place. The terms metapherein and Übertragung are equivalently applied here; if external experience is in constant flux, it is not possible to reduplicate exact and individual meanings. To re-describe things through metaphor is to “leave out” and “carry-over” meaning, to undergo a kind of dispossession of self, thing, place, and time and an overcoming of both individualisms and dualities. Thus the meaningful expression of the real is seen and experienced most directly in the endlessly creative activity of art and music, rather than philosophy.

2. The Phenomenological Theory in Continental Philosophy

Versions of Nietzsche’s “metaphorization” of thought will reappear in the Continental philosophers described below: those who owe their phenomenological attitudes to Husserl, but disagree with his transcendental idealization of meaning, one which demands that we somehow separate the world of experience from the essential meanings of objects in that world. Taken together, these philosophers call into question the position that truth entails a relationship of correspondence between dual aspects of reality, one internal to our minds and the other external. We consider Heidegger, Ricoeur, and Derrida as the primary examples. For Heidegger, metaphoric language signals a totality or field of significance in which being discloses or reveals itself. Ricoeur’s work, in turn, builds upon aspects of Heidegger’s ontological hermeneutics, explicating how it is the case that metaphors drive speculative reflection. In Ricoeur’s model, the literal level is subverted, and metaphoric language and symbols containing “semantic kernels” create structures of double reference in all figurative forms of discourse. These structures point beyond themselves in symbols and texts, serving as mediums which reveal new worlds of meaning and existential possibilities.

French philosopher Jacques Derrida, on the other hand, reiterates the Nietzschean position: metaphor does not subvert metaphysics, but rather is itself the hidden source of all conceptual structures.

a. Phenomenological Method: Husserl

Edmund Husserl’s phenomenological method laid the groundwork, in the early 20th century, for what would eventually take shape in the phenomenological philosophies of Martin Heidegger, Maurice Merleau-Ponty, and Jean-Paul Sartre. Husserl’s early work provides the foundation for exploring how modes of presentation convey the actual meaningful contents of experience. He means here to address the earlier distinction made by Kant between the phenomenal appearances of the real (to consciousness) and the noumenal reality of the things-in-themselves. Husserl, broadly speaking, seeks to resolve not only what some see as a problematic dualism in Kant, but also some philosophical problems that accompany Hegel’s constructivist phenomenology.

Taken in its entirety, Husserl’s project demonstrates a major shift in 20th century phenomenology, seeking a rigorous method for the description and analysis of consciousness and the contents given to it. He intends his method to be the scientific grounding for philosophy; it is to be a critique of psychologism and a return to a universal knowledge of “the things themselves,” those intelligible objects apprehended by and given to consciousness.

In applying this method we seek, Husserl argues, a scientific foundation for universally objective knowledge, adhering to the “pure description” of phenomena given to consciousness through the perception of objects. If those objects are knowable, it is because they are immediate in conscious experience. It is through the thorough description of these objects as they appear to us in terms of color, shape, and so forth, that we apprehend that which is essential – what we call “essences” or meanings. Here, the act of description is a method for avoiding a metaphysical trap: that of imposing these essences or object meanings onto the contents of mental experience. Noesis, for Husserl, achieves its aim by including within itself (giving an account of) the role that context or horizon plays in delineating possible objects for experience. This will have important implications for later phenomenological theories of metaphor, in that metaphors may be said to intend new figurative contexts in which being appears to us in new ways.

In Ideen (30), Husserl explains how such a horizon or domain of experience presents a set of criteria for us to apply. We choose and identify an object as a single member of a class of objects, and so these regions of subjective experience, also called regions of phenomena, circumscribe certain totalities or generic unities to which concrete items belong. In order to understand the phenomenological approach to meaning-making, it is first necessary to clarify what we mean by “phenomenological description,” as it is described in Logical Investigations. Drawing upon the work of Brentano and Meinong, Husserl develops a set of necessary structural relations between the knower (ego), the objects of experience, and the horizon within which those objects are given. The relation is characterized in an axiomatic manner as intentionality, where the subjective consciousness and its objects are correlates brought together in a psychological act. Subjectivity contributes to and makes possible cognition; specifically, it must be the case that perception and cognition are always about something given in the stream of consciousness, they are only possible because consciousness intends or refers to these immanent objects. As we shall presently see, the intentional nature of consciousness applies to Ricoeur’s hermeneutics of the understanding, bestowing metaphor with a special ability to expand (to the point of nearly undermining) the structure of reference, in a non-literal sense, to an existential state.

Husserl’s stage-like development of phenomenology unveils the structure of intentionality as derived from the careful description of certain mental acts. Communicable linguistic expressions, such as names and sentences, exist only in so far as they exhibit intentional meanings for speakers. Written or spoken expressions only carry references to objects because they have meanings for speakers and knowers. If we examine all of our mental perceptions, we find it impossible to think without intending an object of some sort. Both Continental and Anglo-American thinkers agree that metaphor holds the key to understanding these processes, as it re-organizes our senses of perception, temporality, and relation of subject to object, referring to these as subjects of existential concern and possibility.

b. Heidegger’s Contribution

Heidegger, building upon the phenomenological thematic, asserts that philosophical analysis should keep to careful description of the human encounter with the world, revealing the modes in which being is existentially or relationally given. This signals both a nod to and departure from Husserl, leading to a rethinking of phenomenology which replaces the theoretical apprehension of meaning with an “uncovering” of being as it is lived out in experiential contexts or horizons. Later, Ricoeur will draw on Heidegger’s “existentialized” intentionality as he characterizes the referential power of metaphors to signal those meanings waiting to be “uncovered” by Dasein’s (human as being-there) experience of itself – in relation to others, and to alternate worlds of possibility.

As his student, Heidegger owes to Husserl the phenomenological intent to capture “the things themselves” (die Sachen selbst); however, the Heideggerian project outlined in Being and Time rejects the attempt to establish phenomenology as a science of the structures of consciousness and reforms it in ontologically disclosive or manifestational terms. Heidegger’s strong attraction to the hermeneutic tradition in part originates in his dialogue with Wilhelm Dilthey, the 19th century thinker who stressed the importance of historical consciousness in guiding the work of the social sciences and hermeneutics, directed toward the understanding of primordial experience. Dilthey’s influence on Heidegger and Ricoeur (as well as Gadamer) is evident, in that all recognize the historical life of humans as apprehended in the study of the text (a form of spirit), particularly those containing metaphors and narratives conveying a lived, concrete experience of religious life.

Heidegger rejects the notion that the structures of consciousness are internally maintained as transcendentally subjective and also directed towards their transcendental object. Phenomenology must now be tied to the problems of human existence, and must then direct itself immediately towards the lived world and allow this “beholding” of the world to guide the work of “its own uncovering.”

Heidegger argues for a return to the original Greek definitions of the terms phainomenon (derived from phainesthai, or “that which shows itself”) and logos. Heidegger adopts these terms for his own purposes, utilizing them to emphasize ontological disclosure or presence: those beings showing themselves or letting themselves be “seen-as.” The pursuit of aletheia (“truth as recovering of the forgotten aspects of being”) is now fulfilled through adherence to a method of self-interpretation achieved from the standpoint of Dasein’s (humanity’s) subjectivity, which has come to replace the transcendental ego of Kant and Husserl.

The turn to language, in this case, must be more than simple communication between persons; it is a primordial feature of subjectivity. Language is to be the interpretive medium of the understanding through which all forms of being present themselves to subjective apprehension. In this way, Heidegger replaces the transcendental version of phenomenology with the disclosive, where the structure of interpretation provides further insight into his ontological purposes of the understanding.

3. Existential Phenomenology: Paul Ricoeur, Hermeneutics, and Metaphor

The linguistic turn in phenomenology has been most directly applied to metaphor in the works of Paul Ricoeur, who revisits Husserlian and Heideggerian themes in his extensive treatment of metaphor. He extends his analysis of metaphor into a fully developed discursive theory of symbol, focusing on those found in religious texts and sacred narratives. His own views develop in response to what he sees as overly limited structuralist theories of symbol, which, in essence, do not provide a theory of linguistic reference useful for his own hermeneutic project. For Ricoeur, a proper theory of metaphor understands it to be “a re-appropriation of our effort to exist,” echoing Nietzsche’s call to go back to the primordiality of being. Metaphor must then include the notion that such language is expressive and constitutive of the being of those who embark on philosophical reflection.

Much of Ricoeur’s thought can be characterized by his well-known statement “the symbol gives rise to thought.” Ricoeur shares Heidegger’s and Husserl’s assumptions: we reflectively apprehend or grasp the structures of human experience as they are presented to temporalized subjective consciousness. While the “pure” phenomenology of Husserl seeks a transparent description of experience as it is lived out in phases or moments, Ricoeur, also following Nietzsche, centers the creation of meaning in the existential context. The noetic act originates in the encounter with a living text, constituting “a horizon of possibilities” for the meaning of existence, thus abandoning the search for essences internal to the objects we experience in the world.

His foundational work in The Symbolism of Evil and The Rule of Metaphor places the route to human understanding concretely, via symbolic expressions which allow for the phenomenological constitution, reflection, and re-appropriation of experience. These processes are enabled by the structure of “seeing-as,” adding to Heidegger’s insight with the metaphoric acting as a “refiguring” of that which is given to consciousness. At various points he enters into conversation with Max Black and Nelson Goodman, among others, who also recognize the cognitive contributions to science and art found in models and metaphors. In Ricoeur’s case, sacred metaphors display the same second-order functions shared by those in the arts and sciences, but with a distinctively ontological emphasis: “the interpretation of symbols is worthy of being called a hermeneutics only insofar as it is a part of self-understanding and of the understanding of being” (COI 30).

In The Rule of Metaphor, Ricoeur, departing from Aristotle, locates the signifying power of metaphor primarily at the level of the sentence, not individual terms. Metaphor is to be understood as a discursive linguistic act which achieves its purpose through extended predication rather than simple substitution of names. Ricoeur, like so many language philosophers, argues that Aristotelian substitution is incomplete; it does not go far enough in accounting for the semantic, syntactic, logical, and ontological issues that accompany the creation of a metaphor. The standard substitution model cannot do justice to the potential of metaphor to create meaning by working in tandem with propositional thought-structures (sentences). To these ends, Ricoeur’s study in The Rule of Metaphor replaces substitution and strict denotative theories with a theory of language that works through a structure of double reference.

Taking his lead from Aristotle while also diverging from him, Ricoeur reads the metaphorical transfer of a name as a kind of “category mistake” which produces an imaginative construction about the new way objects may be related to one another. He expands this dynamic of “meaning transfer” onto the level of the sentence, then the text, enabling the production of a second-order discursive level of thinking whereby all forms of symbolic language become phenomenological disclosures of being.

The discussion begins with the linguistic movement of epiphora (transfer of names-predicates) taken from an example in the Poetics. A central dynamic exists in transposing one term, with one set of meaning-associations, onto another. Citing Aristotle’s own example of “sowing around a god-created flame,”

If A = light of the sun, B = action of the sun, C = grain, and D = sowing, then

B is to A, as D is to C

We see that the action of the sun is to light as sowing is to grain; however, B is a vague action term (the sun’s action) which is both missing and implied; Ricoeur calls this a “nameless act” which establishes a similar relation to the object, sunlight, as sowing is to the grain. In this act the phenomenological space for the creation of new meaning is opened up, precisely because we cannot find a conventional word to take the place of a metaphorical word. The nameless act implies that the transfer of an alien name entails more than a simple substitution of concepts, and is therefore said to be logically disruptive.
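The four-term proportion at work here can be set out schematically; the following is a reconstruction for clarity, with B standing in for the act that has no proper name of its own:

```latex
% A = light of the sun, B = the sun's (nameless) action,
% C = grain, D = sowing
B : A \;::\; D : C
% Because B lacks a conventional name, the poet borrows D for it:
% the sun ``sows'' its flame, transferring D into the place of B.
```

The schema makes the asymmetry visible: three of the four terms are conventionally named, and the metaphor is generated precisely by the empty fourth position.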

a. The Mechanics of Conceptual Blending

The “nameless act” entails a kind of “cognitive leap”: since there is no conventional term for B, the act does not involve substituting a decorative term in its place. Rather, a new meaning association has been created through the semantic gap between the objects. The absence of the original literal term, the “semantic void”, cannot be filled without the creation of a metaphor which signals the larger discursive context of the sentence and eventually, the text. If, as above, the transfer of predicates (the sowing of grain as casting of flames) challenges the “rules” of meaning dictated by ostensive theory, we are forced to make a new connection where there was none, between the conventional and metaphorical names for the object. For Ricoeur, the figurative (sowing around a flame) acts as a hermeneutic medium in that it negates and displaces the original term, signifying a “new kind of object” which is in fact a new form (logos) of being. The metaphorical statement allows us to say that an object is and is not what we usually call it. The sense-based aspect is then “divorced” from predication and subsequently, logos is emptied of its objective meaning; the new object may be meaningful but not clear under the conditions of strict denotation or natural knowledge.

We take note that the “new object” (theoretically speaking) has more than figurative existence; the newly formed subject-predicate relation places the copula at the center of the name-object (ROM 18). Ricoeur’s objective is to create a dialectically driven process which produces a new ‘object-domain’ or category of being. Following the movement of the Hegelian Aufhebung (through the aforementioned negation and displacement), the new name has opened up a new field of meaning to be re-appropriated into our reflective consciousness. This is how Ricoeur deconstructs first-order reference in order to develop an ontology of sacred language based on second-order reference.

We are led to the view that myths are modes of discourse whose meanings are phenomenological spaces of openness, creating a nearly infinite range of interpretations. Thus we see how metaphor enables being, as Aristotle notes, to “be said in many ways.”

Ricoeur argues that second-order discursivity “violates” the pre-existing first order of genus and species, in turn causing a kind of upheaval among further relations and rules set by the categories: namely subordination, coordination, proportionality or equality among object properties. Something of a unity of being remains, yet for Ricoeur this non-generic unity or enchainement corresponds to a single generic context referring to “Being,” restricting the senses or applications of transferred predicates in the metaphoric context.

b. The Role of Kant’s Schematism in Conceptual Blending

The notion of a “non-generic unity” raises, perhaps, more philosophical problems than it answers. How are we to explain the mechanics which blend descriptors from one object domain, with its sets of perceptions, into a domain of foreign objects? Ricoeur addresses the epistemic issues surrounding the transfer of names from one category to another in spatiotemporal experience by importing Kant’s theory of object construction, found in the Critique of Pure Reason. In the “Transcendental Schematism”, Kant establishes the objective validity of the conceptual categories we use to synthesize the contents of experience. In this section, Kant elevates the Aristotelian categories from grammatical principles to formal structures intrinsic to reason. Here, he identifies an essential problem for knowledge: how are we to conceive a relationship between these pure concept-categories of the understanding and the sensible objects given to us in space and time? With the introduction of the schematism, Kant seeks a resolution to the various issues inherent to the construction of mental representations (a position shared by contemporary cognitive scientists; see below). For Ricoeur, this serves to answer the problem of how metaphoric representations of reality can actually “refer” to reality (even if only at the existential level of experience).

Kant states “the Schematism” is a “sensible condition under which alone pure concepts of the understanding can be employed” (CPR/A 136). Though the doctrine is sometimes said to be notoriously confusing due to its circular nature, the schemata are meant as a distinctive set of mediating representations, rules, or operators in the mind which themselves display the universal and necessary characteristics of sensible objects; these characteristics are in turn synthesized and unified by the activity of the transcendental imagination.

In plainer terms, the schematic function is used by the imagination to guide it in the construction of images. It does not seem to be any kind of picture of an object, but rather the “form” or “listing” of how we produce the picture. For Ricoeur, the schematism lends structural support for assigning an actual truth-value or cognitive contribution to the semantic innovation produced by metaphor. The construction of new meaning via new forms of predication entails a re-organization and re-interpretation of pre-existing forms, and the operations of the productive imagination enable the entire process.

In the work Figuring the Sacred, for example, Ricoeur, answering to his contemporary Mircea Eliade (The Sacred and the Profane), moves metaphor beyond the natural “boundedness” of myths and symbols. While these manifest meaning, they are still constrained in that they must mirror the natural cosmic order of things. Metaphor, on the other hand, occupies the center of a “hermeneutic of proclamation;” it has the power to proclaim because it is a “free invention of discourse.” Ricoeur specifically explicates biblical parables, proverbs, and eschatological statements as extended metaphorical processes. Thus, “The Kingdom of God will not come with signs that you can observe. Do not say, ‘It is here; it is there.’ Behold the kingdom of God is among you” (Luke 17:20-21). This saying creates meaning by breaking down our ordinary or familiar temporal frameworks applied to interpretation of signs (of the kingdom). The quest for signs is, according to Ricoeur, “overthrown” for the sake of “a completely new existential signification” (FS 59).

This discussion follows from the earlier work in The Rule of Metaphor, where the mechanics of representation behind this linguistic act of “re-description” are further developed. The act points us towards a novel ontological domain of human possibility, enabled through new cognitive content. The linguistic act of creating a metaphor in essence becomes a hermeneutic act directed towards a gap which must be bridged, that between abstract, reflective understanding (Verstehen) and the finite living out of life. In this way Ricoeur’s theory, often contrasted with that of Derrida, takes metaphor beyond the mechanics of substitution.

4. Jacques Derrida: Metaphor as Metaphysics

In general, Derrida's deconstructive philosophy can be read as a radically alternative way of reading philosophical texts and arguments, viewing them in a novel way through the lens of a rhetorical methodology. This will amount to the taking apart of established ways in which philosophers define perception, concept formation, meaning, and reference.

Derrida, from the outset, will call into question the assumption that the formation of concepts (logos) somehow escapes the primordiality of language and the fundamentally metaphorical-mythical nature of philosophical discourse. In a move which goes much further than Ricoeur, Derrida argues for what Giuseppe Stellardi so aptly calls the “reverse metaphorization of concepts.” The reversal is such that there can be no final separation between the linguistic-metaphorical and the philosophical realms. These domains are co-constitutive of one another, in the sense that neither can be fully theorized or made to fully or transparently explain the meaning of the other. The result is that language acquires a certain obscurity, ascendancy, and autonomy. It will permanently elude our attempts to fix its meaning-making activity in foundational terms which necessitate a transcendent or externalized (to language) unified being.

Derrida's “White Mythology” offers a penetrating critique of the common paradigm involving the nature of concepts, posing the following questions: “Is there metaphor in the text of philosophy, and if so, how?” Here, the history of philosophy is characterized as an economy, a kind of "usury" where meaning and valuation are understood as metaphorical processes involving “gain and loss.” The process is represented through Derrida’s well-known image of the coin:

I was thinking how the Metaphysicians, when they make a language for themselves, are like … knife-grinders, who instead of knives and scissors, should put medals and coins to the grindstone to efface … the value… When they have worked away till nothing is visible in these crown pieces, neither King Edward, the Emperor William, nor the Republic, they say: 'These pieces have nothing either English, German, or French about them; we have freed them from all limits of time and space; they are not worth five shillings any more; they are of inestimable value, and their exchange value is extended indefinitely.’ (WM 210).

The “usury” of the sign (the coin) signifies the passage from the physical to the metaphysical. Abstractions now become “worn out” metaphors; they seem like defaced coins, their original, finite values now replaced by a vague or rough idea of the meaning-images that may have been present in the originals.

Such is the movement which simultaneously creates and masks the construction of concepts. Concepts, whose real origins have been forgotten, now only yield an empty sort of philosophical promise – that of “the absolute”, the universalized, unlimited “surplus value” achieved by the eradication of the sensory or momentarily given. Derrida reads this process along a negative Hegelian line: the metaphysicians are most attracted to “concepts in the negative, ab-solute, in-finite, non-Being” (WM 121). That is, their love of the most abstract concept, made that way “by long and universal use”, reveals a preference for the construction of a metaphysics of Being. This is made possible via the movement of the Hegelian Aufhebung. The German term refers to a dynamic of sublation where the dialectical, progressive movement of consciousness overcomes and subsumes the particular, concrete singularities of experience through successive moments of cognition. Derrida levels a strong criticism against Hegel’s attempts to overcome difference, arguing that consciousness as understood by Hegel takes on the quality of building an oppressive sort of narrative, subsuming the particular and the momentary under an artificial theoretical gaze. Derrida prefers giving theoretical privilege to the negative; that is, to the systematic negation of all finite determinations of meaning derived from particular aspects of particular beings.

Echoing Heidegger, Derrida conceives of metaphysical constructs as indicative of the Western "logocentric epoch" in philosophy. They depend for their existence on the machinery of binary logic. They remain static due to our adherence to the meaning of ousia (essence), the definition of being based on self-identical substance, which can only be predicated or expressed in either/or terms. Reference to being, in this case, is constrained within the field of the proper and univocal. Heidegger, Derrida, and, to some degree, Ricoeur all seek to free reference from these constraints. Unlike Heidegger, however, Derrida does not work from the assumption that being indicates some unified primordial reality.

For Derrida, there lies hidden within the merely apparent logical unity (with its attendant binary oppositions) or logocentricity of consciousness a white mythology, masking the primitive plurivocity of being which eludes all attempts to name it. Here we find traces of lost meanings, reminiscent of the lost inscriptions on coins. These are “philosophemes,” words, tropes, or modes of figuration which do not express ideas or abstract representations of things (grounded in categories), but rather invoke a radically plurivocal notion of meaning. Having thus dismantled the logic of either/or with différance, Derrida gives priority to ambiguity, to “both/and” and “neither/nor” modes of thought and expression. Meaning must then be constituted of and by difference, rather than identity, for difference subverts all preconceived theoretical or ontological structures. It is articulated in the context of all linguistic relations and involves an ongoing displacement of any final, idealized, and unified form of meaning; such displacement reveals, through hints and traces, the reality and experience of a disruptive alterity in meaning and being. Alterity is “always already there” by virtue of the presence of the Other.

With the introduction of “the white mythology,” Derrida’s alignment with Nietzsche creates a strong opposition to traditional Western theoria. Forms of abstract ideation and theoretical systems representing the oppressive consciousness of the “white man,” built in the name of reason/logos, are themselves collections of analogies, existing as colorless dead metaphors whose primitive origins lie in the figurative realms of myth, symbol, and fable.

Derrida's project, resulting as it does in the deconstruction of metaphysics, runs counter to Ricoeur's tensive theory. In contrast to Heidegger’s restrained criticism, Derrida’s deconstruction appears to Ricoeur “unbounded.” That is, Ricoeur still assumes a distinction between the speculative and the poetic, where the poetic “drives the speculative” to explicate a surplus of meaning. This surplus, or plurivocity, is problematic from Derrida's standpoint. Derrida argues that the theory remains logocentric in that it remains true to the binary mode of identity and difference which underlies metaphysical distinctions such as “being and non-being.” For Ricoeur, metaphors create a new space for meaning based on the tension between that which “is” (can be properly predicated of an object) and that which “is not” (cannot be predicated of an object). Derrida begs to differ: in the final analysis, there can be no such separation, no systematic philosophical theory or set of conceptual structures through which we subsume and “explain” the cognitive or existential value of metaphor.

Derrida's reverse metaphorization of concepts does not support a plurivocal characterization of meaning and being, nor does it posit a wider referential field; for Derrida, metaphors and concepts remain in a complex, always ambiguous relation to one another. Thus he seems to do away with “reference,” or the distinction between signifier and signified, moving even beyond polysemy (the many potential meanings that words carry). The point is to preserve the flux of sense and the ongoing dissemination of meaning and otherness.

a. The Dispute between Ricoeur and Derrida

The dispute between Ricoeur and Derrida regarding the referential power of metaphor lies in where they position themselves with regard to Aristotle. Ricoeur's position, in giving priority to the noun-phrase instead of the singular name, challenges Aristotle while still appealing to the original taxonomy (categories) of being based on an architectonic system of predication. For Ricoeur, metaphoric signification mimics the fundamentally equivocal nature of being—we cannot escape the ontological implications of Aristotle’s statement: being can be “said in many ways.” Nevertheless, Ricoeur maintains the distinction between mythos and logos, for we need the tools provided by speculative discourse to explain the polysemic value of metaphors.

Derrida’s deconstruction reaches back to dismantle Aristotle's theory, rooted as it is in the ontology of the proper name/noun (onoma) which signifies a thing as self-identical being (homoousion). This, states Derrida, “reassembles and reflects the culture of the West; the white man takes his own mythology, Indo-European mythology, his own logos, that is, the mythos of his idiom, for the universal form of that he must still wish to call Reason” (WM 213).

The original theory makes metaphor yet another link in the logocentric chain—a form of metaphysical oppression. If the value of metaphor is restricted to the transference of names, then metaphor entails a loss or negation of the literal, and remains confined within a notion of discourse which upholds the traditional formulations of representation and reference in terms of the mimetic and the “proper.” These formulations are, in turn, based on a theory of perception (and an attendant metaphysics) that gives priority to resemblance, identity, or what we can call “the law of the same.”

5. Anglo-American Philosophy: Interactionist Theories

Contemporary phenomenological theories of metaphor directly challenge the straightforward theory of reference, replacing the ordinary propositional truth based on denotation with a theory of language which designates and discloses its referents. These interactionist theories carry certain Neo-Kantian features, particularly in the work of the analytic philosophers Nelson Goodman and Max Black. They posit the view that metaphors can reorganize the connections we make between our perceptions of the world. Their theories reflect certain phenomenological assumptions about the ways in which figurative language expands the referential field, allowing for the creation of novel meanings and creating new possibilities for constructing models of reality; in moving between the realms of art and science, metaphors have an interdisciplinary utility. Both Goodman and Black continue to challenge the traditional theory of linguistic reference, offering instead the argument that reference is enabled by the manipulation of predicates in figurative modes of thinking through language.


6. Metaphor, Phenomenology, and Cognitive Science

Recent studies underscore the connections between metaphors, mapping, and the schematizing aspects of cognitive organization in mental life. Husserl’s approach to cognition took an anti-naturalist stance: he opposed defining consciousness as an objective entity, a definition he regarded as unsuited to studying the workings of subjective consciousness. His phenomenological stance instead gave priority to subjectivity, since subjectivity constitutes the necessary set of pre-conditions for knowing anything at all as an object or a meaning. Recently, this trend has been renewed, and phenomenology has made productive inroads into the examination of connectionist and embodied approaches to perception, cognition, and other sorts of dynamic and adaptive (biological) systems.

Zahavi and Thompson, for example, see strong links between Husserlian phenomenology and philosophy of mind with respect to the phenomena of consciousness, where the constitutive nature of subjective consciousness is clarified specifically in terms of the forms and relations of different kinds of intentional mental states. These involve the unity of temporal experience, the structural relations between intentional mental acts and their objects, and the inherently embodied nature of cognition. Not all who study the embodied mind operate in agreement with traditional phenomenological assumptions and methods. Nevertheless, some “naturalized” versions in the field of consciousness studies are now gaining ground, offering viable ways past the kind of problematic Cartesian dualistic metaphysics that Husserl’s phenomenology can suggest.

a. The Embodied Mind

In recent years, the expanding field of cognitive science has explored the role of metaphor in the formation of consciousness (cognition and perception). In a general sense, it appears that contemporary cognitivist, constructivist, and systems (as in self-organizing) approaches to the study of mind incorporate metaphor as a tool for developing an anti-metaphysical, anti-positivist theory of mind, in an attempt to reject any residual Cartesian and Kantian psychologies. The cognitive theories, however, remain partially in debt to Kantian schematism and its role in cognition.

There is furthermore in these theories an overturning of any remaining structuralist suppositions (that language and meaning might be based on autonomous configurations of syntactic elements). Many cognitive scientists, in disagreement with Chomsky’s generative grammar, study meaning as a form of cognition that is activated in the context of use. Lakoff and Johnson, in Philosophy in the Flesh, find a great deal of empirical evidence for the ways in which metaphors shape our ordinary experience, exploring the largely unconscious perceptual and linguistic processes that allow us to understand one idea or domain of experience, both conceptual and physical, in terms of a “foreign” domain. The research follows the work of Srini Narayanan and Eleanor Rosch, cognitive scientists who also examine schemas and metaphors as key to embodied theories of cognition. Such theories generally trace the connective interplay between our neuronal makeup, our physical interactions with the environment, and our private and social human purposes.

In a limited sense, the stress on the embodied nature of cognition aligns itself with the phenomenological position. Perceptual systems, built in physical response to determinate spatio-temporal and linguistic contexts, become phenomenological “spaces” shaped through language use. Yet these researchers largely take issue with Continental phenomenology and traditional philosophy in a dramatic and far-reaching way, objecting to the claim that the phenomenological method of introspection allows the observing subject to survey and describe all available fields of consciousness. On their view we do not have full access to hidden cognitive processes: much of the metaphorical mapping which underlies cognition takes place at an unconscious level, sometimes referred to as “the cognitive unconscious” (PIF 12-15).

Other philosophers of mind, including Stefano Arduini and Antonio Damasio, work along similar lines in cognitive linguistics, cognitive science, neuroscience, and artificial intelligence. Their work investigates the ways in which metaphors ground various first- and second-order cognitive and emotional operations and functions. Their conclusions share insights with the Continental studies conceiving of metaphor as a “refiguring” of experience. There is then some potential for overlap with this cognitive-conceptual version of metaphor, where metaphors and schemata embody emergent transformative categories enabling the creation of new fields of cognition and meaning.

Arduini, in his work, has explored what he calls the “anthropological ability” to build up representations of the world. Here rhetorical figures are realized on the basis of conceptual domains which create the borders of experience. Through them we have access to a kind of reality that would otherwise remain indeterminate, for human beings have the ability to conceptualize the world in imaginative terms through myth, symbol, the unconscious, or any expressive sign. For Arduini, figurative activity does not depict the given world; rather, it enables the construction of world images that are then employed in engaging reality. To be figuratively competent is to use the imagination as a tool which puts patterns together in inventive mental processes. Arduini here seems to recall Nietzsche: anthropologically speaking, humans are always engaging in some form of figuration or form of language, which allows for “cognitive competence” in that it chooses among particular forms which serve to define the surrounding contexts or environments. Again, metaphor is foundational to the apprehension of reality; it is part of the pre-reflective or primordial apparatus of experience, perception, and first- through second-order thought, and it informs disciplines as distant as evolutionary anthropology (see Tooby and Cosmides).

b. The Literary Mind

The work of Gilles Fauconnier and Mark Turner extends that of Lakoff and Johnson outlined above. For Fauconnier, the task of language is to construct, and for the linguist and cognitive scientist it is “a window into the mind.” Working independently and together, Fauconnier and Turner have developed a theory of conceptual blending in which metaphorical forms take center stage. Basically, the theory of conceptual blending follows from Lakoff and Johnson’s work on the “mapping” or projective qualities of our cognitive faculties. For example, if we return to the Shakespearean line “in me thou seest the glowing of such fire”, the source is fire, whose sets of associations are projected onto the target – in this case the waning aspect of the narrator. Their research shows that large numbers of such cross-domain mappings are expressed as conceptual structures which have propositional content: for example, “life is fire, loss is extinction of fire.” There exist several categories of mappings across different conceptual domains, including spatio-temporal orientation, movement, and containment. For example: “time flies” or “this relationship is smothering.”

Turner’s work in The Literary Mind takes a slightly different route, portraying these cognitive mechanisms as forms of “storytelling.” This may, superficially, seem counterintuitive to the ordinary observer, but Turner gives ample evidence for the mind’s ability to do much of its everyday work using various forms of narrative projection (LM 6-9). It is not too far a reach from this version of narrative connection back to the hermeneutic and cognitive-conceptual uses of metaphor outlined earlier. If we understand parables to be essentially forms of extended metaphor, we can clearly see the various ways in which they contribute to the making of intelligible experience.

The study of these mental models sheds light on the phenomenological and hermeneutic aspects of reality-construction. If these heuristic models are necessary to cognitive functioning, it is because they allow us to represent higher-order aspects of reality which involve expressions of human agency, intentionality, and motivation. Though we may be largely unaware of these patterns, they are grounded in our ability to think in metaphor and work continuously to structure intentional experience – which cannot always be adequately represented by straightforward first-order physical description. Fauconnier states:

We see their status as inventions by contrasting them with alternative representations of the world. When we watch someone sitting down in a chair, we see what physics cannot recognize: an animate agent performing an intentional act. (MTL 19-20)

Turner, along with Fauconnier and Lakoff, connects parabolic thought with the image-schematic or mapping between different domains of encounter with our environments. Fauconnier’s work, correlating here with Turner’s, moves between cognitive-scientific and phenomenological considerations; both depict mapping as a constrained form of projection, a complex mental manipulation which moves across mental structures which correspond to various phenomenological spaces of thought, action, and communication.

Metaphorical mapping allows the mind to cross and conflate several domains of experience. The cross-referencing, reminiscent of Black’s interactionist dynamics, amounts to a form of induction resulting from projected relations between a source structure, a pattern we already understand, onto a target structure, that which we seek to understand.

Mapping as a form of metaphoric construction leads to other forms of blending, conceptual integration, and novel category formation. We can, along with Fauconnier and the rest, describe this emergent evolution of linguistic meaning in dialectical terms, arguing that it is possible to mesh together two images of virus (biological and computational) into a third, integrated idea that expands the meaning of the first two (MTL 22). Philosophically speaking, we seem to have come full circle back to the Hegelian theme which runs through the phenomenological analysis of metaphor as a re-mapping of mind and reality.

7. Conclusion

The Continental theories of metaphor have extrapolated and developed variations on the theme expressed in Nietzsche’s famous pronouncement that truth is “a mobile army of metaphors.” The notion that metaphorical language is somehow ontologically and epistemologically prior to ordinary propositional language has since been voiced by Heidegger, Ricoeur, and Derrida. For these thinkers metaphor serves as a foundational heuristic structure, one which is primarily designed to subvert ordinary reference and in some way dismantle the truth-bearing claims of first-order propositional language. Martin Heidegger’s existential phenomenology does away with the assumption that true or meaningful intentional statements reflect epistemic judgments about the world; that is, they do not derive referential efficacy through the assumed correspondence between an internal idea and an external object. While there may be a kind of agreement between our notions of things and the world in which we find those things, it is still a derivative agreement emerging from a deeper ontologically determined set of relations between things-in-the-world, given or presented to us as inherently linked together in particular historical, linguistic, or cultural contexts.

The role of metaphor in perception and cognition also dominates the work of contemporary cognitive scientists, linguists, and those working in the related fields of evolutionary anthropology and computational theory. While the latter may not be directly associated with Continental phenomenology, aspects of their work support an “anti-metaphysical” position and draw upon common phenomenological themes which stress the embodied, linguistic, contextual, and symbolic nature of knowledge. Thinkers and researchers in this camp argue that metaphoric schemas are integral to human reasoning and action, in that they allow us to develop our cognitive and heuristic capacities beyond simple and direct first order experience.

8. References and Further Reading

  • Aristotle. Categories and De Interpretatione. J. L. Ackrill, trans. Oxford, Clarendon, 1963. (CDI)
  • Aristotle. Peri Hermeneias. Hans Arens, trans. Philadelphia, Benjamins, 1984. (PH)
  • Arduini, Stefano (ed.). Metaphors. Edizioni di Storia e Letteratura.
  • Barber, A. and Stainton, R. The Concise Encyclopedia of Language and Linguistics. Oxford, Elsevier Ltd., 2010.
  • Black, Max. Models and Metaphors. Ithaca, Cornell, 1962. (MAM)
  • Brentano, Franz C. On the Several Senses of Being in Aristotle. Berkeley, UC Press, 1975.
  • Cazeaux, Clive. Metaphor and Continental Philosophy: From Kant to Derrida. London, Routledge, 2007.
  • Cooper, David E. Metaphor. London, Oxford, 1986.
  • Derrida, Jacques. “White Mythology, Metaphor in the Text of Philosophy” in Margins of Philosophy, trans. A. Bass, Chicago, University of Chicago Press, 1982. (WM)
  • Fauconnier, Gilles. Mappings in Thought and Language. Cambridge, Cambridge University, 1997. (MTL)
  • Gallagher, Shaun. “Phenomenology and Non-reductionist Cognitive Science” in Handbook of Phenomenology and Cognitive Science. ed. by Shaun Gallagher and Daniel Schmicking. Springer, New York, 2010.
  • Goodman, Nelson. Languages of Art. New York, Bobs-Merrill, 1968.
  • Hinman, Lawrence. “Nietzsche, Metaphor, and Truth” in Philosophy and Phenomenological Research, Vol. 43, #2, 1984.
  • Harnad, Stevan. “Category Induction and Representation” in Categorical Perception: The Groundwork of Cognition. New York, Cambridge, 1987.
  • Heidegger, Martin. Being and Time. John MacQuarrie and E. Robinson, trans. New York, Harper and Row, 1962. (BT)
  • Heidegger, Martin. The Basic Problems of Phenomenology, trans. A. Hofstadter, Bloomington, Indiana University Press, 1982.
  • Huemer, Wolfgang. The Constitution of Consciousness: A Study in Analytic Phenomenology. Routledge, 2005.
  • Johnson, Mark. “Metaphor and Cognition” in Handbook of Phenomenology and Cognitive Science, ed. by Shaun Gallagher and Daniel Schmicking. Springer, New York, 2010.
  • Joy, Morny. “Derrida and Ricoeur: A Case of Mistaken Identity” in The Journal of Religion. Vol. 68, #04, University of Chicago Press, 1988.
  • Kant, Immanuel. The Critique of Pure Reason. Trans. N. K. Smith, New York, 1958. (CPR)
  • Kofman, Sarah, Nietzsche and Metaphor. Trans. D. Large, Stanford, 1993.
  • Lakoff, George and Johnson, Mark. Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought. New York, Perseus-Basic, 1999.
  • Malabou, Catherine. The Future of Hegel: Plasticity, Temporality, and the Dialectic. Trans. Lisabeth During, New York, Routledge, 2005.
  • Nietzsche, F. The Birth of Tragedy and the Case of Wagner. Trans. W. Kaufmann. New York, Vintage, 1967. (BOT)
  • Lawlor, Leonard. Imagination and Chance: The Difference Between the Thought of Ricoeur and Derrida. Albany, SUNY Press, 1992
  • Mohanty, J.N. and McKenna, W.R. Husserl’s Phenomenology: A Textbook. Washington, DC, University Press, 1989.
  • Rajan, Tilottama. Deconstruction and the Remainders of Phenomenology: Sartre, Derrida, Foucault, Baudrillard. Stanford, Stanford University Press, 2002.
  • Ricoeur, Paul. Figuring the Sacred, trans. D. Pellauer, Minneapolis, Fortress, 1995. (FS)
  • Ricoeur, Paul. The Rule of Metaphor. Toronto, University of Toronto, 1993. (ROM)
  • Schrift, Alan D. and Lawlor, Leonard (eds.). The History of Continental Philosophy, Vol. 4, Phenomenology: Responses and Developments. Chicago, University of Chicago Press, 2010.
  • Stellardi, Giuseppe. Heidegger and Derrida on Philosophy and Metaphor: Imperfect Thought. New York, Humanity-Prometheus, 2000.
  • Tooby, J. and L. Cosmides (ed. with J. Barkow). The Adapted Mind: Evolutionary Psychology and The Generation of Culture. Oxford, 1992.
  • Turner, Mark. The Literary Mind: The Origins of Thought and Language. Oxford, 1996. (LM)
  • Woodruff-Smith, David and McIntyre, Ronald. Husserl and Intentionality: A Study of Mind, Meaning, and Language. Boston, 1982.
  • Zahavi, Dan. “Naturalized Phenomenology” in Handbook of Phenomenology and Cognitive Science.


Author Information

S. Theodorou
Email: stheodorou@immaculata.edu
Immaculata University
U. S. A.

Scientific Change

How do scientific theories, concepts and methods change over time? Answers to this question have both historical and philosophical parts. There can be descriptive accounts of the recorded differences over time of particular theories, concepts, and methods—what might be called the shape of scientific change. Many stories of scientific change attempt to give more than statements of what, where and when change took place. Why this change then, and toward what end? By what processes did it take place? What is the nature of scientific change?

This article gives a brief overview of the most influential views on the shape and nature of change in science. Important thematic questions are: How gradual or rapid is scientific change? Is science really revolutionary? How radical is the change? Are periods in science incommensurable, or is there continuity between the earliest and latest scientific ideas? Is science getting closer to some final form, or merely moving away from a contingent, non-determining past? What role do the factors of community, society, gender, or technology play in facilitating or mitigating scientific change? The most important modern development in the topic is that none of these questions has the same answer for all sciences. When we speak of scientific change, it should be recognized that anything substantial can be said only at a fairly contextualized level of description of the practices of scientists at rather specific times and places.

Nonetheless, scientific change is connected with many other key issues in philosophy of science and broader epistemology, such as realism, rationality and relativism. The present article does not attempt to address them all. Higher-order debates regarding the methods of historiography or the epistemology of science, or the disciplinary differences between History and Philosophy, while important and interesting, represent an iteration of reflection on top of scientific change itself, and so go beyond the article’s scope.

Table of Contents

  1. If Science Changes, What is Science?
  2. History of Science and Scientific Change
  3. Philosophical Views on Change and Progress in Science
    1. Kuhn, Paradigms and Revolutions
      1. Key Concepts in Kuhn’s Account of Scientific Change
      2. Incommensurability as the Result of Radical Scientific Change
    2. Lakatos and Progressing and Degenerating Research Programs
    3. Laudan and Research Traditions
  4. The Social Processes of Change
    1. Fleck
    2. Hull’s Evolutionary Account of Scientific Change
  5. Cognitive Views on Scientific Change
    1. Cognitive History of Science
    2. Scientific Change and Science Education
  6. Further Reading and References
    1. Primary Sources
    2. Secondary Sources
      1. Concepts, Cognition and Change
      2. Feminist, Situated and Social Approaches
      3. The Scientific Revolution

1. If Science Changes, What is Science?

We begin with some organizing remarks. It is interesting to note at the outset the reflexive nature of the topic of scientific change. A main concern of science is understanding physical change, whether it be motions, growth, cause and effect, the creation of the universe or the evolution of species. Scientific views of change have influenced philosophical views of change and of identity, particularly among philosophers impressed by science's success at predicting and controlling change. These philosophical views are then reflected back, through the history and philosophy of science, as images of how science itself changes, of how its theories are created, evolve and die. Models of change from science—evolutionary, mechanical, revolutionary—often serve as models of change in science.

This makes it difficult to disentangle the actual history of science from our philosophical expectations about it. And the historiography and the philosophy of science do not always live together comfortably. Historians balk at the evaluative, forward-looking, and often necessitarian claims of standard philosophical reconstructions of scientific events. Philosophers, for their part, have argued that details of the history of science matter little to a proper theory of scientific change, and that a distinction can and should be made between how scientific ideas are discovered and how they are justified. On this view, beneath the wide-ranging, messy, and contingent happenings which led to our current scientific outlook, there lies a progressive, systematically evolving activity waiting to be rationally reconstructed.

Clearly, to tell any story of ‘science changing’ means looking beneath the surface of those changes in order to find something that remains constant, the thing which remains science. Conversely, what one takes to be the demarcating criteria of science will largely dictate how one talks about its changes. What part of human history is to be identified with science? Where does science start and where does it end? The breadth of science has a dimension across concurrent events as well as across the past and future. That is, it has both synchronic (at a time) and diachronic (over time) dimensions. Science will consist of a range of contemporary events which need to be demarcated. But likewise, science has a temporal breadth: a beginning, or possibly several beginnings, and possibly several ends.

The synchronic dimension of science is one way views of scientific change can be distinguished. On one hand there are logical or rationalistic views according to which scientific activity can be reduced to a collection of objective, rational decisions of a number of individual scientists. On this view, the most significant changes in science can each be described through the logically-reconstructable actions and words of one historical figure, or at most a very few. According to many of the more recent views, however, an adequate picture of science cannot be formed with anything less than the full context of social and political structures: the personal, institutional, and cultural relations scientists are a part of. We look at some of these broader sociological views in the section on the social processes of change.

Historians and philosophers of science have also wanted to “broaden” science diachronically, to historicize its content, such that the justifications of science, or even its meanings, cannot be divorced from their past. We will begin with the most influential figure for history and philosophy of science in North America in the last half-century: Thomas Kuhn. Kuhn's work in the middle of the last century was primarily a reaction to the then-prevalent rationalistic and ahistorical view described in the previous paragraph. Along with Kuhn, we describe the closely related views of Imre Lakatos and Larry Laudan. For an introduction to the most influential philosophical accounts of the diachronic development of science, see Losee 2004.

When Kuhn and the others advanced their new views on the development of science into Anglo-Saxon philosophy of science, history and sociology were already an important part of the landscape of Continental history and philosophy of science. A discussion of these views can be found as part of the sociology of science section as well. The article concludes with more recent naturalized approaches to scientific change, which turn to cognitive science for accounts of scientific understanding and how that understanding is formed and changed, as well as suggestions for further reading.

Science itself, at least in a form recognizable to us, is a relatively recent phenomenon. Although a matter of debate, the canonical view of the history of scientific change is that its seminal event is the one tellingly labeled the Scientific Revolution, usually dated to the sixteenth and seventeenth centuries. The first historiographies of science—as much construction of the revolution as they were documentation—were not far behind, coming in the eighteenth and nineteenth centuries. Professionalization of the history of science, characterized by reflections on the telling of the history of science, followed later. We begin our story there.

2. History of Science and Scientific Change

As history of science professionalized, becoming a separate academic discipline in the twentieth century, scientific change was seen early on as an important theme within the discipline. Admittedly, the idea of radical change was not a key notion for early practitioners of the field such as George Sarton (1884-1956), the father of history of science in the United States, but with the work of historians of science such as Alexandre Koyré (1892-1964), Herbert Butterfield (1900-1979) and A. Rupert Hall (1920-2009), radical conceptual transformations came to play a much more important role.

One of the early outcomes of this interest in change was the volume Scientific Change (Crombie, 1963) in which historians of science covering the span of science from the physical to the biological sciences, and the span of history from antiquity to modern science, all investigated the conditions for scientific change by examining cases from a multitude of periods, societies, and scientific disciplines. The introduction to Crombie's volume presented a large number of questions regarding scientific change that remained key issues in both history and philosophy of science for several decades:

What were the essential changes in scientific thought and how were they brought about? What was the part played in the initiation of change by mutations in fundamental ideas leading to new questions being asked, new problems being seen, new criteria of satisfactory explanation replacing the old? What was the part played by new technical inventions in mathematics and experimental apparatus; by developments in pure mathematics; by the refinements of measurement; by the transference of ideas, methods and information from one field of study to another? What significance can be given to the description and use of scientific methods and concepts in advance of scientific achievement? How have methods and concepts of explanation differed in different sciences? How has language changed in changing scientific contexts? What parts have chance and personal idiosyncrasy played in discovery? How have scientific changes been located in the context of general ideas and intellectual motives, and to what extent have extra-scientific beliefs given theories their power to convince? … How have scientific and technical changes been located in the social context of motives and opportunities? What value has been put on scientific activity by society at large, by the needs of industry, commerce, war, medicine and the arts, by governmental and private investment, by religion, by different states and social systems? To what external social, economic and political pressures have science, technology and medicine been exposed? Are money and opportunity all that is needed to create scientific and technical progress in modern society? (Crombie, 1963, p. 10)

Of particular interest among historians of science have been the changes associated with scientific revolutions, and especially the period often referred to as the Scientific Revolution, seen as the sum of achievements in science from Copernicus to Newton (Cohen 1985; Hall 1954; Koyré 1965). The word ‘revolution’ had begun to be applied in the eighteenth century to the developments in astronomy and physics, and was later extended to the change in chemical theory that emerged with the work of Lavoisier in the 1770s and to the change in biology initiated by Darwin’s work in the mid-nineteenth century. These were fundamental changes that not only overturned the reigning theories but also carried significant consequences outside their respective scientific disciplines. In most of the early work in history of science, scientific change in the form of scientific revolutions was something which happened only rarely. This view was changed by the historian and philosopher of science Thomas S. Kuhn, whose monograph The Structure of Scientific Revolutions (1962; 2nd ed. 1970) came to influence philosophy of science for decades. In it, Kuhn argued, on the basis of historical case studies, for a change in the philosophical conception of science and its development. The notion of revolutions that he used in Structure included not only fundamental changes of theory that had a significant influence on the overall world view of both scientists and non-scientists, but also changes of theory whose consequences remained solely within the scientific discipline in which the change had taken place. This considerably widened the notion of scientific revolutions compared to that of earlier historians, and it initiated discussions among both historians and philosophers on the balance between continuity and change in the development of science.

3. Philosophical Views on Change and Progress in Science

In the British and North American schools of philosophy of science, scientific change did not become a major topic until the 1960s, when historically inclined philosophers of science, including Thomas S. Kuhn (1922-1996), Paul K. Feyerabend (1924-1994), N. Russell Hanson (1924-1967), Michael Polanyi (1891-1976), Stephen Toulmin (1922-2009) and Mary Hesse (b. 1924), started questioning the assumptions of logical positivism, arguing that philosophy of science should be concerned with the historical structure of science rather than with an ahistorical logical structure, which they found to be a chimera. The occupation with history led naturally to a focus on how science develops, including whether science progresses incrementally or through changes which represent some kind of discontinuity.

Similar questions had also been discussed among Continental scholars. The development of the theory of relativity and of quantum mechanics in the beginning of the twentieth century suggested that empirical science could overturn deeply held intuitions and introduce counter-intuitive new concepts and ideas; and several European philosophers, among them the German neo-Kantian philosopher Ernst Cassirer (1874-1945), directed their work towards rejecting Kant’s absolute categories in favor of categories that may change over time. In France, the historian and philosopher of science Gaston Bachelard (1884-1962) likewise noted that what Kant had taken to be absolute preconditions for knowledge had turned out to be wrong in the light of modern physics. On Bachelard’s view, what had seemed to be absolute preconditions for knowledge were instead merely contingent conditions. These conditions were still required for scientific reasoning, and therefore, Bachelard concluded, a full account of scientific reasoning could only be derived from reflections upon its historical conditions and development. Based on this analysis of the historical development of science, Bachelard advanced a model of scientific change according to which conceptions of nature are from time to time replaced by radically new conceptions – what Bachelard called epistemological breaks.

Bachelard’s view was later developed and modified by the historian and philosopher of science, and student of Bachelard, Georges Canguilhem (1904-1995), and by the philosopher and social historian, and student of Canguilhem, Michel Foucault (1926-1984). Beyond the teacher-student connections, there are other commonalities which unify this tradition. In North America and England, among those who wanted to make philosophy more like science, or to import into philosophical practice lessons from the success of science, the exemplar was almost always physics. The most striking and profound advances in science seemed to be, after all, in physics, namely the quantum and relativity revolutions. But on the Continent, the model sciences were just as often linguistics or sociology, biology or anthropology, and not limited to those. Canguilhem's interest in changing notions of the normal versus the pathological, for example, arising from an interest in medicine, typified the more human-centered theorizing of the tradition. What we as humans know, how we know it, and how we successfully achieve our aims are the guiding questions, not how to escape our human condition or situatedness.

Foucault described his project as an archaeology of the history of human thought and its conditions. He compared his project to Kant’s critique of reason, but with the difference that Foucault’s interest was in a historical a priori; that is, in what seem to be, for a given period, the necessary conditions governing reason, and in how these constraints have a contingent historical origin. Hence, in his analysis of the development of the human sciences from the Renaissance to the present, Foucault described various so-called epistemes that determined the conditions for all knowledge of their time, and he argued that the transition from one episteme to the next happens as a break that entails radical changes in the conception of knowledge. Michael Friedman's work on the relativized and dynamic a priori can be seen as a continuation of this thread (Friedman 2001). For a detailed account of the work of Bachelard, Canguilhem and Foucault, see Gutting (1989).

With the advent of Kuhn’s Structure, “non-Continental” philosophy of science also started focusing in its own way on the historical development of science, often apparently unaware of the earlier tradition, and in the decades to follow alternative models were developed to describe how theories supersede their predecessors, and whether progress in science is gradual and incremental or discontinuous. Among the key contributions to this discussion, besides Kuhn’s famous paradigm-shift model, were Imre Lakatos’ (1922-1974) model of progressing and degenerating research programs and Larry Laudan’s (b. 1941) model of successive research traditions.

a. Kuhn, Paradigms and Revolutions

One of the key contributions that provoked interest in scientific change among philosophers of science was Thomas S. Kuhn’s seminal monograph The Structure of Scientific Revolutions from 1962. The aim of this monograph was to question the view that science is cumulative and progressive, and Kuhn opened with: “History, if viewed as a repository for more than anecdote or chronology, could produce a decisive transformation in the image of science by which we are now possessed” (p. 1). History was expected to do more than just chronicle the successive increments of, or impediments to, our progress towards the present. Instead, historians and philosophers should focus on the historical integrity of science at a particular time in its development, and should analyze science as it developed. Rather than describing a cumulative, teleological development toward the present, history of science should see science as developing from a given point in history. Kuhn expected that a new image of science would emerge from this diachronic historiography. In the rest of Structure he used historical examples to question the view of science as a cumulative development in which scientists gradually add new pieces to the ever-growing aggregate of scientific knowledge, and instead he described how science develops through successive periods of tradition-preserving normal science and tradition-shattering revolutions. For introductions to Kuhn’s philosophy of science, see for example Andersen 2001, Bird 2000, and Hoyningen-Huene 1993.

i. Key Concepts in Kuhn’s Account of Scientific Change

On Kuhn’s model, science proceeds in key phases. The predominant phase is normal science which, while progressing successfully in its aims, inherently generates what Kuhn calls anomalies. In brief, anomalies lead to crisis and extraordinary science, followed by revolution, and finally a new phase of normal science.

Normal science is characterized by a consensus which exists throughout the scientific community as to (a) the concepts used in communication among scientists, (b) the problems which can meaningfully be formulated as relevant research problems, and (c) a set of exemplary problem solutions that serve as models in solving new problems. Kuhn first introduced the notion 'paradigm' to denote these shared communal aspects, as well as the tools used by that community for solving its research problems. Because so much was apparently captured by the term ‘paradigm’, Kuhn was criticized for using the term in ambiguous ways (see especially Masterman 1970). He later offered the alternative notion 'disciplinary matrix', covering (a) symbolic generalizations, or laws in their most fundamental forms, (b) beliefs about which objects and phenomena exist in the world, (c) values by which the quality of research can be evaluated, and (d) exemplary problems and problem situations. In normal science, scientists draw on the tools provided by the disciplinary matrix, and they expect the solutions of new problems to be in consonance with the descriptions and solutions of the problems that they have previously examined. But sometimes these expectations are violated. Problems may turn out not to be solvable in an acceptable way, and then they instead represent anomalies for the reigning theories.

Not all anomalies are equally severe. Some discrepancy can always be found between theoretical predictions and experimental findings, and this does not necessarily challenge the foundations of normal science. Hence, some anomalies can be neglected, at least for some time. Others may find a solution within the reigning theoretical framework. Only a small number will be so severe and so persistent that they suggest the tools provided by the accepted theories must be given up, or at least seriously modified. Science has then entered the crisis phase of Kuhn's model. Even in crisis, revolution may not be immediately forthcoming. Scientists may “agree” that no solution is likely to be found in the present state of their field and simply set the problems aside for future scientists to solve with more developed tools, while they return to normal science in its present form. More often, though, when the crisis has become severe enough to call the foundations into question and a new theory can solve the anomalies, that theory gradually gains acceptance until eventually a new consensus is established among the members of the scientific community regarding the new theory. Only in this case has a scientific revolution occurred.

Importantly though, even severe anomalies are not simply falsifying instances. Severe anomalies cause scientists to question the accepted theories, but the anomalies do not lead the scientists to abandon the paradigm without an alternative to replace it. This raises a crucial question regarding scientific change on Kuhn's model: where do new theories come from? Kuhn said little about this creative aspect of scientific change, a topic that later became central to cognitively inclined philosophers of science working on scientific change (see the section on Cognitive Views below). Kuhn described merely how severe anomalies would become the fixation point for further research, while attempts to solve them might gradually diverge more and more from the solution hitherto accepted as exemplary, until, in the course of this development, embryonic forms of alternative theories were born.

ii. Incommensurability as the Result of Radical Scientific Change

For Kuhn the relation between normal science traditions separated by a scientific revolution cannot be described as incorporation of one into the other, or as incremental growth. To describe the relation, Kuhn adopted the term ‘incommensurability’ from mathematics, claiming that the new normal-scientific tradition which emerges from a scientific revolution is not only incompatible but often actually incommensurable with that which has gone before.

Kuhn's notion of incommensurability covered three different aspects of the relation between the pre- and post-revolutionary normal science traditions: (1) a change in the set of scientific problems and the way in which they are attacked, (2) conceptual changes, and (3) a change, in some sense, in the world of the scientists’ research. This latter, “world-changing” aspect is the most fundamental aspect of incommensurability. However, it is a matter of great debate exactly how strongly we should take Kuhn's meaning, for instance when he stated that “though the world does not change with a change of paradigm, the scientist afterwards works in a different world” (p. 121). To make sense of these claims it is necessary to distinguish between two different senses of the term ‘world’: the world as the independent object which scientists investigate and the world as the perceived world in which scientists practice their trade.

In Structure, Kuhn argued for incommensurability in perceptual terms. Drawing on results from psychological experiments showing that subjects’ perceptions of various objects were dependent on their training and experience, Kuhn suspected that something like a paradigm was prerequisite to perception itself and that, therefore, different normal science traditions would cause scientists to perceive differently. But when it comes to visual gestalt-switch images, one has recourse to the actual lines drawn on the paper. Contrary to this possibility of employing an ‘external standard’, Kuhn claimed that scientists can have no recourse above or beyond what they see with their eyes and instruments. For Kuhn, the change in perception cannot be reduced to a change in the interpretation of stable data, simply because stable data do not exist. Kuhn thus strongly attacked the idea of a neutral observation-language; an attack similarly launched by other scholars during the late 1950s and early 1960s, most notably Hanson (Hanson 1958).

These aspects of incommensurability have important consequences for the communication between proponents of competing normal science traditions and for the choice between such traditions. Recognizing different problems and adopting different standards and concepts, scientists may talk past each other when debating the relative merits of their respective paradigms. But if they do not agree on the list of problems that must be solved or on what constitutes an acceptable solution, there can be no point-by-point comparison of competing theories. Instead, Kuhn claimed that the role of paradigms in theory choice was necessarily circular in the sense that the proponents of each would use their own paradigm to argue in that paradigm’s defense. Paradigm choice is a conversion that cannot be forced by logic and neutral experience.

This view has led many critics of Kuhn to the misunderstanding that he saw paradigm choice as devoid of rational elements. However, Kuhn did emphasize that although paradigm choice cannot be justified by proof, this does not mean that arguments are irrelevant or that scientists are not rationally persuaded to change their minds. On the contrary, Kuhn argued that “Individual scientists embrace a new paradigm for all sorts of reasons and usually for several at once” (Kuhn 1996, p. 152). According to Kuhn, such arguments are, first of all, about whether the new paradigm can solve the problems that have led the old paradigm to a crisis, whether it displays a quantitative precision strikingly better than its older competitor, and whether it predicts phenomena that had been entirely unsuspected while the old paradigm prevailed. Aesthetic arguments, based on simplicity for example, may enter as well.

Another common misunderstanding of Kuhn’s notion of incommensurability is that it implies a total discontinuity between the normal science traditions separated by a scientific revolution. Kuhn emphasized, rather, that a new paradigm often incorporates much of the vocabulary and apparatus, both conceptual and manipulative, of its predecessor. Paradigm shifts may be “non-cumulative developmental episodes …,” but the former paradigm can be replaced “... in whole or in part …” (Ibid., p. 2). In this way, parts of the achievements of a normal science tradition will turn out to be permanent, even across a revolution. “[P]ostrevolutionary science invariably includes many of the same manipulations, performed with the same instruments and described in the same terms ...” (Ibid., pp. 129-130). Incommensurability is a relation that holds only between minor parts of the object domains of two competing theories.

b. Lakatos and Progressing and Degenerating Research Programs

Lakatos agreed with Kuhn’s insistence on the tenacity of some scientific theories and the rejection of naïve falsification, but he was opposed to Kuhn’s account of the process of change, which he saw as “a matter for mob psychology” (Lakatos, 1970, p. 178). Lakatos therefore sought to improve upon Kuhn’s account by providing a more satisfactory methodology of scientific change, along with a meta-methodological justification of the rationality of that method, both of which were seen to be either lacking or significantly undeveloped in Kuhn’s early writings. On Lakatos’ account, a scientific research program consists of a central core that is taken to be inviolable by scientists working within the research program, and a collection of auxiliary hypotheses that are continuously developing as the core is applied. In this way, the methodological rules of a research program divide into two different kinds: a negative heuristic that tells the scientists which paths of research to avoid, and a positive heuristic that tells the scientists which paths to pursue. On this view, all tests are necessarily directed at the auxiliary hypotheses which come to form a protective belt around the hard core of the research program.

Lakatos aims to reconstruct changes in science as occurring within research programs. A research program is constituted by the series of theories resulting from adjustments to the protective belt, all of which share a hard core. As adjustments are made in response to problems, new problems arise, and over a series of theories there will be a collective problem-shift. A series of theories is theoretically progressive, or constitutes a theoretically progressive problem-shift, if and only if there is at least one theory in the series which has some excess empirical content over its predecessor. If this excess empirical content is also corroborated, the series of theories is empirically progressive. A problem-shift is progressive, then, if it is both theoretically and empirically progressive; otherwise it is degenerating. A research program is successful if it leads to progressive problem-shifts and unsuccessful if it leads to degenerating problem-shifts. The further aim of Lakatos’ account, in other words, is to discover, through reconstruction in terms of research programs, where progress is made in scientific change.
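These definitions can be put schematically. The notation below is ours, not Lakatos’ own: write $E(T_i)$ for the empirical content (the set of empirical consequences) of the $i$-th theory in a research program’s series $T_1, \dots, T_k$, and $C$ for the set of corroborated statements.

```latex
\begin{aligned}
&\text{theoretically progressive:} &&\exists\, i:\; E(T_{i+1}) \setminus E(T_i) \neq \varnothing\\[2pt]
&\text{empirically progressive:}   &&\exists\, i:\; \bigl(E(T_{i+1}) \setminus E(T_i)\bigr) \cap C \neq \varnothing
\end{aligned}
```

On this sketch, a problem-shift is progressive when both conditions hold and degenerating otherwise, and a research program is successful insofar as it yields progressive problem-shifts.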

The rationally reconstructive aspect of Lakatos’ account has been a target of criticism. The notion of empirical content, for instance, carries a heavy burden in the account. In order to assess the progressiveness of a program, one would seem to need a measure of the empirical content of theories in order to judge when there is excess content. Without some such measure, however, Lakatos' methodology is dangerously close to being vacuous or ad hoc.

We can instead take the increase in empirical content to be a meta-methodological principle, one which dictates an aim for scientists (that is, to increase empirical knowledge), while cashing this out at the methodological level by identifying progress in research programs with making novel predictions. The importance of novel predictions, in other words, can be justified by their leading to an increase in the empirical content of the theories of a research program. A problem-shift which results in novel predictions can be taken to entail an increase in empirical content. It remains a worry, however, whether such an inference is warranted, since it seems to simply assume novelty and cumulativity go together unproblematically. That they might not was precisely Kuhn's point.

A second objection is that Lakatos' reconstruction of scientific change through appeal to a unified method runs counter to the prevailing attitude among philosophers of science from the second half of the twentieth century on, according to which there is no unified method for all of science. At best, anything they all have in common methodologically will be so general as to be unhelpful or uninteresting.

At any rate, Lakatos does offer us a positive heuristic for the description and even explanation of scientific change. For him, change in science is a difficult and delicate thing, requiring balance and persistence. “Purely negative, destructive criticism, like ‘refutation’ or demonstration of an inconsistency does not eliminate a program. Criticism of a program is a long and often frustrating process and one must treat budding programs leniently. One may, of course, whop up on [criticize] the degeneration of a research program, but it is only constructive criticism which, with the help of rival research programs, can achieve real successes; and dramatic spectacular results become visible only with hindsight and rational reconstruction” (Lakatos, 1970, p. 179).

c. Laudan and Research Traditions

In his Progress and Its Problems: Towards a Theory of Scientific Growth (1977), Laudan defined a research tradition as a set of general assumptions about the entities and processes in a given domain and about the appropriate methods to be used for investigating the problems and constructing the theories in that domain. Such research traditions should be seen as historical entities created and articulated within a particular intellectual environment, and as historical entities they would “wax and wane” (p. 95). On Laudan’s view, it is important to consider scientific change both as changes that may appear within a research tradition and as changes of the research tradition itself.

The key engine driving scientific change for Laudan is problem solving. Changes within a research tradition may be minor modifications of subordinate, specific theories, such as modifications of boundary conditions, revisions of constants, refinements of terminology, or expansion of a theory’s classificatory network to encompass new discoveries. Such changes solve empirical problems, essentially those problems Kuhn conceives of as anomalies. But, contrary to Kuhn's normal science and to Lakatos' research programs, Laudan held that changes within a research tradition might also involve changes to its most basic core elements. Severe anomalies which are not solvable merely by modification of specific theories within the tradition may be seen as symptoms of a deeper conceptual problem. In such cases scientists may instead explore what sorts of (minimal) adjustments could be made in the deep-level methodology or ontology of that research tradition (p. 98). When Laudan looked at the history of science, he saw Aristotelians who had abandoned the Aristotelian doctrine that motion in a void is impossible, and Newtonians who had abandoned the Newtonian demand that all matter has inertial mass, and he saw no reason to claim that they were no longer working within those research traditions.

Solutions to conceptual problems may even result in a theory with less empirical support and still count as progress, since it is overall problem-solving effectiveness (not all problems are empirical ones) which is the measure of success of a research tradition (Laudan 1996). Most importantly for Laudan, if there are what can be called revolutions in science, they reflect different kinds of problems, not a different sort of activity. David Pearce calls this Laudan's methodological monism (see Pearce 1984). For Kuhn and Lakatos, identification of a research tradition (or program, or paradigm) could be made at the level of specific invariant, non-rejectable elements. For Laudan, there is no such class of sacrosanct elements within a research tradition—everything is open to change over time. For example, while absolute time and space were seen as part of the unrejectable core of Newtonian physics in the eighteenth century, they were no longer seen as such a century later. This leaves a dilemma for Laudan’s view. If research traditions undergo deep-level transformations of their problem-solving apparatus, this would seem to constitute a change significant enough to warrant considering it the basis of a new research tradition. On the other hand, if the activity of problem solving is strong enough to provide the identity conditions of a tradition across changes, consistency might force us to identify all problem-solving activity as part of one research tradition, blurring distinctions between science and non-science. Distinguishing between a change within a research tradition and the replacement of one research tradition by another seems both arbitrary and open-ended. One way of addressing this problem is by turning from the internal characteristics of science alone to external factors of social and historical context.

4. The Social Processes of Change

Science is not just a body of facts or a set of sentences. However one characterizes its content, that content must be embodied in institutions and practices composed of scientists themselves. An important question with respect to scientific change, then, regards how “science” is constructed out of scientists, and which unit of analysis (the individual scientist or the community) is the proper one for understanding the dynamic of scientific change. Popper's falsificationism was very much a matter of personal responsibility and reflection. Kuhn, on the other hand, saw scientific change as a change of community and generations. While Structure may have been largely responsible for making North American philosophers aware of the importance of historical and social context in shaping scientific change, Kuhn was certainly not the first to theorize about it. Kuhn himself recognized his views in the earlier work of Ludwik Fleck (see for example Brorson and Andersen 2001, Babich 2007 and Mössner 2011 for comparisons between the views of Kuhn and Fleck).

a. Fleck

As early as the mid-1930s, Ludwik Fleck (1896-1961) gave an account of how thoughts and ideas change through their circulation within the social strata of a thought-collective (Denkkollektiv) and how this thought-traffic contributes to the process of verification. Drawing on a case study from medicine on the development of a diagnostic test for syphilis, Fleck argued in his 1935 monograph Genesis and Development of a Scientific Fact that a thought-collective is a functional unit in which people who interact intellectually are tied together through a particular ‘thought-style’ that imposes narrow constraints upon the thinking of the individual. The thought-style is dogmatically transmitted from one generation to the next, by initiation, training, education or other devices whose aim is introduction into the collective. Most people participate in numerous thought-collectives, and any individual therefore possesses several overlapping thought-styles and may become a carrier of influence between the various thought-collectives in which they participate. This traffic of thoughts outside the collective is linked to the most outstanding alterations in thought-content. The ensuing modification and assimilation of a thought according to the foreign thought-style is a significant source of divergent thinking. According to Fleck, any circulation of thoughts therefore also causes transformation of the circulated thought.

In Kuhn’s Structure, the distinction between the individual scientist and the community as the agent of change was not quite clear, and Kuhn later regretted having used the notion of a gestalt switch to characterize changes in a community because “communities do not have experiences, much less gestalt switches.” Consequently, he realized that “to speak, as I repeatedly have, of a community’s undergoing a gestalt switch is to compress an extended process of change into an instant, leaving no room for the microprocesses by which the change is achieved” (Kuhn 1989, p. 50). Fleck, by contrast, did not help himself to an unexamined notion of communal change; he made the process by which the individual interacts with the collective central to his account of scientific development and the joint construction of scientific thought. What the accounts have in common is the view that the social plays a role in scientific change through the social shaping of science’s content. On this view, it is not a relation between scientist and physical world that is constitutive of scientific knowledge, but a relation between scientists and the discipline to which they belong. That relation can be restrictive of change in science. It can also provide the dynamics for change.

b. Hull’s Evolutionary Account of Scientific Change

Several philosophers of science have held the view that the dynamics of scientific change can be seen as an evolutionary process in which some kind of selection plays a central role. One of the most detailed evolutionary accounts of scientific change has been provided by David Hull (1935-2010). On Hull's account of scientific change, the development of science is a function of the interplay between cooperation and competition for credit among scientists. Hence, selection in the form of citations plays a central role in this account.

The basic structure of Hull’s account is that, for the content elements of science—problems and their solutions, accumulated data, but also beliefs about the goals of science, proper ways to realize these goals, and so forth—to survive in science, they must be transmitted more or less intact through history. That is, they must be seen as replicators that pass on their structure in successive replication. Hence, conceptual replication is a matter of information being transmitted largely intact by different vehicles. These vehicles of transmission may be media such as books or journals, but also scientists themselves. Whereas books and journals are passive vehicles, scientists are active in testing and changing the transmitted ideas. They are therefore not only vehicles of transmission but also interactors, interacting with their environment in a way that causes replication to be differential and hence makes scientific change possible.

Hull did not elaborate much on the inner structure of differential replication, apart from arguing that the underdetermination of theory by observation made it possible. Instead, the focus of his account is on the selection mechanism that can cause some lineages of scientific ideas to cease and others to continue. First, scientists tend to behave in ways that increase their conceptual fitness. Scientists want their work to be accepted, which requires that they gain support from other scientists. One kind of support is to show that their work rests on preceding research. But that is at the same time a decrease in originality. There is a trade-off between credit and support. Scientists whose support is worth having are likely to be cited more frequently.

Second, this social process is highly structured. Scientists tend to organize into tightly knit research groups in order to develop and disseminate a particular set of views. Few scientists have all the skills and knowledge necessary to solve the problems that they confront; they therefore tend to form research groups of varying degrees of cohesiveness. Cooperating scientists may often share ideas that are identical in descent, and transmission of their contributions can be viewed as similar to kin selection. In the wider scientific community, scientists may form a deme in the sense that they use the ideas of each other much more frequently than the ideas of scientists outside the community.

Initially, criticism and evaluation come from within a research group. Scientists expose their work to severe tests prior to publication, but some things are taken so much for granted that it never occurs to them to question them. After publication, criticism and evaluation shift to scientists outside the group, especially opponents who are likely to have different—though equally unnoticed—presuppositions. The self-correction of science depends on other scientists having different perspectives and different career interests; scientists’ career interests are not damaged by refuting the views of their opponents.

5. Cognitive Views on Scientific Change

Scientific change received new interest during the 1980s and 1990s with the emergence of cognitive science, a field that draws on cognitive psychology, cognitive anthropology, linguistics, philosophy, artificial intelligence and neuroscience. Historians and philosophers of science adapted results from this interdisciplinary work to develop new approaches to their field. Among these approaches are Paul Churchland’s (b. 1942) neurocomputational perspective (Churchland, 1989; Churchland, 1992), Ronald Giere’s (b. 1938) work on cognitive models of science (Giere, 1988), Nancy Nersessian’s (b. 1947) cognitive history of science (Nersessian, 1984; Nersessian, 1992; Nersessian, 1995a; 1995b), and Paul Thagard’s (b. 1950) computational philosophy of science (Thagard, 1988; Thagard, 1992). Rather than explaining scientific change in terms of a priori principles, these new approaches aim to be naturalistic, drawing on cognitive science for insights into how humans generally construct and develop conceptual systems, and using these insights in analyses of scientific change as conceptual change. (For an overview of research on conceptual change, see Vosniadou, 2008.)

a. Cognitive History of Science

Much of the early work on conceptual change emphasized the discontinuous character of major changes by using metaphors like ‘gestalt switch’, indicating that such major changes happen all at once. This idea had originally been introduced by Kuhn, but in his later writings he admitted that his use of the gestalt switch metaphor had its origin in his experience as a historian working backwards in time and that, consequently, it was not necessarily suitable for describing the experience of the scientists taking part in scientific development. Instead of dramatic gestalt shifts, it is equally plausible that the historical actors experienced micro-processes of conceptual development. The development of science may happen stepwise, through minor changes that nevertheless sum over time to something that appears revolutionary to the historian looking backward and comparing the original conceptual structures to the end product of the subsequent changes. Kuhn realized this, but also saw that his own work did not offer any details on how such micro-processes would work, though it did leave room for their exploration (Kuhn 1989).

Exploration of conceptual microstructures has been one of the main issues within the cognitive history and philosophy of science. Historical case studies of conceptual change have been carried out by many scholars, including Nersessian, Thagard, and the Andersen-Barker-Chen group (see for example Nersessian, 1984; Thagard, 1992; Andersen, Barker, and Chen, 2006).

Some of the early work in cognitive history and philosophy of science focused on mapping conceptual structures at different stages during scientific change (see for example Thagard, 1990; Thagard and Nowak, 1990; Nersessian and Resnick, 1989) and on developing typologies of conceptual change in terms of its degree of severity (Thagard, 1992). These approaches are useful for comparing different stages of scientific change and for discussing such issues as incommensurability. However, they do not provide much detail on the creative process through which such changes are produced.

Other lines of research have focused on the reasoning processes used in creating new concepts during scientific change. One of the early contributions to this line of work came from Shapere, who argued that, as concepts evolve, chains of reasoning connect the successive versions of a concept. These chains of reasoning thereby also establish continuity in scientific change, and this continuity can only be fully understood by analyzing the reasons that motivated each step in the chain of changes (Shapere 1987a; 1987b). Over the last two decades, this approach has been extended and substantiated by Nersessian (2008a; 2008b), whose work has focused on the nature of the practices employed by scientists in creating, communicating and replacing scientific representations within a given scientific domain. She argues that conceptual change is a problem-solving process. Model-based reasoning processes, especially, are used to facilitate and constrain abstraction and the integration of information from multiple sources during this process.

b. Scientific Change and Science Education

Aiming at insights into general mechanisms of conceptual development, some of the cognitive approaches have been directed toward investigating not only the development of science but also how sciences are learned. During the 1980s and early 1990s, several scholars argued that conceptual divides of the same kind as described by Kuhn’s incommensurability thesis might exist in science education between teacher and student. Science teaching should, therefore, address these misconceptions in an attempt to facilitate conceptual change in students. Part of this research incorporated the (controversial) thesis that the development of ideas in students mirrors the development of ideas in the history of science—that cognitive ontogeny recapitulates scientific phylogeny. For the field of mechanics in particular, research was done to show that children’s naïve beliefs parallel early scientific beliefs, such as impetus theories (Champagne, Klopfer, and Anderson, 1980; Clement, 1983; McClosky, 1983). However, most research went beyond the search for analogies between students’ naïve views and historically held beliefs. Instead, researchers investigated, through the available historical records, the cognitive processes employed by scientists in constructing scientific concepts and theories more generally, focusing on the kinds of reasoning strategies communicated in those records (see Nersessian, 1992; Nersessian, 1995a). This work still assumed that the cognitive activities of scientists in their construction of new scientific concepts were relevant to learning, but it marked a return to a view of the history of science as a repository of case studies demonstrating how scientific concepts are constructed and changed. In assuming a conceptual continuity between scientific understanding “then and now,” the cognitive approach had moved away from the Kuhnian emphasis on incommensurability and gestalt-shift conceptual change.

6. Further Reading and References

It is impossible to disentangle entirely the history and philosophy of scientific change from a great number of other issues and disciplines. We have not, for instance, addressed here the epistemology of science or the role of experiments (or of thought experiments) in science. Questions of whether science, or knowledge in general, is approaching truth, tracking truth, or approximating to truth are debates taken up in epistemology; for more on those issues one should consult the relevant references. Whether science progresses (and not just changes) is a question which supports its own literature as well. Many iterations of interpretation, criticism and reply have been given to the challenges of incommensurability, non-cumulativity, and the alleged irrationality of science. Beliefs in scientific progress founded on a naïve realism, according to which science is getting ever closer to a literally true picture of the world, have been criticized soundly. A simple version of the criticism is the pessimistic meta-induction: every scientific image of reality in the past has been proven wrong, therefore all future scientific images will be wrong (see Putnam 1978; Laudan 1984). In response to challenges to realism, much attention has been paid to structural realism, an attempt to describe some underlying mathematical structure which is preserved even across major theory changes. On this view, past theories were not entirely wrong, and not entirely discarded, because they had some of the structure correct, albeit wrongly interpreted or embedded in a mistaken ontology or broader world view which has since been abandoned.
On the question of the unity of science, on whether the methods of science are universal or plural, and on whether they are rational, see the references given for Cartwright (2007), Feyerabend (1974), Mitchell (2000; 2003) and Kellert, et al (2006). For feminist criticisms of, and alternatives to, traditional philosophy and history of science the interested reader should consult Longino (1990; 2002); Garry, et al (1996); Keller, et al (1996); Ruetsche (2004). Clough (2004) puts forward a program combining feminism and naturalism. Among twenty-first century approaches to the historicity of science are Friedman's dynamic a priori approach (Friedman 2001), the evolving subject-object relation of McGuire and Tuchanska (2000), and the complementary science of Hasok Chang (2004).

Finally, on the topic of the Scientific Revolution, there are the standard Cohen (1985), Hall (1954) and Koyré (1965); but for subsequent discussion of the appropriateness of revolution as a metaphor in the historiography of science we recommend the collection Rethinking the Scientific Revolution, edited by Osler (2000).

a. Primary Sources

  • Crombie, A. C. (1963). Scientific Change: Historical studies in the intellectual, social and technical conditions for scientific discovery and technical invention, from antiquity to the present. London: Heinemann.
  • Feyerabend, P. (1974) Against Method. London: New Left Books.
  • Feyerabend, P. (1987) Farewell to Reason. London: Verso.
  • Fleck, L. (1979). Genesis and Development of a Scientific Fact (edited by T. J. Trenn and R. K. Merton, foreword by Thomas Kuhn). Chicago: University of Chicago Press.
  • Hull, D.L. (1988). Science as a Process: Evolutionary Account of the Social and Conceptual Development of Science. Chicago: The University of Chicago Press.
  • Kuhn, T. S. (1970). The Structure of Scientific Revolutions. Chicago: Chicago University Press.
  • Kuhn, T. S. (1989). Speaker's Reply. In S. Allén (Ed.), Possible Worlds in Humanities, Arts and Sciences. Berlin: de Gruyter. 49-51.
  • Lakatos, I. (1970). Falsification and the Methodology of Scientific Research Programs. In I. Lakatos and A. Musgrave, eds., Criticism and the Growth of Knowledge. Cambridge: Cambridge University Press. 91-196.
  • Laudan, L. (1977). Progress and Its Problems. Towards a Theory of Scientific Growth. Berkeley: University of California Press.
  • Laudan, L. (1996). Beyond Positivism and Relativism: Theory, Method, and Evidence. Boulder: Westview Press.
  • Toulmin, S. (1972). Human Understanding: The Collective Use and Evolution of Concepts. Princeton: Princeton University Press.

b. Secondary Sources

  • Andersen, H. (2001). On Kuhn. Belmont, CA: Wadsworth.
  • Babich, B. E. (2003). From Fleck’s Denkstil to Kuhn’s paradigm: conceptual schemes and incommensurability. International Studies in the Philosophy of Science, 17: 75-92.
  • Bird, A. (2000). Thomas Kuhn. Chesham: Acumen.
  • Brorson, S. and Andersen, H. (2001). Stabilizing and changing phenomenal worlds: Ludwik Fleck and Thomas Kuhn on scientific literature. Journal for General Philosophy of Science, 32: 109-129.
  • Cartwright, Nancy (2007). Hunting Causes and Using Them. Cambridge: Cambridge University Press.
  • Chang, H. (2004). Inventing Temperature: Measurement and Scientific Progress. Oxford: Oxford University Press.
  • Clough, S. (2004). Having It All: Naturalized Normativity in Feminist Science Studies. Hypatia, 19(1): 102-118.
  • Feyerabend, P. K. (1981). Explanation, reduction and empiricism. In Realism, Rationalism and Scientific Method: Philosophical Papers. Volume 1. Cambridge: Cambridge University Press. 44-96.
  • Friedman, M. (2001). Dynamics of Reason. Stanford: CSLI Publications.
  • Gutting G. (1989). Michel Foucault's archaeology of scientific reason. Cambridge: Cambridge University Press
  • Gutting G. (2005). Continental philosophy of science. Oxford: Blackwell
  • Hall, A.R. (1954). The Scientific Revolution 1500-1800. Boston: Beacon Press.
  • Hoyningen-Huene, P. (1993). Reconstructing Scientific Revolutions, Chicago: University of Chicago Press.
  • Losee, J. (2004). Theories of Scientific Progress. London: Routledge.
  • McGuire, J. E. and Tuchanska, B. (2000). Science Unfettered. Athens: Ohio University Press.
  • Mössner, N. (2011). Thought styles and paradigms – a comparative study of Ludwik Fleck and Thomas S. Kuhn, Studies in History and Philosophy of Science 42: 362-371.

i. Concepts, Cognition and Change

  • Andersen, H., Barker, P., and Chen, X. (2006). The Cognitive Structure of Scientific Revolutions. Cambridge: Cambridge University Press.
  • Champagne, A. B., Klopfer, L. E., and Anderson, J. (1980). Factors Influencing Learning of Classical Mechanics. American Journal of Physics, 48, 1074-1079.
  • Churchland, P. M. (1989). A Neurocomputational Perspective. The Nature of Mind and the Structure of Science. Cambridge, MA: MIT Press.
  • Churchland, P. M. (1992). A deeper unity: Some Feyerabendian themes in neurocomputational form. In R. N. Giere, ed., Cognitive models of science. Minnesota studies in the philosophy of science. Minneapolis: University of Minnesota Press. 341-363.
  • Clement, J. (1983). A Conceptual Model Discussed by Galileo and Used Intuitively by Physics Students. In D. Gentner and A. L. Stevens, eds. Mental Models. Hillsdale: Lawrence Earlbaum Associates. 325-340.
  • Giere, R. N. (1988). Explaining Science: A Cognitive Approach. Chicago: University of Chicago Press.
  • Hanson, N.R.(1958). Patterns of Discovery: An Inquiry into the Conceptual Foundations of Science. Cambridge: Cambridge University Press.
  • McClosky, M. (1983). Naive Theories of Motion. In D. Gentner and A. L. Stevens (Eds.), Mental Models. Hillsdale: Lawrence Erlbaum Associates. 75-98.
  • Nersessian, N. J. (1984). Faraday to Einstein: Constructing Meaning in Scientific Theories. Dordrecht: Martinus Nijhoff.
  • Nersessian, N. J. (1992). Constructing and Instructing: The Role of "Abstraction Techniques" in Creating and Learning Physics. In R.A. Duschl and R. J. Hamilton, eds. Philosophy of Science, Cognition, Psychology and Educational Theory and Practice. Albany: SUNY Press. 48-53.
  • Nersessian, N. J. (1992). How Do Scientists Think? Capturing the Dynamics of Conceptual Change in Science. In R. N. Giere, ed. Cognitive Models of Science. Minneapolis: University of Minnesota Press. 3-44.
  • Nersessian, N. J. (1995a). Should Physicists Preach What They Practice? Constructive Modeling in Doing and Learning Physics. Science and Education, 4. 203-226.
  • Nersessian, N. J. (1995b). Opening the Black Box: Cognitive Science and History of Science. Osiris, 10. 194-211.
  • Nersessian, N. J. (2008a). Creating Scientific Concepts. Cambridge MA: MIT Press.
  • Nersessian, N. J. (2008b). Mental Modelling in Conceptual Change. In S.Vosniadou, ed. International Handbook of Research on Conceptual Change. New York: Routledge. 391-416.
  • Nersessian, N., ed. (1987). The Process of Science. Netherlands: Kluwer Academic Publisher.
  • Nersessian, N. J. and Resnick, L. B. (1989). Comparing Historical and Intuitive Explanations of Motion: Does "Naive Physics" Have a Structure. Proceedings of the Cognitive Science Society, 11. 412-420.
  • Shapere, D. (1987a). “Method in the Philosophy of Science and Epistemology: How to Inquire about Inquiry and Knowledge.” In Nersessian, N., ed. The Process of Science. Netherlands: Kluwer Academic Publisher.
  • Shapere, D. (1987b.) “External and Internal Factors in the Development of Science.” Science and Technology Studies, 1. 1–9.
  • Thagard, P. (1990). The Conceptual Structure of the Chemical Revolution. Philosophy of Science 57, 183-209.
  • Thagard, P. (1992). Conceptual Revolutions. Princeton: Princeton University Press.
  • Thagard, P. and Nowak, G. (1990). The Conceptual Structure of the Geological Revolution. In J. Shrager and P. Langley, eds. Computational Models of Scientific Discovery and Theory Formation. San Mateo: Morgan Kaufmann. 27-72.
  • Thagard, P. (1988). Computational Philosophy of Science. Cambridge: MIT Press.
  • Vosniadou, S. (2008). International Handbook of Research on Conceptual Change. London: Routledge.

ii. Feminist, Situated and Social Approaches

  • Garry, Ann and Marilyn Pearsall, eds. (1996). Women, Knowledge and Reality: Explorations in Feminist Epistemology. New York: Routledge.
  • Goldman, Alvin. (1999). Knowledge in a Social World. New York: Oxford University Press.
  • Hacking, Ian. (1999). The Social Construction of What? Cambridge: Harvard University Press.
  • Keller, Evelyn Fox and Helen Longino, eds. (1996). Feminism and Science. Oxford: Oxford University Press.
  • Kellert, Stephen H., Helen E. Longino, and C. Kenneth Waters, eds. (2006). Scientific Pluralism. Minnesota Studies in the Philosophy of Science, Volume 19. Minneapolis: University of Minnesota Press.
  • Longino, H. E. (2002). The Fate of Knowledge. Princeton: Princeton University Press.
  • Longino, H. E. (1990). Science as Social Knowledge: Values and Objectivity in Scientific Inquiry. Princeton, NJ: Princeton University Press.
  • McMullin, Ernan, ed. (1992). Social Dimensions of Scientific Knowledge. South Bend: Notre Dame University Press.
  • Ruetsche, Laura. (2004). Virtue and Contingent History: Possibilities for Feminist Epistemology. Hypatia, 19(1): 73-101.
  • Solomon, Miriam. (2001). Social Empiricism. Cambridge: Massachusetts Institute of Technology Press.

iii. The Scientific Revolution

  • Cohen, I. B., (1985). Revolution in Science, Cambridge: Harvard University Press.
  • Koyré, A. (1965). Newtonian Studies. Chicago: The University of Chicago Press.
  • Osler, Margaret (2000). Rethinking the Scientific Revolution. Cambridge: Cambridge University Press.


Author Information

Hanne Andersen
Email: hanne.andersen@ivs.au.dk
University of Aarhus


Brian Hepburn
Email: bhepburn@ivs.au.dk
University of Aarhus

Infinitism in Epistemology

This article provides an overview of infinitism in epistemology. Infinitism is a family of views in epistemology about the structure of knowledge and epistemic justification. It contrasts naturally with coherentism and foundationalism. All three views agree that knowledge or justification requires an appropriately structured chain of reasons. What form may such a chain take? Foundationalists opt for non-repeating finite chains. Coherentists (at least linear coherentists) opt for repeating finite chains. Infinitists opt for non-repeating infinite chains. Appreciable interest in infinitism as a genuine competitor to coherentism and foundationalism developed only in the early twenty-first century.

The article introduces infinitism by explaining its intuitive motivations and the context in which they arise. Next it discusses the history of infinitism, which is mostly one of neglect, punctuated by brief moments of hostile dismissal. Then there is a survey of arguments for and against infinitism.

For the most part, philosophers have assumed that knowledge requires justified belief. That is, for some proposition (statement, claim or sentence) P, if you know that P, then you have a justified belief that P. Knowledge that P thus inherits its structure from the structure of the constituent justified belief that P. If the justified belief is inferential, then so is the knowledge. If the justified belief is “basic,” then so is the knowledge. These assumptions are taken for granted in the present article.

Table of Contents

  1. Introduction
  2. Historical Discussion of Infinitism
  3. Contemporary Arguments for Infinitism
    1. The Features Argument
    2. Regress Arguments
      1. The Enhancement Argument
      2. The Interrogation Argument
    3. The Proceduralist Argument
  4. Common Objections to Infinitism
    1. The Finite Mind Objection
    2. The Proof of Concept Objection
    3. The AC/DC Objection
    4. The Unexplained Origin Objection
    5. The Misdescription Objection
  5. References and Further Reading
    1. References
    2. Further Reading

1. Introduction

We often provide reasons for the things we believe in order to justify holding the beliefs. But what about the reasons? Do we need reasons for holding those reasons? And if so, do we need reasons for holding those reasons that were offered as reasons for our beliefs? We're left to wonder:

Does this regress ever end?

Infinitism is designed to answer that question. Given that one of the goals of reasoning is to enhance the justification of a belief, Q, infinitism holds that there are two necessary (but not jointly sufficient) conditions for a reason in a chain to be capable of enhancing the justification of Q:

  1. No reason can be Q itself, or equivalent to a conjunction containing Q as a conjunct. That is, circular reasoning is excluded.
  2. No reason is sufficiently justified in the absence of a further reason. That is, there are no foundational reasons.

If both (1) and (2) are true, then the chain of reasons for any belief is potentially infinite, that is, potentially unlimited.

The reason for accepting (1), and thereby rejecting circular reasoning as probative, that is, as tending to prove, is that reasoning ought to be able to improve the justificatory status of a belief. But if the propositional content of a belief is offered as a reason for holding the belief, then no additional justification could arise. Put more bluntly, circular reasoning begs the question by positing the very propositional content of the belief whose justificatory status the reasoning is designed to enhance.

Condition (1) is generally accepted, although some coherentists seem to condone the sort of circular reasoning that it proscribes (for example, Lehrer 1997). However, these coherentists might not actually be denying (1). Rather, they might instead be claiming that it is epistemically permissible to offer a deliverance of a cognitive faculty as a reason for believing that the faculty produces justified beliefs. On this alternative reading, these coherentists don't deny (1), because (1) concerns the structure, not the source, of probative reasons. For example, suppose you employ beliefs produced by perception as reasons for believing that perception is reliable. This need not involve employing the proposition “perception is reliable” as one of the reasons.

Condition (2) is much more controversial. Indeed, denying (2) is a component of the dominant view in epistemology: foundationalism. Many foundationalists claim that there are beliefs, so-called “basic beliefs” or “foundational beliefs,” which do not require further reasons in order to function effectively as reasons for “non-basic” or “non-foundational” beliefs. Basic beliefs are taken to be sufficiently justified to serve as, at least, prima facie reasons for further beliefs in virtue of possessing some property that doesn't arise from, or depend on, being supported by further reasons. For example, the relevant foundationalist property could be that the belief merely reports the contents of sensations or memories; or it could be that the belief is produced by a reliable cognitive faculty. The general foundationalist picture of epistemic justification is that foundational beliefs are justified to such an extent that they can be used as reasons for further beliefs, and that no reasons for the foundational beliefs are needed in order for the foundational beliefs to be justified.

Infinitists accept (2) and so deny that there are foundational beliefs of the sort that foundationalists champion. The motivation for accepting (2) is the specter of arbitrariness. Infinitists grant that in fact every actually cited chain of reasons ends; but infinitists deny that there is any reason which is immune to further legitimate challenge. And once a reason is challenged, then on pain of arbitrariness, a further reason must be produced in order for the challenged reason to serve as a good reason for a belief.

In addition to denying the existence of so-called basic beliefs, infinitism takes reasoning to be a process that generates an important type of justification — call it “reason-enhanced justification.” In opposition to foundationalism, reasoning is not depicted as merely a tool for transferring justification from the reasons to the beliefs. Instead, a belief's justification is enhanced when sufficiently good reasons are offered on its behalf. Such enhancement can occur even when the reasons offered have not yet been reason-enhanced themselves. That is, citing R as a reason for Q can make one's belief that Q reason-enhanced, even though R, itself, might not yet have been reason-enhanced.

As mentioned above, infinitists reject the form of coherentism—sometimes called “linear coherentism”—that endorses question-begging, circular reasoning. But by allowing that reasoning can generate epistemic justification, infinitists partly align themselves with another, more common form of coherentism—often called “holistic coherentism”—which also accepts that reasoning can generate reason-enhanced justification (see BonJour 1985, Kvanvig 2007). As the name indicates, holistic coherentism takes epistemic justification to be a property of entire sets of beliefs, rather than of individual beliefs. Individual beliefs are justified only in virtue of their membership in a coherent set of beliefs. On this view, justification does not transfer from one belief to another, as foundationalists or linear coherentists would claim; rather, the inferential relationships among the beliefs in a set generate a justified set of beliefs, and individual beliefs are justified merely in virtue of being members of such a set. Sosa (1991, chapter 9) raises serious questions about whether holistic coherentism is ultimately just a disguised version of foundationalism; if Sosa is correct, then some of the objections to foundationalism would apply to holistic coherentism as well.

The argument pattern for infinitism employs the epistemic regress argument and, thus, infinitists defend their view in a manner similar to the way in which foundationalism and coherentism have been defended. This is the pattern:

  1. There are three possible, non-skeptical solutions to the regress problem: foundationalism, coherentism and infinitism.
  2. There are insurmountable difficulties with two of the solutions (in this case, foundationalism and coherentism).
  3. The third view (in this case, infinitism) faces no insurmountable difficulties.
  4. Therefore, the third view (in this case, infinitism) is the best non-skeptical solution to the regress problem.

2. Historical Discussion of Infinitism

The term “epistemic infinitism” was used by Paul Moser in 1984, and the phrase “infinitist's claim” was used by John Post in 1987. Both philosophers rejected infinitism.

Infinitism was well known by the time of Aristotle – and he rejected the view. The empiricist and rationalist philosophers of the 17th and 18th centuries rejected the view. Contemporary foundationalists and coherentists reject the view.

Indeed, it is fair to say that the history of infinitism is primarily a tale of neglect or rejection, with the possible exception of Charles Peirce (Aikin 2011, pp. 80–90; see also “Some Questions Concerning Certain Faculties Claimed for Man” in Peirce 1965, v. 5, bk. 2, pp. 135–155, esp. pp. 152–3). Some have questioned whether Peirce was defending infinitism (BonJour 1985, p. 232, n. 10; Klein 1999, pp. 320–1, n. 32). There has been some recent interest in infinitism, beginning when Peter Klein published the first in a series of articles defending the view (Klein 1998). But in the early 21st century it clearly remains a minority view about the structure of reasons.

Ever since Aristotle proposed objections to infinitism and defended foundationalism, various forms of foundationalism have dominated Western epistemology. For example, consider the epistemologies of the seventeenth and eighteenth centuries; this is the formative period in which modern philosophy shaped the issues addressed by contemporary epistemologists. Both the empiricists and rationalists were foundationalists, although they clearly disagreed about the nature of foundational reasons.

Consider this passage from Descartes's Meditation One, where he explains his method of radical doubt:

But in as much as reason already persuades me that I ought no less carefully to withhold my assent from matters which are not entirely certain and indubitable than from those which appear to me manifestly to be false, if I am able to find in each one some reason to doubt, this will suffice to justify rejecting the whole. And for that end it will not be requisite that I should examine each in particular, which would be an endless undertaking; for owing to the fact that the destruction of the foundations of necessity brings with it the downfall of the rest of the edifice, I shall only in the first place attack those principles upon which all my former opinions rest. (Descartes 1955 [1641], p. 145)

After producing a “powerful” reason for doubting all of his former beliefs based on his senses, Descartes begins his search anew for a foundational belief that is beyond all doubt and writes in Meditation Two:

Archimedes, in order that he might draw the terrestrial globe out of its place, and transport it elsewhere demanded only that one point should be fixed and unmovable; in the same way I shall have the right to conceive high hopes if I am happy enough to discover one thing only which is certain. (Descartes 1955 [1641], p. 149)

He then happily produces what he takes—at least at that point in the Meditations – to be that one, foundational proposition:

So that after having reflected well and carefully examined all things we must come to the definite conclusion that this proposition: I am, I exist, is necessarily true each time I pronounce it, or that I mentally conceive it. (Descartes 1955 [1641], p. 150)

Regardless of the success or failure of his arguments, the point here is that Descartes clearly takes it as given that both he and the empiricist, his intended foil, will accept that knowledge is foundational and that the first tasks are to identify the foundational proposition(s) and to uncover the correct account of the nature of the foundational proposition(s). Once that is accomplished, the second task is to move beyond it (or them) to other beliefs by means of truth-preserving inferences. The Meditations presupposes a foundationalist model of reasons without any hint of argument for foundationalism.

Now consider this passage from Hume:

In a word, if we proceed not upon some fact present to the memory or senses, our reasonings would be merely hypothetical; and however the particular links might be connected with each other the whole chain of inferences would have nothing to support it, nor could we ever, by its means arrive at the knowledge of any real existence. If I ask you why you believe a particular matter of fact which you relate, you must tell me some reason; and this reason will be some other fact connected with it. But as you cannot proceed after this manner in infinitum, you must at last terminate with some fact which is present to your memory or senses or must allow that your belief is entirely without foundation. (Hume 1955 [1748], pp. 59–60)

Setting aside an evaluation of the steps in Hume's argument for foundationalism, notice that he too simply discards infinitism with the stroke of a pen: “But as you cannot proceed after this manner in infinitum ...”. To Hume, infinitism seemed so obviously mistaken that no argument against it was needed.

So why did infinitism come to be so easily and so often rejected?

The short answer is: Aristotle. His arguments against infinitism and for foundationalism were so seemingly powerful that nothing else needed to be said. We can divide Aristotle's objections to infinitism into three types. Each pertains to the infinitist solution to the regress problem.

  • Misdescription Objection: Infinitism does not correctly describe our epistemic practices; but foundationalism does.
  • Finite Mind Objection: Our finite minds are not capable of producing or grasping an infinite set of reasons.
  • Unexplained Origin Objection: Infinitism does not provide a good account of how justification is generated and transferred by good reasoning; but foundationalism does.

We will return to Aristotle's objections below, in section 4.

3. Contemporary Arguments for Infinitism

There are three main contemporary arguments for infinitism.

a. The Features Argument

Infinitism has been defended on the grounds that it alone can explain two of epistemic justification's crucial features: it comes in degrees, and it can be complete (Fantl 2003). This argument concerns propositional justification, rather than doxastic justification. Propositional justification is a matter of having good reasons; doxastic justification is typically thought to be a matter of properly believing based on those reasons.

For purposes of this argument, understand infinitism as the view that a proposition Q is justified for you just in case there is available to you an infinite series of non-repeating reasons that favors believing Q. And understand foundationalism as the view that Q is justified for you just in case you have a series of non-repeating reasons that favors believing Q, terminating in a properly basic foundational reason “that needs no further reason.” And further suppose that infinitism and foundationalism are the only relevant non-skeptical alternatives for a theory of epistemic justification, so that if skepticism about justification is false, then either infinitism or foundationalism is true.

The features argument is based on two features of justification. First, justification comes in degrees. We can be more or less justified in believing some claim. An adequate theory of justification must respect this, and explain why justification comes in degrees. Call this the degree requirement on an acceptable theory of justification. Second, it's implausible to identify adequate justification with complete justification. Adequate justification is the minimal degree of justification required for knowledge. Complete justification is maximal justification, beyond which justification cannot be increased or strengthened. An adequate theory of justification should explain how justification could be complete. Call this the completeness requirement on an acceptable theory of justification.

Infinitism satisfies the degree requirement by pointing out that length comes in degrees, which justification may mirror. Other things being equal, the longer the series of reasons you have for believing Q, the better justified Q is for you (as long as the shorter set is a proper subset of the longer set). Infinitism can satisfy the completeness requirement by offering an account of complete justification: Q is completely justified for you just in case you have an infinite array of adequate reasons (Fantl 2003: 558). To have an infinite array of reasons favoring Q, for each potential challenge to Q, or to any of the infinite reasons in the chain supporting Q, or to any of the inferences involved in traversing any link in the chain, you must have available a further infinite series of reasons. In short, it requires having an infinite number of infinite chains.

Can foundationalism meet the degree and completeness requirements? To assess this, we need first to explain how foundationalists understand foundational reasons. Traditional foundationalists contend that foundational reasons are self-justifying, because their mere truth suffices to justify them. The claims “I am thinking” and “There is at least one proposition that is neither true nor false” are plausible candidates for self-justifying reasons. Metajustificatory foundationalists deny that the mere truth of a foundational reason ensures its foundational status. Instead, they say, foundational reasons must have some other property, call it ‘F’. Metajustificatory foundationalists disagree among themselves over what F is. Some say it is reliability, others say it is coherence, and yet others say it is clear and distinct perception or social approval. The important point to recognize is that metajustificatory foundationalism can't “require that a believer have access to the metajustificatory feature as a reason for the foundational reason,” because that would undermine its putative status as foundational (Fantl 2003: 541). It would effectively require a further reason for that which supposedly stood in no need of it.

Having divided all foundationalists into two jointly exhaustive and mutually exclusive groups, the argument against foundationalism goes like this:

  1. All foundationalist theories are either traditional or metajustificatory. (Premise)
  2. Traditional foundationalism can't satisfy the degree requirement. (Premise)
  3. Metajustificatory foundationalism can't satisfy the completeness requirement. (Premise)
  4. So no foundationalist theory can satisfy both the degree and completeness requirements (From 1–3)
  5. An adequate theory of justification must satisfy both the degree and completeness requirements. (Premise)
  6. So no foundationalist theory of justification is adequate. (From 4–5)

The argument is valid. Line 1 is trivially true, given the way the categories are defined. Line 2 is supported on the grounds that all self-justifying reasons are by definition true, and their truth justifies them. But truth doesn't come in degrees. So traditional foundationalism lacks the resources to satisfy the degree requirement. Truth isn't flexible enough.

Line 3 is supported on the grounds that the foundationalist will have to analyze complete justification along these lines:

Q is completely justified for you iff you have a non-repeating series of reasons for Q, ultimately founded on a reason that exemplifies the metajustificatory feature [F] to the highest possible degree. (Fantl 2003: 546)

But any such proposal must fail for a simple reason: no matter what F is, if you gain a reason to think that the foundational reason completely exemplifies F, and that exemplifying F is epistemically important, then Q will thereby become better justified for you. To see why, for the sake of argument suppose that we accept a reliabilist version of metajustificatory foundationalism, according to which Q is completely justified for you if and only if you have a non-repeating series of reasons for Q, ultimately founded on a perfectly reliable reason. Now if you gain a reason to believe that the reason is perfectly reliable, then Q will thereby become better justified for you. But then metajustificatory foundationalism hasn't satisfied the completeness requirement after all, because it will be possible for you to increase your justification for Q beyond what the maximal exemplification of F would allow. But this violates the definition of complete justification. So metajustificatory foundationalism can't meet the completeness requirement.

In response, foundationalists have pointed out that the reasoning in support of line 2 of the argument is undermined to the extent that a degree-theoretic conception of truth is plausible — that is, to the extent it's plausible that truth comes in degrees. Foundationalists have also responded that the supporting reasoning for line 3 overlooks the possibility of adequate justification being over-determined. The more reasons you have that independently adequately justify Q for you, the better justified Q is for you. A natural foundationalist proposal, then, is that Q is completely justified for you if and only if it is infinitely over-determined that Q is adequately justified for you (Turri 2010).

b. Regress Arguments

There are at least two regress arguments for infinitism: the enhancement argument and the interrogation argument. Each concerns a very specific epistemic status closely connected to reasons and reasoning. Neither purports to establish that infinitism is true about all interesting epistemic statuses. Although infinitists take skepticism seriously, for the purposes of these two arguments, we'll simply assume that skepticism is false.

i. The Enhancement Argument

The enhancement argument begins by asking a question (Klein 2005): What sort of reasoning could enhance the justification of a non-evident proposition, in a context where its truth has been legitimately questioned? What structural form would the reasons offered in the course of such reasoning take? We can divide all answers to that question into three groups. Enhancement coherentists answer that some repeating chains could enhance justification; enhancement foundationalists answer that no repeating chain could enhance justification, but some finite and non-repeating chains could; enhancement infinitists answer that no repeating or finite chain could enhance justification, but some infinite and non-repeating chains could.

The enhancement argument for infinitism is that neither coherentism nor foundationalism provides a satisfactory answer to the question posed, whereas infinitism does. Given that these three answers exhaust the (non-skeptical) alternatives, it follows that infinitism is the only satisfactory account of the epistemic status in question, which for convenience we can call rational enhancement of justification.

The objection to enhancement coherentism is that repeating chains are objectionably question-begging and so can't rationally enhance justification. If Corrie believes Q, and someone asks her, “Why believe Q?”, and she responds by citing a chain of reasoning that relies on Q itself, then in that context she has clearly done nothing to rationally enhance her justification for Q. Her response simply presupposes the claim in question, so how could it rationally enhance her justification?

Enhancement foundationalists claim that some reasons are special: the foundational enhancers. Foundational enhancers can rationally enhance the justification for believing other things, even though they are not rationally supported by further reasons in turn. This is why some finite chains can rationally enhance justification: a foundational enhancer appropriately terminates the affair.

The objection to enhancement foundationalism is that all finite chains are objectionably arbitrary at their terminus. Suppose that Fontana believes A, and someone asks him, “Why believe A?”, and he responds by citing some reason B. But B is not a foundational enhancer, and Fontana is in turn asked, “Why believe B?” This continues until Fontana reaches the point where he cites a reason that, according to him, is a foundational enhancer. Let Z be this purported foundational enhancer. Fontana's interlocutor presses further, “Why think that foundational enhancers are likely to be true?” In response to this last question, Fontana has three options: affirm, deny, or withhold. If he denies, then using Z as a reason is arbitrary and the reasoning can't rationally enhance A for him. If he withholds, then, from his own point of view, he should not use Z as the basis for further beliefs. If Z is not good enough to affirm in its own right, then it isn't proper to use it as a basis for affirming something else. If he affirms, then there is no immediate problem, but this is because the reasoning has continued, and what was supposed to be a foundational enhancer turned out not to be one.

Enhancement infinitism avoids the problems faced by coherentism and foundationalism. It endorses neither circular reasoning nor arbitrary endpoints.

The enhancement argument for infinitism can be understood as follows:

  1. If skepticism about rational enhancement is false, then either coherentism, foundationalism or infinitism is the correct theory of rational enhancement. (Premise)
  2. Skepticism about rational enhancement is false. (Premise)
  3. Coherentism isn't the correct theory. (Premise)
  4. Foundationalism isn't the correct theory. (Premise)
  5. So infinitism is the correct theory of rational enhancement. (From 1–4)

Line 1 is true because the way that coherentism, foundationalism and infinitism are characterized exhausts logical space. Every rationally enhancing chain is either circular or not. If it is circular, then it's a coherentist chain; if it isn't, then either it is finite or infinite. If it is finite, then it is a foundationalist chain; if it is infinite, then it is an infinitist chain. Line 2 is assumed without defense in the present context, as mentioned above. Lines 3 and 4 are defended on grounds already explained: line 3 on the grounds that circular reasoning can't rationally enhance justification, and line 4 on the grounds that arbitrary reasoning can't do so either.

ii. The Interrogation Argument

The interrogation argument concerns “the most highly prized form of true belief” (Plato, Meno, 98a), which is the sort of knowledge that human adults take themselves to be capable of and sometimes even attain (Klein 2011). More specifically, the interrogation argument concerns one of the essential requirements of this sort of knowledge, namely, full justification.

A key idea in the infinitist's discussion here is that distinctively human knowledge is distinguished by the importance of reasoning in attaining full justification: we make our beliefs fully justified by reasoning in support of them. The reasoning is partly constitutive of full justification, and so is essential to it. A mechanical calculator might know that 2+2=4, and a greyhound dog might know that his master is calling, but neither the calculator nor the greyhound reasons in support of their knowledge. Their knowledge is merely mechanical or brute. Adult humans are capable of such unreasoned knowledge, but we are also capable of a superior sort of knowledge involving full justification, due to the value added by reasoning.

The interrogation argument is motivated by a specific version of the regress problem, which emerges from an imagined interrogation. Suppose you believe that Q. Then someone asks you a legitimate question concerning the basis of your belief that Q. You respond by citing reason R1. You are then legitimately asked about your basis for believing R1. You cite reason R2. Then you are legitimately asked about your basis for believing R2. A pattern is emerging. How, if at all, can the reasoning resolve itself such that you're fully justified in believing Q? Either the process goes on indefinitely, which suggests that the reasoning you engage in is fruitless because another reason is always needed; or some reason is repeated in the process, which means that you reasoned circularly and thus fruitlessly; or at some point the reasoning ends because the last reason cited isn't supported by any other reason, which suggests that the reasoning is fruitless because it ends arbitrarily. No matter how the reasoning resolves itself, it seems, you're no better off for having engaged in it. Thus, it can seem doubtful that any reasoning will result in a fully justified belief.

This is essentially the argument given by Sextus Empiricus (1976, lines 164-170, p. 95) to motivate a version of Pyrrhonian Skepticism. What are we to make of this problem? The infinitist agrees that circular reasoning is fruitless, and that finite reasoning ends arbitrarily and so is fruitless too. However, the infinitist disagrees with the claim that reasoning that goes on indefinitely must be fruitless. Every belief is potentially susceptible to legitimate questioning, and interrogation can, in principle, go on indefinitely. You need to be able to answer legitimate questions, and so you need available to you an indefinite number of answers. Each answer is a further reason. So, far from seeming fruitless, potentially indefinitely long reasoning seems to be exactly what is needed for the reasoning to be epistemically effective and result in full justification.

The interrogation argument for infinitism can be summarized like so:

  1. Adult human knowledge requires full justification. (Premise)
  2. Full justification requires proper reasoning. (Premise)
  3. Proper reasoning requires that there be available an infinite and non-repeating series of reasons. (Premise)
  4. So adult human knowledge requires that there be available an infinite and non-repeating series of reasons. (From 1–3)

Lines 1 and 2 can be understood as stipulating the epistemic status that the infinitist is interested in, as explained above. Line 3 is defended on the grounds that (a) circular reasoning is illegitimate, and (b) finite chains won't suffice because every reason offered is potentially susceptible to legitimate interrogation, and full justification requires that an answer to every legitimate question be at least available to you. Foundationalists point to beliefs with an allegedly special foundational property F, which, it is claimed, suits them to put a definitive end to legitimate questioning. But, the infinitist responds, foundationalists always pick properties that they think are truth-conducive, and it is always, potentially at least, legitimate to ask, “Why think that reasons with the property F are truth-conducive?” Once this legitimate question is raised, the foundationalist must abandon the supposed foundational citadel, in search of further reasons. But this looks suspiciously like infinitism in disguise.

c. The Proceduralist Argument

The proceduralist argument for infinitism pertains to knowledge. It begins from the premise that knowledge is a “reflective success” (Aikin 2009). Reflective success requires succeeding through proper procedure. Proper procedure requires thinking carefully. Moreover, we can make our careful thinking explicit. To make our careful thinking explicit is to state our reasons. And for a reason to legitimately figure into our careful thinking, we must have a reason for thinking that it is true in turn.

We can encapsulate the proceduralist argument for infinitism like so:

  1. Knowledge is a reflective success. (Premise)
  2. Reflective success requires careful thinking. (Premise)
  3. Careful thinking requires the availability of an infinite series of reasons. (Premise)
  4. So knowledge requires the availability of an infinite series of reasons. (From 1–3)

Lines 1 and 2 can be understood as characterizing the sort of knowledge that the infinitist is interested in. (Aikin 2005 and 2009 strongly suggest that this is knowledge ordinarily understood, though the matter is not entirely clear.) Line 3 is defended by appeal to a guiding intuition, namely, that if you know, then you can properly answer all questions about your belief and your reasons. But in principle there are an infinite number of questions about your belief and your reasons. And no proper answer will implicate you in question-begging circularity. So, in principle you need an infinite number of answers (Aikin 2009: 57–8). If there were a proper stopping point in the regress of reasons, then beliefs at the terminus would not be susceptible to legitimate challenges from those who disagree. Your opponents would be simply mistaken for challenging you at this point. But it doesn't seem like there even is a point where your opponents must be simply mistaken for challenging you.

What about the examples featured prominently by foundationalists? For example, what about your belief that 2+2=4, or that you have a headache (when you do have one)? It can easily seem implausible that a challenge to these beliefs must be legitimate. It can easily seem that someone who questioned you on these matters would be simply mistaken. The infinitist disagrees. We always should be able to offer reasons. At the very least, careful thinking requires us to have an answer to the question, “Are our concepts of a headache or addition fit for detecting the truth in such matters?” Even if we think there are good answers to such questions, the infinitist claims, the important point is that we need those answers in order to think carefully and, in turn, gain knowledge.

Infinitism can appear counterintuitive because, as a matter of fact, we never answer very many questions about any of our beliefs, yet we ascribe knowledge to people all the time. But this is an illusion because we often carelessly attribute knowledge, or attribute knowledge for practical reasons that aren't sensitive to the attribution's literal truth.

4. Common Objections to Infinitism

a. The Finite Mind Objection

For most cases of effective reasoning, justified belief, or knowledge, infinitism requires more of us than we can muster. We have finite lives and finite minds. Given the way that we are actually constituted, we cannot produce an infinite series of reasons. So skepticism is the immediate consequence of any version of infinitism that requires us to produce an infinite series of reasons (Fumerton 1995; compare BonJour 1976: 298, 310 n. 22).

In a remark in the Posterior Analytics reflecting his general worries about regresses, Aristotle gives a reason for rejecting infinitism: “one cannot traverse an infinite series.” But if one cannot traverse an infinite series of reasons, then if infinitism is the correct account of justification, then skepticism is the correct view. We cannot traverse an infinite series of reasons because we have finite minds. It is useful to quote the passage in full, because it is also a famous passage advocating a regress argument for foundationalism.

Aristotle expresses dissatisfaction with both infinitism and question-begging coherentism, and so opts for foundationalism. He writes:

Some hold that, owing to the necessity of knowing primary premisses, there is no scientific knowledge. Others think there is, but that all truths are demonstrable. Neither doctrine is either true or a necessary deduction from the premisses. The first school, assuming that there is no way of knowing other than by demonstration, maintain that an infinite regress is involved, on the ground that if behind the prior stands no primary, we could not know the posterior through the prior (wherein they are right, for one cannot traverse an infinite series [emphasis added]); if on the other hand – they say – the series terminates and there are primary premisses, yet these are unknowable because incapable of demonstration, which according to them is the only form of knowledge.

And since thus [sic] one cannot know the primary premisses, knowledge of the conclusions which follow from them is not pure scientific knowledge nor properly knowing at all, but rests on the mere supposition that the premisses are true. The other party agree with them as regards to knowing, holding that it is possible only by demonstration, but they see no difficulty in holding that all truths are demonstrated on the ground that demonstration may be circular or reciprocal. (72b5–18)

Aristotle here focuses on “scientific knowledge” and syllogistic “demonstration.” But his remarks are no less plausible when taken to apply to all knowledge and reasoning. Aristotle himself hints at this with his comment about “knowing at all.”

The spirit of Aristotle's original finite-mind objection is alive and well in contemporary epistemology. Here is a representative example:

The [proposed] regress of justification of S's belief that p would certainly require that he hold an infinite number of beliefs. This is psychologically, if not logically, impossible. If a man can believe an infinite number of things, then there seems to be no reason why he cannot know an infinite number of things. Both possibilities contradict the common intuition that the human mind is finite. Only God could entertain an infinite number of beliefs. But surely God is not the only justified believer. (Williams 1981, p. 85)

But infinitists have been careful not to claim that we must actually produce an infinite series of reasons. Rather, they typically say that we must have an appropriately structured, infinite set of reasons available to us. Against this milder infinitist requirement, it might be worried that it is not clear we could even understand an infinite series of reasons. Being able to understand a series of reasons is required, at least on some senses of “available,” for that series to be available to us as reasons. So even this milder infinitist requirement might lead to skepticism.

b. The Proof of Concept Objection

Contrary to what was suggested at the end of the previous objection, it seems that we could understand an infinite series, provided that each element in the series was simple enough. And it doesn't seem impossible for a justificatory chain to include only simple enough elements.

Grant that it's possible that every element of an infinite series could be comprehensible to us. But what evidence is there that there actually are such series? And what evidence is there that, for at least most of the things that we justifiably believe (or most of the things we know, or most of the acceptable reasoning we engage in), there is a properly structured infinite series available to us? Unless infinitists can convincingly respond to these questions — unless they can offer a proof of concept — then it seems likely that infinitism leads to skepticism.

The objection can be made more pointed by pairing it with the finite mind objection. To handle the finite mind objection, infinitists deny that you need to actually produce the infinite series of reasons in order for your belief to be justified. Just having the reasons available, and producing enough of them to satisfy contextual demands, suffices to justify your belief. But since contextual demands are never so stringent as to demand more than, say, ten reasons, we're left with no actual example of a chain that seems a promising candidate for an infinite series (Wright 2011: section 3).

At least one example has been given of a readily available infinite chain of reasons, but ironically it is one compatible with foundationalism, offered by a foundationalist in response to infinitism (Turri 2009). (Peijnenburg and Atkinson 2011 sketch some formal possibilities and provide an analogy with heritable traits.)

c. The AC/DC Objection

For any proposition we might believe, both it and its denial can be supported by similar, appropriately structured infinite chains of reasons (Post 1980: 32–7; Aikin 2005: 198–9; Aikin 2008: 182–3). Importantly, neither chain of reasons is, in any meaningful sense, more available to us than the other. To appreciate the point, suppose you are inquiring into whether P. An infinite affirming chain could be constructed like so:

Affirmation chain (AC)

Q & (Q → P)

R & (R → (Q & (Q → P)))

S & (S → (R & (R → (Q & (Q → P)))))

whereas an infinite denial chain could be constructed like so:

Denial chain (DC)

Q & (Q → ~P)

R & (R → (Q & (Q → ~P)))

S & (S → (R & (R → (Q & (Q → ~P)))))

It is an equally long way to the top of each chain, but which is, so to speak, the road to epistemic heaven, and which the road to hell? Having one such chain available to you isn't a problem, but having both available is a touch too much (at least in non-paradoxical cases), and infinitism lacks the resources to eliminate one.

A further worry is that if infinitists embrace additional resources to eliminate one of these chains, those very same resources could in turn form the basis of a satisfactory finitist epistemology (Cling 2004: section 5). Aikin 2008 defends a version of infinitism, “impure infinitism,” intended to address this problem by incorporating elements of foundationalism; and Klein has argued that specifying the conditions for the availability of reasons will eliminate the possibility of both chains being available in non-paradoxical cases.

d. The Unexplained Origin Objection

Aristotle begins the Posterior Analytics with this statement: “All instruction given or received by way of argument proceeds from pre-existent knowledge.” And later in the Posterior Analytics, after having rejected the claims that infinitism and question-begging coherentism are capable of producing knowledge, he writes:

Our own doctrine is that not all knowledge is demonstrative; on the contrary, knowledge of the immediate premisses is independent of demonstration. (The necessity of this is obvious; for since we must know the prior premisses from which the demonstration is drawn, and since the regress must end in immediate truths, those truths must be indemonstrable.) Such, then, is our doctrine, and in addition we maintain that besides scientific knowledge there is an originative source which enables us to recognize the definitions [that is, the first principles of a science]. (72b18–24)

What is this “originative source” and how does it produce knowledge not based on reasoning? The answer is a proto-reliabilist one that relies on humans having a “capacity of some sort” (99b33) that produces immediate (non-inferential) knowledge. Although most contemporary reliabilists will not take the foundational propositions employed in demonstration to be the first principles of a science, they will take foundational beliefs to result from the operation of some capacities humans possess that do not employ conscious reasoning (Goldman 2008).

Here is Aristotle's account of the “originative source” of justified beliefs:

But though sense-perception is innate in all animals, in some perception comes to persist, in others it does not. So animals in which this persistence does not come to be have either no knowledge at all outside of the act of perceiving, or no knowledge of objects of which no impression persists; animals in which it does come into being have perception and can continue to retain the sense-impression in the soul; and when such persistence is frequently repeated a further distinction at once arises between those which out of persistence of such sense impressions develop a power of systematizing them and those which do not. So out of sense perception comes to be what we call memory, and out of frequently repeated memories of the same thing develops experience; for a number of memories constitute a single experience. From experience … originate the skill of the craftsman and the knowledge of the man of science. (99b36–100a5)

Thus, Aristotle holds that foundationalism can explain how justification can arise in basic beliefs and how it is transmitted through reasoning to non-foundational beliefs. This, he claims, contrasts with infinitism and question-begging coherentism, which have no way of explaining how justification arises. He seems to assume that reasoning cannot originate justification, but can merely transmit it. If each belief were to depend on another for its justification, then there would be no originative source, or starting point, that generates the justification in the first place.

Writing in the second century AD, Sextus Empiricus wondered how we might show that believing a proposition is better justified than the alternatives of either disbelieving it or suspending judgment. He employed the “unexplained origin objection” to reject an infinitist attempt to show how believing could be better justified, arguing that infinitism must lead to suspension of judgment.

The Mode based upon regress ad infinitum is that whereby we assert that the thing adduced as a proof of the matter proposed needs a further proof, and this another again, and so on ad infinitum, so that the consequence is suspension [of judgment], as we possess no starting-point for our argument. (1976, I: 164–9)

The unexplained origin objection remains popular today. Carl Ginet, a contemporary foundationalist, puts it this way:

A more important, deeper problem for infinitism is this: Inference cannot originate justification, it can only transfer it from premises to conclusion. And so it cannot be that, if there actually occurs justification, it is all inferential. (Ginet 2005, p. 148)

Jonathan Dancy, another contemporary foundationalist, makes a similar point:

Suppose that all justification is inferential. When we justify belief A by appeal to belief B and C, we have not yet shown A to be justified. We have only shown that it is justified if B and C are. Justification by inference is conditional justification only; A's justification is conditional upon the justification of B and C. But if all justification is conditional in this sense, then nothing can be shown to be actually non-conditionally justified. (Dancy 1985, p. 55)

e. The Misdescription Objection

In the Metaphysics, Aristotle writes:

There are … some who raise a difficulty by asking, who is to be the judge of the healthy man, and in general who is likely to judge rightly on each class of questions. But such inquiries are like puzzling over the question whether we are now asleep or awake. And all such questions have the same meaning. These people demand that a reason shall be given for everything; for they seek a starting point, and they seek to get this by demonstration, while it is obvious from their actions that they have no such conviction. But their mistake is what we have stated it to be; they seek a reason for things for which no reason can be given; for the starting point of demonstration is not demonstration. (1011a2–14)

The point of this objection is that, assuming that skepticism is false, infinitism badly misdescribes the structure of reasons supporting our beliefs, as revealed by or expressed in our actual deliberative practices. Our actual practices do not display what infinitism would predict (again, assuming that skepticism is false).

Of the three objections to infinitism presented by Aristotle, this one has gained the least traction in contemporary epistemology. This might be because it rests on two easily challenged assumptions: (i) a theory of justification can be tested by determining whether our actual deliberations meet its demands; (ii) our actual deliberations meet foundationalism's demands. Regarding (i), can we test an ethical theory by determining whether our actual behavior meets its demands? (Let us hope not!) If not, then why should we accept (i)? Regarding (ii), would a foundationalist accept the following as a foundational proposition: “The train schedule says so”? Such claims often end deliberation about when the next train departs. But it's not the sort of proposition that foundationalists have taken to be basic.

5. References and Further Reading

a. References

  • Aikin, S., 2005, “Who Is Afraid of Epistemology's Regress Problem,” Philosophical Studies 126: 191–217.
  • Aikin, S., 2008, “Meta-epistemology and the Varieties of Epistemic Infinitism,” Synthese 163: 175–185.
  • Aikin, S., 2009, “Don't Fear the Regress: Cognitive Values and Epistemic Infinitism,” Think Autumn 2009: 55–61.
  • Aikin, S., 2011, Epistemology and the Regress Problem, Routledge.
  • Aristotle, Metaphysics.
  • Aristotle, Posterior Analytics.
  • BonJour, L., 1976, “The Coherence Theory of Empirical Knowledge,” Philosophical Studies 30: 281–312.
  • Cling, A., 2004, “The Trouble with Infinitism,” Synthese 138: 101–123.
  • Dancy, J., 1985, Introduction to Contemporary Epistemology, Blackwell.
  • Descartes, R., 1955 [1641], Meditations on First Philosophy, in Philosophical Works of Descartes, trans. and ed. by E.S. Haldane and G.R.T. Ross, v. 1, Dover.
  • Fantl, J., 2003, “Modest Infinitism,” Canadian Journal of Philosophy 33: 537–62.
  • Fumerton, R., 1995, Metaepistemology and Skepticism, Rowman & Littlefield.
  • Ginet, C., 2005, “Infinitism is Not the Solution to the Regress Problem,” Contemporary Debates in Epistemology, ed. M. Steup and E. Sosa, Blackwell.
  • Goldman, A., 2008, “Immediate Justification and Process Reliabilism,” Epistemology: New Essays, ed. Q. Smith, Oxford University Press.
  • Hume, D., 1955 [1748], An Inquiry Concerning Human Understanding, ed. Charles Hendel, Bobbs-Merrill Company.
  • Klein, P., 1998, “Foundationalism and the Infinite Regress of Reasons,” Philosophy and Phenomenological Research 58: 919–925.
  • Klein, P., 1999, “Human Knowledge and the Infinite Regress of Reasons,” J. Tomberlin, ed., Philosophical Perspectives 13: 297–325.
  • Klein, P., 2005, “Infinitism is the Solution to the Regress Problem,” in M. Steup and E. Sosa, eds., Contemporary Debates in Epistemology, Blackwell.
  • Klein, P., 2012, “Infinitism and the Epistemic Regress Problem,” in S. Tolksdorf, ed., Conceptions of Knowledge, de Gruyter.
  • Lehrer, K., 1997, Self-Trust, Oxford University Press.
  • Moser, P., 1984, “A Defense of Epistemic Intuitionism,” Metaphilosophy 15: 196–209.
  • Peijnenburg, J. and D. Atkinson, 2011, “Grounds and Limits: Reichenbach and Foundationalist Epistemology,” Synthese 181: 113–124.
  • Peirce, C.S., 1965, Collected Papers of Charles Sanders Peirce, ed. Charles Hartshorne and Paul Weiss, Harvard University Press.
  • Plato, Meno.
  • Post, J., 1980, “Infinite Regresses of Justification and of Explanation,” Philosophical Studies 38: 31–52.
  • Post, J., 1984, The Faces of Existence, Cornell University Press.
  • Sextus Empiricus, 1976, Outlines of Pyrrhonism, Harvard University Press.
  • Sosa, E., 1991, Knowledge in Perspective, Cambridge University Press.
  • Turri, J., 2009, “On the Regress Argument for Infinitism,” Synthese 166: 157–163.
  • Turri, J., 2010, “Foundationalism for Modest Infinitists,” Canadian Journal of Philosophy 40: 275–284.
  • Wright, S., 2011, “Does Klein's Infinitism Offer a Response to Agrippa's Trilemma?” Synthese, DOI 10.1007/s11229-011-9884-x.

b. Further Reading

  • Atkinson, D. and J. Peijnenburg, 2009, “Justification by an Infinity of Conditional Probabilities,” Notre Dame Journal of Formal Logic 50: 183–93.
  • Coffman, E.J. and Howard-Snyder, D. 2006, “Three Arguments Against Foundationalism: Arbitrariness, Epistemic Regress, and Existential Support,” Canadian Journal of Philosophy 36.4: 535–564.
  • Klein, P., 2000, “The Failures of Dogmatism and a New Pyrrhonism,” Acta Analytica 15: 7–24.
  • Klein, P., 2003a, “How a Pyrrhonian Skeptic Might Respond to Academic Skepticism,” S. Luper, ed., The Skeptics: Contemporary Essays, Ashgate Press.
  • Klein, P., 2003b, “When Infinite Regresses Are Not Vicious,” Philosophy and Phenomenological Research 66: 718–729.
  • Klein, P., 2004a, “There is No Good Reason to be an Academic Skeptic,” S. Luper, ed., Essential Knowledge, Longman Publishers.
  • Klein, P., 2004b, “What IS Wrong with Foundationalism is that it Cannot Solve the Epistemic Regress Problem,” Philosophy and Phenomenological Research 68: 166–171.
  • Klein, P., 2005b, “Infinitism's Take on Justification, Knowledge, Certainty and Skepticism,” Veritas 50: 153–172.
  • Klein, P., 2007a, “Human Knowledge and the Infinite Progress of Reasoning,” Philosophical Studies 134: 1–17.
  • Klein, P., 2007b, “How to be an Infinitist about Doxastic Justification,” Philosophical Studies 134: 25–29.
  • Klein, P., 2008, “Contemporary Responses to Agrippa's Trilemma,” J. Greco, ed., The Oxford Handbook of Skepticism, Oxford University Press.
  • Klein, P., 2011, “Infinitism,” S. Bernecker and D. Pritchard, eds., Routledge Companion to Epistemology, Routledge.
  • Peijnenburg, J., 2007, “Infinitism Regained,” Mind 116: 597–602.
  • Peijnenburg, J., 2010, “Ineffectual Foundations: Reply to Gwiazda,” Mind 119: 1125–1133.
  • Peijnenburg, J. and D. Atkinson, 2008, “Probabilistic Justification and the Regress Problem,” Studia Logica 89: 333–41.
  • Podlaskowski, A.C. and J.A. Smith, 2011, “Infinitism and Epistemic Normativity,” Synthese 178: 515–27.
  • Turri, J., 2009, “An Infinitist Account of Doxastic Justification,” Dialectica 63: 209–18.
  • Turri, J., 2012, “Infinitism, Finitude and Normativity,” Philosophical Studies, DOI: 10.1007/s11098-011-9846-7.


Author Information

Peter D. Klein
Email: pdklein@rci.rutgers.edu
Rutgers University, New Brunswick
U. S. A.


John Turri
Email: John.turri@gmail.com
University of Waterloo

Applied Ethics

Under what conditions is an abortion morally permissible?  Does a citizen have a moral obligation to actively participate (perhaps by voting) in the democratic process of one’s nation (assuming one is living in a democracy)?  What obligations, if any, does one have to the global poor?  Under what conditions is female genital excision morally permissible?  If there are conditions under which it is morally wrong, what measures, if any, should be taken against the practice?  These are just some of the thousands of questions that applied ethicists consider. Applied ethics is often referred to as a component study of the wider sub-discipline of ethics within the discipline of philosophy. This does not mean that only philosophers are applied ethicists, or that fruitful applied ethics is only done within academic philosophy departments. In fact, there are those who believe that a more informed approach is best gotten outside of the academy, or at least certainly outside of philosophy. This article, though, will mostly focus on how applied ethics is approached by trained academic philosophers, or by those trained in very closely related disciplines.

This article first locates applied ethics as distinct from, but nevertheless related to, two other branches of ethics. Since the content of what is studied by applied ethicists is so varied, and since working knowledge of the field requires considerable empirical knowledge, and since historically the pursuit of applied ethics has been done by looking at different kinds of human practices, it only makes sense that there will be many different kinds of applied ethical research, such that an expert working in one kind will not have much to say in another. For example, business ethics is a field of applied ethics, and so too is bioethics. There are plenty of experts in one field that have nothing to say in the other. This article discusses each field, highlighting just some of the many issues that fall within each. Throughout the presentation of the different areas of applied ethics, some methodological issues continue to come up. Additionally, the other two branches of ethics are consulted in dealing with many of the issues of almost all the different fields. So, what may be a methodological worry for a business ethics issue may also be a worry for bioethical issues.

One particular kind of applied ethics that raises distinct concerns is bioethics. Whereas with other kinds of applied ethics it is usually implicit that the issue involves those who we already know to have moral standing, bioethical issues, such as abortion, often involve beings whose moral standing is much more contentious. Our treatment of non-human animals is another area of bioethical research that often hinges on what moral standing these animals have. As such, it is important that this article devote a section to the issues that arise concerning moral standing and personhood.

This article ends with a discussion of the role of moral psychology in applied ethics, and in particular how applied ethicists might appropriate social psychological knowledge for the purpose of understanding the role of emotion in the formation of moral judgments. Additionally, to what extent is it important to understand the role of culture in not only what is valued but in how practices are to be morally evaluated?

Table of Contents

  1. Applied Ethics as Distinct from Normative Ethics and Metaethics
  2. Business Ethics
    1. Corporate Social Responsibility
    2. Corporations and Moral Agency
    3. Deception in Business
    4. Multinational Enterprises
  3. Bioethics
    1. Beginning of Life Issues, including Abortion
    2. End of Life Issues
    3. Research, Patients, Populations, and Access
  4. Moral Standing and Personhood
    1. Theories of Moral Standing and Personhood
    2. The Moral Status of Non-Human Animals
  5. Professional Ethics
    1. What is a Profession?
    2. Engineering Ethics
  6. Social Ethics, Distributive Justice, and Environmental Ethics
    1. Social Ethics
    2. Distributive Justice, and Famine Relief
    3. Environmental Ethics
  7. Theory and Application
  8. References and Further Reading

1. Applied Ethics as Distinct from Normative Ethics and Metaethics

One way of categorizing the field of ethics (as a study of morality) is by distinguishing between its three branches, one of them being applied ethics. By contrasting applied ethics with the other branches, one can get a better understanding of what exactly applied ethics is about. The three branches are metaethics, normative ethics (sometimes referred to as ethical theory), and applied ethics. Metaethics deals with whether morality exists. Normative ethics, usually assuming an affirmative answer to the existence question, deals with the reasoned construction of moral principles, and at its highest level, determines what the fundamental principle of morality is. Applied ethics, also usually assuming an affirmative answer to the existence question, addresses the moral permissibility of specific actions and practices.

Although there are many avenues of research in metaethics, one main avenue starts with the question of whether or not moral judgments are truth-apt. The following will illuminate this question. Consider the following claims: ‘2+2=4’, ‘The volume of an organic cell expands at a greater rate than its surface area’, ‘AB=BA, for all A,B matrices’, and ‘Joel enjoys white wine.’ All of these claims are either true or false; the first two are true, the latter two are false, and there are ways in which to determine the truth or falsity of them. But how about the claim ‘Natalie’s torturing of Nate’s dog for the mere fun of it is morally wrong’? A large proportion of people, perhaps across cultures, will say that this claim is true (and hence truth-apt). But it’s not quite as obvious how this claim is truth-apt in the way that the other claims are truth-apt. There are axioms and observations (sometimes through scientific instruments) which support the truth-aptness of the claims above, but it’s not so clear that truth-aptness is gotten through these means with respect to the torturing judgment. So, it is the branch of metaethics that deals with this question, and not applied ethics.
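To see what determinate truth-evaluation looks like for the mathematical claims above, consider that the falsity of ‘AB=BA, for all A,B matrices’ can be established by producing a single counterexample. The following sketch is illustrative only; the particular matrices are arbitrary choices, not from the source:

```python
# A concrete counterexample showing 'AB = BA for all matrices A, B' is false.

def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]

print(matmul(A, B))  # [[2, 1], [4, 3]]
print(matmul(B, A))  # [[3, 4], [1, 2]]

# The two products differ, so matrix multiplication is not commutative.
assert matmul(A, B) != matmul(B, A)
```

No comparable decision procedure is available for the torturing judgment, which is just the contrast the paragraph above draws.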

Normative ethics is concerned with principles of morality. This branch itself can be divided into various sub-branches (and in various ways): consequentialist theories, deontological theories, and virtue-based theories. A consequentialist theory says that an action is morally permissible if and only if it maximizes overall goodness (relative to its alternatives). Consequentialist theories are specified according to what they take to be (intrinsically) good. For example, classical utilitarians considered intrinsic goodness to be happiness/pleasure. Modern utilitarians, on the other hand, define goodness in terms of things like preference-satisfaction, or even well-being. Other kinds of consequentialists will consider less subjective criteria for goodness. But, setting aside the issue of what constitutes goodness, there is a rhetorical argument supporting consequentialist theories: How could it ever be wrong to do what’s best overall? (I take this straight from Robert N. Johnson.) Although intuitively the answer is that it couldn’t be wrong to do what’s best overall, there is a plenitude of purported counterexamples to consequentialism on this point – on what might be called “the maximizing component” of consequentialism. For example, consider the Transplant Problem, in which the only way to save five dying people is by killing one person for organ transplantation to the five. Such counterexamples draw upon another kind of normative theory – namely, deontological theory. Such theories place either rights or duties as fundamental to morality. The idea is that there are certain constraints placed against persons/agents in maximizing overall goodness. One is not morally permitted to save five lives by cutting up another person for organ transplantation because the one person has a right against any person to be treated in this way.
Similarly, there is a duty for all people to make sure that they do not treat others in a way that merely makes them a means to the end of maximizing overall goodness, whatever that may be. Finally, we have virtue theories. Such theories are motivated by the idea that what’s fundamental to morality is not what one ought to do, but rather what one ought to be. But given that we live in a world of action, of doing, the question of what one ought to do creeps up. Therefore, according to such theories, what one ought to do is what the ideally virtuous person would do. What should I do? Well, suppose I’ve become the kind of person I want to be. Then whatever I do from there is what I should do now. This theory is initially appealing, but nevertheless, there are lots of problems with it, and we cannot get into them in an article like this.

Applied ethics, unlike the other two branches, deals with the questions that started this article – for example, under what conditions is an abortion morally permissible? And, what obligations, if any, do we have toward the world’s global poor? Notice the specificity compared to the other two branches. Already, though, one might wonder whether the way to handle these applied problems is by applying one of the other branches. So, if it’s the case that morality doesn’t exist (or: moral judgments are not truth-apt), then we can just say that any claims about the permissibility of abortion or global duties to the poor are not true (in virtue of not being truth-apt), and there is therefore no problem; applied ethics is finished. It’s absolutely crucial, then, that we be able to show that morality exists (that moral judgments are truth-apt) in order for applied ethics to get off the ground.

Actually, this may be wrong. It might be the case that even if we are in error about morality existing, we can nevertheless give reasons which support our illusions in specified cases. More concretely, suppose there really is no truth of the matter about the moral permissibility of abortion; that does not stop us from considering whether we should have legislation that places constraints on it. Perhaps there are other reasons which would support answers to this issue. The pursuit and discussion of these (purported) reasons would be an exercise in applied ethics. Similarly, suppose that there is no such thing as a fundamental principle of morality; this does not exclude, for one thing, the possibility of actions and practices being morally permissible or impermissible/wrong. Furthermore, suppose we go with the idea that there is a finite list of principles that comprise a theory (with no principle being fundamental). There are those who think that we can determine, and explain, the rightness/wrongness of actions and practices without this list of non-fundamental principles. (We will look at this later in this article.) If this is the case, then we can do applied ethics without an explicit appeal to normative ethics.

In summary, we should consider whether or not the three branches are as distinct as we might think they are. Of course, the principal questions of each are distinct, and as such, each branch is in fact distinct. But it appears that in doing applied ethics one must (or less strongly, may) venture into the other two branches. Suppose that one wants to come to the conclusion that our current treatment of non-human animals, more specifically our treatment of chickens in their mass production in chicken warehouses, is morally impermissible. Then, if one stayed away from consequentialist theories, one would have either a deontological or virtue-based theory with which to approach this issue. Supposing one dismissed virtue theory (on normative ethical grounds), one would then approach the issue from deontology. Suppose further that one chose a rights-based theory. Then one would have to defend the existence of rights, or at least appeal to a defense of rights found within the literature. What reasons do we have to think that rights exist? This looks like a metaethical question. As such, even before being able to address the issue of whether we are doing right by chickens in our industrialized slaughtering of them, we have to do some normative ethics and metaethics. Yes, the three branches are distinct, but they are also related.

2. Business Ethics

Some people might think that business ethics is an oxymoron. How can business, with all of its shady dealings, be ethical? This is a view that can be taken even by well-educated people. But in the end, such a position is incorrect. Ethics is a study of morality, and business practices are fundamental to human existence, dating back at least to agrarian society, if not to pre-agrarian existence. Business ethics, then, is a study of the moral issues that arise when human beings exchange goods and services, where such exchanges are fundamental to our daily existence. Not only is business ethics not oxymoronic; it is important.

a. Corporate Social Responsibility

One important issue concerns the social responsibility of corporate executives, in particular those taking on the role of a CEO. In an important sense, it is stockholders, and not corporate executives (via their role as executives), who own a corporation. As such, a CEO is an employee, not an owner, of a corporation. And who is their employer?  The stockholders. Who are they, the CEO and other executives, directly accountable to?  The board of directors, representing the stockholders. As such, there is the view taken by what’s called stockholder theorists, that the sole responsibility of a CEO is to do what the stockholders demand (as expressed by the collective decision of the board of directors), and usually that demand is to maximize profits. Therefore, according to stockholder theory, the sole responsibility of the CEO is to, through their business abilities and knowledge, maximize profit. (Friedman, 1967)

The contesting viewpoint is stakeholder theory. Stakeholders include not just stockholders but also employees, consumers, and communities. In other words, anyone who has a stake in the operations of a corporation is a stakeholder of that corporation. According to stakeholder theory, a corporate executive has moral responsibilities to all stakeholders. Thus, although some corporate ventures and actions might maximize profit, they may conflict with the demands of employees, consumers, or communities. Stakeholder theory very nicely accounts for what some might consider to be a pre-theoretical commitment – namely, that an action should be assessed in terms of how it affects everyone affected by it, not just a select group based on something morally arbitrary. Stakeholder theorists can claim that the stakeholders are everyone affected by a business’s decision, and not just the stockholders. To consider only stockholders is to focus on a select group based on something that is morally arbitrary.

There are at least two problems for stakeholder theory worth discussing. First, as was mentioned above, there are conflicts between stockholders and the rest of the stakeholders. A stakeholder account has to handle such conflicts, and there are various ways of doing so. For example, some theorists take a Rawlsian approach, by which corporate decisions are to be made in accordance with what will promote the interests of the least well-off. (Freeman, 2008) Another kind of Rawlsian approach is to endorse the use of the veil of ignorance without appeal to the Difference Principle, whereby it might result that what is morally correct is actually more in line with the stockholders (Dittmer, 2010). Additionally, there are other decision-making principles to which one could appeal in order to resolve conflict. Such stakeholder theories will then be assessed according to the plausibility of their decision-making principles (for resolving conflict) and their ability to achieve intuitive results in particular cases.

Another challenge for some stakeholder theories is their ability to make metaphysical sense of such entities as community, as well as to make sense of what it is to potentially affect a group of people. If a corporate decision is criticized in terms of its affecting a community, then we should keep in mind what is meant by community. It is not as if there is an actual person that is a community. As such, it is hard to understand how a community can be morally wronged, as a person can be wronged. Furthermore, if the decisions of a corporate executive are to be measured according to stakeholder theory, then we need to be clearer about who counts as a stakeholder. There are plenty of products and services that could potentially affect a number of people that we might not initially consider. Should such potential people be counted as stakeholders? This is a question to be considered by stakeholder theorists. Stockholder theorists could even use this question as a rhetorical push for their own theory.

b. Corporations and Moral Agency

In the media, corporations are portrayed as moral agents: “Microsoft unveiled their latest software”, “Ford morally blundered with their decision not to refit their Pinto with the rubber bladder design”, and “Apple has made strides to be the company to emulate” are the types of comments heard on a regular basis. Independently of whether or not these claims are true, each of these statements relies on there being such a thing as corporate agency. More specifically, given that intuitively corporations do things that result in morally good and bad outcomes, it makes sense to ask whether such corporations are the kind of entities that can be moral agents. For instance, take an individual human being of normal intelligence. Many of us are comfortable with judging her actions as morally right or wrong, and also with holding onto the idea that she is a moral agent, eligible for moral evaluation. The question relevant to business ethics is: Are corporations moral agents? Are they the kind of thing capable of being evaluated in such a way as to determine whether they are morally good or bad?

There are those who do think so. Peter French has argued that corporations are moral agents. It is not just that we can evaluate such entities as shorthand for the major players involved in corporate practices and policies. Instead, there is a thing over and above the major players, which is the corporation, and it is this thing that can be morally evaluated. French postulates what is called a “Corporate Internal Decision Structure” (CID structure), whereby we can understand a corporation, over and above its major players, as a moral agent. French astutely observes that any being that is a moral agent has to be capable of intentionality – that is, the being has to have intentions. It is through the CID structure that we can make sense of a corporation as having intentions, and as such as being a moral agent (French, 1977). One intuitive idea driving CID structures as supporting the intentionality of corporations is that there are rules and regulations within a corporation that drive it to make decisions that no one individual within it can make. Certain decisions might require either majority or unanimous approval of all individuals recognized in the decision-making process. Those decisions are then a result of the rules regulating what is required for a decision, and not the go-ahead of any particular individual. As such, we have intentionality independent of any particular human agent.

But there are those who oppose this idea of corporate moral agency. Now, there are various reasons one might oppose it. In being a moral agent, it is usually granted that one then gets to have certain rights. (Notice here a metaethical and normative ethical issue concerning the status of rights and whether or not to think of morality in terms of rights respect and violation.)  If corporations are moral agents with rights, then this might allow for too much moral respect for corporations. That is, corporations would be entities that would have to have their rights respected, in so far as we're concerned with following the standard thoughts of what moral agency entails – that is, having both obligations and rights.

But there are also more metaphysical reasons supporting the idea that corporations are not moral agents. For example, John Danley gives various reasons, many of them metaphysical in nature, against the idea that corporations are moral agents (Danley, 1980). Danley agrees with French that intention is a necessary condition for moral agency. But is it a sufficient condition?  French sympathizers might reply that even if it is not a sufficient condition, its being a necessary condition gives reason to believe that in the case of corporations it suffices. Danley can then be interpreted as responding to this argument. He gives various considerations under which theoretically defined intentional corporations are nevertheless not moral agents. In particular, such corporations fail to meet other conditions intuitively present in other moral agents, namely most human beings. Danley writes, “The corporation cannot be kicked, whipped, imprisoned, or hanged by the neck until dead. Only individuals of the corporation can be punished” (Danley, 1980). Danley then considers financial punishments. But then he reminds us that it is individuals who have to pay the costs. It could be the actual culprits – the major players. Or it could be the stockholders, through loss of profits or perhaps the downfall of the company. Or, furthermore, it could be employees losing their jobs; so, innocents may be affected.

In the literature, French does reply to Danley, as well as to the worries of others. Certainly, there is room for disagreement and discussion. Hopefully, it can be seen that this is an important issue, and that room for argumentative maneuver is possible.

c. Deception in Business

Deception is usually considered to be a bad thing, in particular something that is morally bad. Whenever one is being deceptive, one is doing something morally wrong. But this kind of conventional wisdom could be questioned. In fact, it is questioned by Albert Carr in his famous piece “Is Business Bluffing Ethical?”  (Carr, 1968). There are at least three arguments one can take from this piece. In this section, we will explore them.

The most obvious argument is his Poker Analogy Argument. It goes something like this:  (1) Deception in poker is morally permissible, perhaps morally required. (2) Business is like poker. (3) Therefore, deception in business is morally permissible. Now, obviously, this argument is overly simplified, and certain modifications should be made. In poker, there are certain things that are not allowed; you could be in serious trouble if you were found doing them. So, for example, sliding winning cards into the mix would not be tolerated. As such, we can grant that such sliding would not be morally permissible. Similarly, any kind of business practice that would count as sliding according to Carr's analogy would also not be permissible.

But there are some obviously permitted kinds of deception involved in poker, even if they are disliked by the losing parties. Similarly, there will be deceptive practices in business that, although disliked, will be permitted. Here is one objection, though. Whereas the loser of deception in poker is a player, the losers of deception in business are a wide group of people. Whether we go with stockholder theory or stakeholder theory, we are going to have losers/victims who had nothing to do with the poker-like, deceptive playing of the corporate executives. Employees, for example, could lose their jobs because of the deception of corporate executives of competing companies, or the bad deception of their own companies. Here is a response, though:  when one is involved in corporate culture, as an employee for example, one takes on the gamble that the corporate executives take on. There are other ways to respond to this charge as well.

The second reason one might side with Carr's deception thesis is based on a metatheoretical position. One might take the metaethical position that moral judgments are truth-apt, but that they are uniformly false. So, we might think that a certain action is morally wrong when in fact there is no such thing as moral wrongness. When we make claims condemning a moral practice, we are saying something false. As such, condemning deception in business is really just saying something false, as all moral judgments are false. The way to reply to this worry is through a metaethical route, where one argues against such a theory, which is called Error Theory.

The third reason one might side with Carr is via what appears to be a discussion, on his part, of the difference between ordinary morality and business morality. Yes, in ordinary morality, deception is not morally permissible. But in business morality, it is not only permissible but also required. We are misled in judging business practices by the standards of ordinary morality, and so deception in business is in fact morally permissible. One response is this:  following Carr's lead, one is to divide one's life into two significant components, spending one's professional life in a way that involves deception, but then spending the rest of one's life, day by day, in a way that is not deceptive with family and friends outside of work. This kind of self looks very much like a divided self, a self that is conflicted and perhaps tyrannical.

d. Multinational Enterprises

Business is now done globally. This does not just mean the trivial point that goods and services are exchanged between nations. Instead, it means that goods and services are often produced in one nation (often an underdeveloped one) for exchange between nations that take no part in the production of those goods and services.

There are various ways to define multinational enterprises (MNEs). Let us consider this definition, though: an MNE is a company that produces at least some of its goods or services in a nation that is distinct from (i) where it is located and (ii) its consumer base. Nike would be a good example of an MNE. The existence of MNEs is motivated by the fact that in other nations, an MNE can produce more at lower cost, usually because in such nations wage laws are either absent or such that paying employees there costs much less than in the home nation. As a hypothetical example, a company could either pay 2000 employees $12/hr for production of its goods in its own country, or it could pay 4000 employees $1.20/hr in a foreign country. The cheaper alternative is employment in the foreign country. Suppose an MNE goes this route. What could morally defend such a position?
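The arithmetic behind this hypothetical can be made explicit. The sketch below uses only the illustrative figures from the example above (2000 workers at $12/hr versus 4000 workers at $1.20/hr); the function name and the choice to work in integer cents are assumptions of the sketch, not part of the example.

```python
# Compare total hourly labor costs for the hypothetical MNE above.
# Figures are the article's illustrative numbers, not real data.
# Working in integer cents avoids floating-point rounding issues.

def hourly_labor_cost_cents(workers: int, wage_cents: int) -> int:
    """Total wages paid per hour of production, in cents."""
    return workers * wage_cents

domestic = hourly_labor_cost_cents(2000, 1200)  # 2000 workers at $12.00/hr
foreign = hourly_labor_cost_cents(4000, 120)    # 4000 workers at $1.20/hr

print(domestic // 100)  # 24000 dollars per hour
print(foreign // 100)   # 4800 dollars per hour
# Twice the workforce abroad still costs one fifth as much per hour.
```

Even doubling the workforce, the foreign option is dramatically cheaper, which is why the example treats it as the economically obvious choice and asks only whether it can be morally defended.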

One way to defend the MNE route is by citing empirical facts concerning the average wages of the producing nation. If, for example, the average wage is $.80/hr, then one could say that such jobs are justified in virtue of providing opportunities to earn higher wages than otherwise. To be concrete, $1.20 is more than $.80, and so such jobs are justified.

There are at least two ways to respond. First, one might cite the wrongness of relocating jobs from the home nation to the other nation. This is a good response, except that it does not do well in answering a pre-theoretical commitment concerning fairness:  Why should those in a nation receiving $12/hr be privileged over those in a nation receiving $1.20/hr?  Why do the $12/hr people count more than the $1.20/hr people?  Notice that utilitarian responses will have to deal with how the world could be made better (and not necessarily morally better). Second, one might take the route of Richard Miller. He proposes that the $1.20/hr people are being exploited, and not because they are worse off than they would otherwise be. He agrees that they are doing better than they would otherwise ($1.20/hr is better than $.80/hr). It is just that the cheapness of their labor is determined according to what they would get otherwise. They should not be offered such wages, because doing so exploits their vulnerability of already having to work for unjust compensation; being compensated with a better wage than the wage they would get under unjust conditions does not mean that the better wage is just (Miller, 2010).

3. Bioethics

Bioethics is a very exciting field of study, filled with issues concerning the most basic concerns of human beings and their close relatives. In some sense, the term bioethics is a bit ridiculous, as almost anything of ethical concern is biological, and certainly anything that is sentient is of ethical concern. (Note that with silicon-based sentient beings, what I say is controversial, and perhaps false.)  Bioethics, then, should be understood as a study of morality as it concerns biological issues and facts about ourselves and our close relatives – for example, almost any non-human animal that is sentient. This part of the article is divided into three sections: beginning of life issues, including abortion; end of life issues, for example euthanasia; and finally, ethical concerns in medical research, as well as the availability of medical care.

a. Beginning of Life Issues, including Abortion

All of the beginning of life issues are contentious. There are four for us to consider:  abortion, stem cell procurement and research, cloning, and future generations. Each of these big issues (each could be considered a research field in itself) is related to the others.

Let us start with abortion. Instead of asking 'Is abortion morally permissible?' a better question will be 'Under what conditions is an abortion morally permissible?'. In looking at the conditions surrounding a particular abortion, we are able to get a better understanding of all of the possibly morally relevant considerations in determining permissibility/impermissibility. Now, this does not exclude the possibility of a position where all abortions are morally wrong. It's just that we have to start with the conditions, and then proceed from there.

Up until just 40 or so years ago, the conventional wisdom, at least as displayed in the academic literature, was that so long as a fetus is a person (or counts morally), it would be morally wrong to abort it. Judith Thomson challenged the received wisdom by positing a number of cases that would show, at least as she argued, that even if the fetus is a person, with all of the rights we would confer on any other person, it would still be permissible to abort under certain conditions (Thomson, 1971). So, for example, with her Violinist Case, it is permissible for a pregnant woman to abort a fetus under the circumstance that she was raped, even granting that the aborted fetus is a full-fledged person. Three remarks should be made here. First, there are those who have questioned whether her case actually establishes this very important conclusion. Second, it should be recognized that it is not completely clear what all of the points are that Thomson is making with her Violinist Case. Is she saying something fundamentally about the morality of abortion?  Or is she saying something fundamentally about the nature and structure of moral rights?  Or both?  Minimally, we should be sensitive to the fact that Thomson is saying something important, even if false, about the nature of moral rights. Third, and this is very important, Thomson's Violinist Case, if successful, only shows the permissibility of abortion in cases where the pregnant woman was raped – where conception occurred due to non-consensual sex. But what about consensual sex?

Thomson does have a way to answer this question. She continues in her essay with another case, called Peopleseeds (Thomson, 1971). Imagine a woman (or, perhaps, a man) who enjoys her days off in her house with the windows open. It just so happens that she lives in a world in which there are things called peopleseeds, such that if they make their way into a house's carpet, they will take root and eventually develop, unless uprooted, into full-fledged people (perhaps only human infants). Knowing this, she takes precautions and places mesh screens in her windows. Nevertheless, there are risks: it is possible, and has been documented, that seeds come through such screens. Because she enjoys Saturdays with her windows open, she leaves a window open, thereby eventually allowing a seed to take root, and there she has a problematic person growing. She then decides to uproot the seed, thereby killing the peopleseed. Has she done anything wrong?  Intuitively, the answer is no. Therefore, even in cases of pregnancy due to consensual sex, and with the consideration that the fetus is a person, it is morally permissible to abort it. It is interesting, though, that very little has been said about this case in the literature; or at least, very little has caught on in such a way as to be reflected in more basic bioethics texts. One way to question Thomson here is by noting that she is having us consult our intuitions about a world whose biological laws are different from ours; it is just not the case that we live in a world (universe) where this kind of fetal development can happen. Perhaps in a world in which this can occur, it would be considered morally wrong by the inhabitants of that world to kill such peopleseed fetuses. Or maybe not. It is, minimally, hard to know.

Thomson's essay is revolutionary, groundbreaking, more-than-important, and perhaps “true”. What is so important about it is the idea of arguing for the permissibility of abortion even with fetuses being considered persons, just like us. There are others who significantly expand on her approach. Frances Kamm, for example, does so in her Creation and Abortion, a sophisticated deontological approach to abortion. Kamm notices certain problems with Thomson's argument, but then offers various reasons which would support the permissibility of aborting. She takes into consideration such things as third-party intervention and morally responsible creation (Kamm, 1992).

Note that I have mentioned Kamm's deontological approach, where the rights and duties of those involved matter. Note also that with a utilitarian approach, such things as rights and duties are going to be missing, and if they are there, it is only in terms of understanding what will maximize overall goodness/utility. According to utilitarianism, abortion is to be settled according to whether policies for or against it maximize overall goodness/utility. There is a third approach, though. This approach draws from the third major kind of ethical theory, namely virtue theory. In general, virtue theory says that an action is morally permissible if and only if it is what an ideally virtuous person would do. Such a theory sounds very intuitive. Rosalind Hursthouse argues that it is through virtue theory that we can best understand the issues surrounding abortion. She, I think controversially, asks questions about the personal state under which a woman becomes pregnant. It is from her becoming-pregnant state that we are to understand whether her possible abortion is morally permissible. Perhaps a more generous reading of Hursthouse is that we need to understand where a woman is in her life to best evaluate whether or not an abortion is morally appropriate for her (Hursthouse, 1991).

There are, of course, the downright naysayers to abortion. Almost all take the position that all fetuses are persons, and thereby, aborting a fetus is tantamount to (wrongful) murder. Any successful position should take on Thomson's essay. Some, though, might bypass her thoughts, and just say that abortion is the killing of an innocent person, and any killing of an innocent person is morally wrong.

Let us end, though, with a discussion of an approach against abortion that allows the fetus not to be a person, and not to have any (supposed) moral standing. This is clever: whereas Thomson's argument attempts to show that aborting a person is permissible, this approach attempts to show that aborting a non-person is impermissible. We see very quickly, though, that this argument is different from the potentiality argument against abortion. The potentiality argument says that some x is a potential person, and therefore aborting it is wrong because had x not been aborted, it would eventually have become a person. This argument, on the other hand, does not appeal to potentiality, and furthermore does not assume that the fetus is a person. Don Marquis argues that aborting a fetus is wrong on the same grounds that explain the wrongness of any killing of persons. Namely, what is wrong with killing a person?  It is that in killing a person, the person is deprived of a future life. A future life contains quite a bit, including, in general, joy and suffering. In killing a fetus via abortion, one is depriving it of a future life, even if it is not a person. Its future life is just like ours; it contains joy and suffering. By killing it, you are depriving it of the same things we are deprived of if we are killed. The same explanation of why it is wrong to kill us applies to fetuses; therefore, it is wrong to abort in almost all cases (with some exceptions) (Marquis, 1989).

Another beginning of life issue is stem cell research. Stem cell research is important because it provides avenues for the development of organs and tissues that can be used to replace those that are diseased in those suffering from certain medical conditions; in theory, an entire cardiac system could be generated through stem cells, as well as through all of the research on stem cells required to eventually produce successful organ systems. There are various routes by which stem cell lines can be procured, and this is where things get controversial. First, though, how are stem cells produced, in the abstract?  Answering this question first requires specifying what is meant by stem cells. Stem cells are undifferentiated cells, ones that are pluripotent – more colloquially, ones that can divide and eventually become any of a number of different kinds of cells – for example, blood cells, nerve cells, and cells specific to kinds of tissues, such as muscle, heart, stomach, intestine, prostate, and so forth. A differentiated, non-pluripotent cell is no good for producing pluripotent cells; such a cell is not a candidate for stem cell lines.

And so, how are stem cells produced, abstractly?  Stem cells, given that they must come from a clump of human matter containing undifferentiated cells, are extracted from an embryo – a cluster of cells of both the differentiated and undifferentiated (stem cell) sort. The undifferentiated, pluripotent cells are extracted from the embryo in order to then be specialized into a number of different kinds of cells – for example, cells developing into cardiac tissue. Such extraction amounts to the destruction of the clump of human matter – that is, the destruction of the human embryo – and some claim this is tantamount to murder. More mildly, one could condemn such stem cell procurement as an unjustified killing of something that morally counts. Now, it is important to note that such opponents of stem cell line procurement, in the way characterized, will note that there are alternative ways to get stem cell lines. They will point out that we can get stem cells from already existing adult cells, which are differentiated and non-pluripotent. There are techniques that can then “non-specialize” them back into a pluripotent, undifferentiated state, without having to destroy an embryo for the procurement of stem cells; basically, we can get the stem cells without having to kill something – an embryo – that counts morally.

There are some very good responses to opponents of stem cell procurement in the typical (embryo-destroying) manner. Typically, proponents will resort to the idea that such destruction is merely the destruction of something that does not morally count. The idea is that embryos, at least of the kind that are used and destroyed in getting stem cells, are not the kind of thing that morally counts. Such embryos are very early stage embryos, comparable to the kinds of embryos one would find in the early stages of the first trimester of a natural pregnancy.

There are other considerations that proponents of typical stem cell procurement will appeal to. For example, they might give a response to certain slippery slope arguments against (typical) stem cell procurement (Holm, 2007). The main kind of slippery slope argument against stem cell research is that if we allow such procurement and research, then this leaves open the door to the practice of the cloning of full-scale human beings. A rather reasonable way of responding to this worry is two-fold:  If the cloning of full-scale human beings is not problematic, then this is not a genuine slippery slope as, in the words of one author, “there is no slope in the first place” (Holm, 2007). The idea is that, all other things equal, human cloning is not morally problematic, and there is therefore no moral worry about stem cell procurement causing human cloning to come about, as human cloning is not a morally bad thing. But suppose that human cloning (on a full scale) is morally problematic. Then proponents of stem cell procurement will then need to give reasons why stem cell procurement and research won't cause/lead to human cloning, and there are plausible, but still controversial, reasons that can be given to support this defense. To summarize, there is a slope, but it is not slippery (Holm, 2007).

A third beginning of life issue, which follows quite nicely from the previous discussion, is that of human cloning. There are those who argue that human cloning is wrong, for various reasons. One could first go the repugnance route: it is repugnant to create human beings this way. One way to respond is by noting that it certainly would be different, at least for a period of time, but that such difference, perhaps resulting in the feeling of repugnance, is by itself no reason to think that the practice of human cloning is morally wrong. Furthermore, one might say that with any kind of moral progress, feelings of repugnance in some of the population do occur, but that such repugnance is just an effect of moral change; if the moral change is actual progress, then such repugnance is merely the reaction to a change which is actually morally good.

Another way in which cloning may be criticized is that it could lead to a Brave New World scenario. By cloning, we would be controlling people's destinies in such a way that what we get is a dystopian result. The best response to this is that such a worry relies on a kind of genetic reductionism which is false. Are we merely the product of our genetic composition?  No. There are plenty of early childhood factors, as well as cultural and social factors in general, which explain the kind of people we are by the time we are adults. Of course, a Brave New World scenario is possible, but its possibility is best understood in terms of all of the cultural and social factors that would have to be present to produce the complacent and brain-dead people characterized in the book; they are not born that way – they are socialized that way. The mere genetic replication of people through cloning should be less of a worry, given that there are so many other factors, social ones, that are relevant in explaining adult behavior.

A third way to criticize human cloning is that it closes the open future of the resulting clone. By cloning a person, P1, we create P2. Given that P1 has lived, say, 52 years, P2 then has knowledge of what her life will be like over the next 52 years. Suppose that the 52-year-old writes a very self-honest autobiography. Then P2 can now read how her life will unfold. Once again, this objection to cloning relies on a very ridiculous way of looking at the narrative of a human life; it requires a very, very strong kind of genetic reductionism, and it flies in the face of the results of twin studies. (Note that a human clone is, biologically, a delayed human twin.)  So, the response to the open future objection can be summed up as follows: a human clone might have their future closed, but only in whatever sense anyone else's future is closed, which would require extensive knowledge of the social, cultural, and economic circumstances of their future life. Given that these things are very unpredictable, for clones as for everyone else, it is safe to say that human clones will not have knowledge of how their lives will unfold; as such, they, just like anyone else, have an open future.

b. End of Life Issues

This section is primarily devoted to issues concerning euthanasia and physician-assisted suicide. There are of course other issues relevant to the end of life – for example, issues surrounding consent, often examined through the status of such things as advance directives, living wills, and DNR orders – but for limits of space, we will only look at euthanasia and physician-assisted suicide. It will be very important to get a clear idea of what is meant by euthanasia and suicide, and of their various kinds. First, we can think of euthanasia as the intentional killing of another person, where the intention is to benefit that person by ending their life, and where ending it does, in fact, benefit them (McMahan, 2002). Furthermore, we can distinguish between voluntary, involuntary, and non-voluntary euthanasia. Voluntary is where the person killed consents to it. Involuntary is where the person actively expresses that they do not give their consent, or where consent was possible but they were not asked. Non-voluntary is where consent is not possible – for example, the person is in a vegetative state. Another distinction is active versus passive euthanasia. Active euthanasia involves doing something to the person which then ends their life, for example, shooting them, or injecting them with a lethal dose. Passive euthanasia involves denying the person assistance or treatment that they would otherwise need to live. Here is an example that should illustrate the difference: smothering a person with a pillow would be active, even if it technically denies them something they need to live – that is, oxygen. Discontinuing a breathing device, by unplugging the person from it, would be passive.

Suicide is the act of a person taking their own life. Most of the ways we speak and think of suicide treat it as non-assisted. But suppose that you have a friend who wants to end her own life but lacks the financial and technical means to do it in a way that she believes is as painless and certain as possible. If you give her money and knowledge of how to end her life in this way, then you have assisted her in her suicide. Physicians are well placed to assist others in ending their lives. Already, one can see how the distinction between physician-assisted suicide and voluntary active euthanasia can get rather blurred. (Imagine a terminally ill person whose condition is so extreme and debilitating that the only thing they can do to take part in the ending of their life is to press a button that injects a lethal dose, but where the entire killing device is set up, in both design and construction, by a physician. Is this assisted suicide or euthanasia?)

Although as far as I know, no surveys have been done to support the following claim, one might think that the following is plausible:  Involuntary active euthanasia is the most difficult to justify, with non-voluntary active euthanasia following, and with voluntary active euthanasia following that; then it goes involuntary passive, non-voluntary passive, and then voluntary passive euthanasia in order from most difficult to least difficult to justify. It is difficult to figure out where physician-assisted suicide and non-assisted suicide would fit in, but it's plausible to think that non-assisted suicide would be the easiest to justify, where this becomes trivially true if the issue is in terms of what a third party may permissibly do.

It appears, then, that, minimally, it is more difficult to justify active euthanasia than passive. Some authors, however, have contested this. James Rachels gives various reasons, but perhaps the best two are as follows. First, in some cases, active euthanasia is more humane than passive. For example, if the only way to end the life of a terminally ill person is by denying them life-supporting measures, perhaps by unplugging them from a feeding tube, where it will take weeks, if not months, for them to die, then this seems less humane, and perhaps outright cruel, in comparison to just injecting them with a lethal dose. Second, Rachels thinks of the distinction between active and passive euthanasia as being based on the distinction between killing and letting die. Now, this way of basing the distinction might be placed under scrutiny – recall that we earlier drew the distinction in terms of actively doing something that ends a life versus withholding life-assisting measures, as opposed to killing someone versus merely letting them die (Rachels, 1975). But suppose that we go with Rachels in letting the killing versus letting die distinction ground the distinction between active and passive euthanasia. Then consider Rachels' example as challenging the moral weight of the distinction between killing and letting die. Case 1 – A husband decides to kill his wife, and does so by placing a lethal poison in her red wine. Case 2 – A husband decides to kill his wife, and as he is walking into the bathroom to hand her the lethally dosed glass of wine, he notices her drowning in the bathtub, and lets her drown. In case 1, the husband kills his wife; in case 2, he merely lets her die. Does this mean that what he has done in case 2 is morally better?  Perhaps we might even think that in case 2 the husband is even more morally sinister.

Although voluntary active euthanasia appears difficult to justify, it has its proponents. McMahan is one such proponent, and he gives a rather sophisticated, incremental argument for its permissibility. The argument starts from the claim that rational suicide is permissible, where rational suicide is ending one's life when one believes that one's life is not worth living and it is in fact the case that one's life is not worth living. McMahan then takes the next "increment" and discusses conditions under which it would be permissible for a physician to aid someone in their rational suicide, perhaps by assisting them in the removal of their life support system; here, physician-assisted passive suicide is permissible. But then why would assisted passive suicide be permissible while assisted active suicide is impermissible?  As McMahan argues, there is no overriding reason why this should be so. In fact, there are good reasons to think assisted active suicide is permissible. First, people often commit suicide actively, not passively, because they want to exercise control over how their lives end. Second, because one does not want to risk a failed suicide attempt, which could result in pain, humiliation, and disfigurement, one might best meet one's goal of death with the assistance of another, in particular a physician. Finally, granting that physician-assisted active suicide is permissible, McMahan takes the next step to the permissibility of voluntary active euthanasia. Suppose it is permissible for a physician to design and construct an entire system in which the person ending their life need only press a button. If instead the physician presses the button, then this is no longer assisted suicide but active euthanasia. As McMahan urges, how can it be morally relevant who presses the button, just so long as consent and intention are the same?
Furthermore, McMahan points out that some people will be so disabled by a terminal illness that they will be unable to press the button. Because they cannot physically end their lives by physician-assisted active suicide, their only remaining option would be deemed impermissible if voluntary active euthanasia is deemed impermissible, and yet those who could end their own lives would still have a "permissible option" open and available to them. On grounds of something like fairness, then, there is a further consideration speaking for the permissibility of voluntary active euthanasia, just so long as physician-assisted active suicide is permissible (McMahan, 2002, 458-460).

c. Research, Patients, Populations, and Access

Access to, and quality of, health care is a very real concern. A good health care system rests on a number of things, one being medicine and delivery systems based on research. But research requires, at least to some extent, the use of human subjects, and so ethical concerns arise here. Furthermore, certain populations of people may be more vulnerable to risky research than others, which raises another category of moral concern. There is also a basic question about how to finance such health care systems; this concern is addressed in the sixth main section of this article, on social ethics and issues of justice.

First, let's start with randomized clinical trials (RCTs). In an RCT, participants do not know whether they are receiving the promising (but not yet certified) treatment for their condition. Informed consent is usually obtained and assumed in addressing the ethicality of RCTs. Notice, though, that if the promising treatment is life-saving, and the standard treatment received by the control group is inadequate, then there is a basis for criticizing RCTs. The idea is that those in the control group could have been given the experimental, promising, and successful treatment, thereby most likely having their condition successfully treated and, in the case of terminal diseases, their lives saved. Opponents can characterize RCTs in these cases as condemning someone to death arbitrarily, since those in the experimental group had a much higher likelihood of living or being treated. Proponents of RCTs have at least two ways of responding. First, they could appeal to the modified kind of RCT designed by Zelen, in which those assigned to the control group know they are in that group and can opt out. A second, more direct, way of responding is to acknowledge that there is an apparent unfairness in RCTs, but to argue that RCTs must be used in order to garner scientifically valid results. Given that scientifically valid results here have large social benefits, the practice of using them is justified. Furthermore, those in control groups are not made worse off than they would otherwise be: if the only way even to have access to such promising, experimental treatments is through RCTs, then those assigned to control groups have not been made worse off – they have not been harmed (for interesting discussions, see Hellman and Hellman, 1991 and Marquis, 1999).

Another case, affecting large numbers of people, is this: certain medications can be tested on one population and yet benefit those outside the population used for testing. Take, for instance, a medication that can prevent HIV transmission from mothers to fetuses. This medication needs to be tested. If a pharmaceutical company goes to an underdeveloped country in Africa to test it, then what obligations does the company have to those participating in the study, and to that country at large, once the medication is made available in developed nations like the U.S.?  If making it available in the research country is not feasible, is it permissible to conduct the study there in the first place?  These are just some of the questions that arise in the production of pharmaceutical and medical services in a global context. (See Glantz, et al., 1998 and Brody, 2002)

4. Moral Standing and Personhood

a. Theories of Moral Standing and Personhood

Take two things, a rock and a human being. What is it about each such that it is morally okay to destroy the rock in the process of procuring minerals but not okay to destroy a human being in the process of procuring an organ for transplantation?  This question delves into the issue of moral standing, and to answer it is to give a theory of moral standing/personhood. First, some technical things should be said. Any given entity/being has a moral status. Beings that cannot be morally wronged have the moral status of having no (that is, zero) moral standing. Beings that can be morally wronged have the moral status of having some moral standing. And beings that have the fullest moral standing are persons. Intuitively, most, if not all, human beings are persons, and intuitively, members of an alien species with intelligence as great as ours would be persons. This leaves open the possibility that certain beings, of which we are currently unaware, could be greater in moral standing than persons. For example, if there were a god, then it seems that such a being would have greater moral standing than us, than persons; this would have us reexamine the idea that persons have the fullest moral standing. Perhaps we could say that a god or gods were super-persons, with super moral standing.

Why is the question of moral standing important?  Primarily, it is important in the case of non-human animals and in the case of fetuses. For this article, we will focus directly only on non-human animals. But before considering animals, let's look at various theories of what constitutes moral standing for a being. A first attempt is the idea that being a human being is necessary and sufficient for having moral standing. Notice that according to this theory/definition, rocks are excluded, which is a good thing. But the theory then runs into the problem of excluding all non-human animals, even, for example, primates like chimps and bonobos. As such, the next theory motivated would be this: a being/entity has moral standing (morally counts/can be morally wronged) if and only if it is living. But according to this theory, things like plants and viruses can be morally wronged. A virus would have to be considered in our moral deliberations about whether or not to treat a disease, because viral entities would have moral standing; this is counterintuitive, and indicates that this theory is too inclusive. So another theory to consider is one which excludes plants, viruses, and bacteria: the rationality theory. According to this theory, those who morally count are those who possess rationality. But there are problems. Does a mouse possess rationality?  And even if one is comfortable with mice not having rationality, and thereby not counting morally, one might then have a problem with certain human beings who lack genuinely rational capacities. As such, another way to go is the soul theory. One might say that what morally counts is what has a soul; certain human beings might lack rationality, but they at least have a soul. What is problematic with this theory of moral standing is that it posits an untestable/unobservable entity – namely, a soul. What prohibits a virus, or even a rock, from having a soul?
Notice that this objection to the soul theory does not deny the existence of souls. Instead, the point is that the theory posits the existence of an entity that is not observable and for whose existence there can be no test.

Another theory, by no means unanimously accepted, is the sentience theory of moral standing. According to this theory, what gives something moral standing is that it is sentient – that is, it has experiences, and more specifically experiences of pain and pleasure. On this theory, rocks and plants do not have moral standing; mice and men do. One problem, though, is that many of us think there is a moral difference between mice and men. According to this theory, there is no way to explain how, although mice have moral standing, human beings are persons (Andrews, 1996). It appears that to do this one would have to appeal to rationality/intelligence, but, as discussed, there are problems with this. Finally, there is another theory, intimately tied to sentience theory. We can safely say that most beings who experience pain and pleasure have an interest in the kinds of experiences they have. There is, however, the possibility of beings who experience pain and pleasure but who do not care about their experiences. What, then, should we say about those who do care about their experiences?  Perhaps it is not their experiences that matter, but the fact that they care about their experiences; in that case, what matters morally is their caring about their experiences. We may call this new theory the "interest theory": a being/entity has moral standing if and only if it has interests (in virtue of caring about the experiences it has).

b. The Moral Status of Non-Human Animals

In the literature, though, how are non-human animals considered?  Are they considered as having moral standing?  Peter Singer is probably one of the first to have advocated, in the academic literature, for animals as having moral standing. Very importantly, he documented how contemporary agrarian practices treated animals, from chimps to cows to chickens (Singer, 1975). The findings were astonishing; many people would find the conditions under which these animals are treated despicable and morally wrong. A question arises, though, concerning the basis for moral condemnation of the treatment of such animals. Singer, being a utilitarian, could be characterized as saying that treating such animals in the documented ways does not maximize overall goodness/utility. It appears, though, that he also appeals to another principle, which can be called the principle of equitable treatment: it is morally permissible to treat two different beings differently only if there is some moral difference between the two which justifies the differential treatment (Singer, 1975). So, is there a moral difference between human beings and cows such that killing human beings for food is wrong but killing cows is not?  According to Singer, there is not, although we could imagine such a difference, and perhaps there is one.

Another theorist in favor of non-human animals is Tom Regan. He argues that non-human animals, at least of a certain kind, have moral rights just as human animals do, and as such, there are no utilitarian grounds which could justify using non-human animals in ways different from human animals. To be more careful, though, we could imagine a situation in which treating a human a certain way violates her rights while the same treatment does not violate a non-human's rights; Regan allows this possibility (Regan, 1983). This does not change the fact that non-humans and humans equally have rights, but only means that the content of those rights depends on the nature of the being in question. Finally, we should note that there are certain rights theorists who, in virtue of their adherence to rights theory, say that non-human animals do not have rights, and as such do not have moral standing, or at least not a robust enough moral standing that we should consider them in our moral deliberations as beings that morally count (Cohen, 1986).

5. Professional Ethics

a. What is a Profession?

Certain occupations, like law, medicine, and engineering, are considered to be professions; other things, like unskilled labor and art, are not. There are various ways to try to understand what constitutes something as a profession. For the purposes of this article, there will be no discussion of necessary and jointly sufficient conditions for something constituting a profession. With that said, some proposed general characteristics will be discussed in terms of a controversial case: journalism. Is journalism a profession?  Generally, there are certain financial benefits enjoyed by professions such as law, medicine, and engineering, so there may be a financial motivation on the part of some journalists to consider journalism a profession. Additionally, one can be insulated from criticism by being part of a profession; one can appeal to a kind of professional authority against the layperson, or anyone outside the profession (Merrill, 1974). One could point out, though, that just because some group desires to be some x does not mean that they are x (a basic philosophical point). One way to respond is that law, medicine, and engineering have a certain esteem attached to them; if journalists could create that same esteem, then perhaps journalism could be regarded as a profession.

But as Merrill points out, journalism seems to lack certain important characteristics shared by the professions. In the professional exemplars already mentioned, one usually has to take a series of professional exams, which test a number of things, among them the jargon of the profession. Usually, one is educated specifically for a certain profession, often with terminal degrees for that profession. Although there are journalism schools, entry into the practice of journalism does not require education in a journalism school, nor does it require anything like the testing involved in, say, the law. Furthermore, a profession usually has a codified set of principles or rules, even if rather vague and ambiguous, which apply to its professionals. Perhaps journalists can appeal to such mottos as: tell the truth, cite your sources, protect your sources, and be objective. But in addition to the near-emptiness of these mottos, there is the problem that, once interpreted, there is plenty of disagreement about whether they are valid principles in the first place. For example, if one wants a more literal appeal to truth-telling, then how are we to think of the gonzo journalism of Hunter Thompson?  Or, with documentary making, there are some who believe that the documentarian should stay objective by not placing themselves in the documentary or by not assisting their subjects. Notice that although journalism may not be a profession, there are still ethical issues involved, ones that journalists should be mindful of. Even if journalism cannot be codified and organized into something that counts as a profession, this does not mean that there are no important ethical issues involved in doing one's work. This should be no surprise, as ethical issues are abundant in life and work.

b. Engineering Ethics

In this section, we will discuss engineering ethics for two purposes. One purpose is to use engineering ethics as a case study in professional ethics. More importantly, the second purpose is to give the reader some idea of some of the ethical issues involved in engineering as a practice.

One way to approach engineering ethics is by first thinking of it as a profession, and then, given its features as a profession, examining ethical issues according to those features. So, for example, given that professions usually have a codified set of principles or rules for their professionals, one could try to articulate, expand, and flesh out such principles. Another way to approach engineering ethics is by starting with particular cases, usually of the historical rather than the hypothetical kind, and then drawing out any moral lessons and perhaps principles from there. Accordingly, one would start with such cases as the Hyatt Regency Walkway Collapse, the Challenger Space Shuttle Accident, and the Chernobyl and Bhopal Plant Accidents, to name just a few (Martin and Schinzinger, 2005).

The Challenger Space Shuttle Accident raises a number of ethical issues, but one worth discussing is the dual role of engineer/manager. When one is both an engineer and in upper or middle-level management, one has the responsibility as an engineer to report safety problems with a design, but also, as a manager, the pressure of project completion. (i) Does one role trump the other in determining appropriate courses of action, and if so, which one?  (ii) Or are the two reconcilable, such that there really is no conflict?  (iii) Or are the two irreconcilable, such that assigning people to an engineer/manager role will inevitably lead to moral problems?

One philosophically interesting issue raised by engineering is the assessment of safety and risk. What constitutes something's being safe?  And what constitutes something's being a risk?  Tversky and Kahneman (Tversky and Kahneman, 1981) famously showed that in certain cases of risk assessment, most people will prefer one option over another even when the expected values of the two options are identical. What could explain this?  One explanation appeals to the idea that people are able to think appropriately about risk in a way that is not capturable by standard risk-cost-benefit analyses. Another explanation is that most people are in error, and that their preferring one option over the other is founded on an illusion concerning risk. On either interpretation, determining risk is important, and understanding risk is in turn important for determining the safety of a product or design option. It is of great ethical concern that engineers be concerned with producing safe products, and thereby with properly identifying and assessing the risks of such products.
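The claim about identical expected values can be checked with a few lines of arithmetic. The sketch below uses a version of Tversky and Kahneman's well-known framing example (the figures here are illustrative, not their exact survey wording): a certain outcome versus a gamble whose probability-weighted outcome is the same.

```python
def expected_value(outcomes):
    """Compute expected value from a list of (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

# Option A: 200 of 600 people are saved for certain.
option_a = [(1.0, 200)]

# Option B: 1/3 chance that all 600 are saved, 2/3 chance that none are.
option_b = [(1/3, 600), (2/3, 0)]

# Both options have an expected value of 200 lives saved
# (up to floating-point rounding), yet most respondents
# strongly prefer one over the other depending on framing.
print(expected_value(option_a))
print(expected_value(option_b))
```

That the two options are equivalent in expectation is precisely what makes the observed preference reversal philosophically interesting: whatever explains it, it is not a difference in expected value.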

There are also concerns about what kinds of projects engineers should participate in. Should they participate in the development of weaponry?  If so, what kind of weapons production is morally permissible?  Furthermore, to what extent should engineers be concerned with the environment in proposing products and their designs?  Should engineers as professionals work to make products that are demanded by the market?  If there are competing claims to a service/product that cannot be explained in terms of market demand, then to what extent do engineers have a responsibility to their corporate employers, if those employers require design work on things that run counter to what is demanded by those "outside of" the market?  Let us be concrete with an admittedly hypothetical example. Suppose you have a corporation called GlobalCyber Initiatives, with the motto: making the world globally connected from the ground up. And suppose that your company has a contract in a country with limited cell towers. Wealthy business owners of that country report that their middle-level managers would like a processing upgrade to their hand-held devices so that they can access the cell towers (conveniently placed next to factories) more quickly. Your company could provide that upgrade. But you, as lead in R&D, have instead been working on upgrades to PCs, so that PCs can be used in remote, rural areas with no or limited access to cell towers. With your upgrade, PCs could be sold to the country in question for use in local libraries. The contract with the business owners would be slightly more lucrative, but a contract with that country's government, which is willing to participate, would do much more good for that country, both overall and specifically for the very many people throughout the largely rural country. What should you do as lead of R&D?  How far should your concern extend?
How hard should you push to bring the government contract about?  Or should you not be concerned at all?

These questions are supposed to highlight how engineering ethics, thought of merely as an ethic of how to be a good employee, is perhaps too limiting, and how engineering as a profession might have a responsibility to grapple with what its purposes, as a profession, are supposed to be. This in turn highlights how framing the purposes of a profession is itself inherently ethical, insofar as professions are to be responsive to the values of those they serve.

6. Social Ethics, Distributive Justice, and Environmental Ethics

This section is an oddity, but given space limitations, it is the best way to structure an article like this. First, take something like "social ethics". In some sense, all ethics is social, as it deals with human beings and other social creatures. Nevertheless, some people think that certain moral issues apply only to our private lives, behind closed doors. For example, is masturbation morally wrong?  Or is homosexual sex morally wrong?  One way such questions are viewed is that they are not simply private questions but inherently social ones. With homosexual sex, for example, since sex is in some way a public phenomenon, and since the expression of sexual orientation is certainly public, there is a way of understanding even this issue as public and therefore social. Perhaps the main point that needs emphasizing is that by "social" I mean those issues that obviously need to be understood in a public, social way, and which cannot easily be subsumed under one of the other sub-disciplines discussed above.

Another reason this section is an oddity is that the topic of distributive justice is often thought to fall properly within the discipline of political philosophy rather than applied ethics. One reason among many for including a section on it is that distributive justice is often discussed, directly and indirectly, in business ethics courses, as well as in courses on the allocation of health care resources (which may be included in a bioethics course). Another reason is that famine relief is an applied ethical topic, and distributive justice, in a global context, obviously relates to famine relief. Finally, this section is an oddity because environmental ethics gets only a subsection of this encyclopedia article and not an entire section, unlike equally important fields such as bioethics or business ethics. The justification for this is (i) space limitations and (ii) the fact that various important moral considerations involving the environment are discussed within the contexts of bioethics, business ethics, and moral standing.

a. Social Ethics

To start with, some topics falling within social ethics that are no longer as controversial as they once were are affirmative action and smoking bans. The discussions involved with these topics are rich in such moral notions as fairness, benefits, appropriation of scarce resources, liberty, property rights, paternalism, and consent.

Other issues have to do with addressing the still very real gender differences in wealth, responsibilities, social roles, and employment opportunities. How are these differences to be understood?  Obviously not as deserved. Given this, such differences must either be morally justified (which is doubtful) or morally rectified; if they cannot be justified, then they should be eliminated. Very good work can be done on understanding how to do this in a way that does not create further moral problems. Additionally, work on the visibility of transgendered persons is important: how transgendered persons can be incorporated into modern life, working in corporations, government, education, or industry, living in predominantly non-transgendered communities and networks of families with more typical gender narratives, all in a way that respects the personhood of transgendered persons.

b. Distributive Justice and Famine Relief

The term distributive justice is misleading insofar as justice is usually thought of in terms of punitive justice, which deals with determining the guilt or innocence of defendants and with just punishments for those found guilty of crimes. Distributive justice, on the other hand, deals with something related but quite different. Take a society, or group of societies, and consider a limited number of resources, goods, and services. The question arises how those resources, goods, and services should be distributed across the individuals of such societies. Furthermore, there is the question of what kind of organization, or centralizing power, should be set up to deal with the distribution of such goods (short for goods, resources, and services); let's call such power-centralizing organizations governments.

In this subsection, we will examine some very simplified characterizations of positions on the distribution of goods, and the subsequent questions of government. We will first cover a rather generic list of positions on distributive justice and government, and then proceed to a discussion of distributive justice and famine relief. Finally, we will discuss a number of more contemporary approaches to distributive justice, leaving open how each of these approaches would handle the issue of famine relief.

Anarchism is the position that no such government is justified; there is no centralizing power that distributes goods. Libertarianism is the position that government is justified insofar as it is a centralizing power used to levy taxes for the purpose of enforcing persons' property rights. This kind of theory of distributive justice emphasizes a minimal form of government, whose purpose is protecting and enforcing the rights of individuals to their property. Any theory that advocates further government, for purposes other than the enforcement of property rights, might be called socialist, but to be more informative, it helps to distinguish at least three theories of distributive justice that might be called socialist. First, there are those who care about equality. Egalitarian theories emphasize that government exists to levy taxes that redistribute wealth so as to make people as equal as possible in terms of their well-being. Bare-minimum theories instead specify some bare minimum needed for any citizen/individual to do minimally well (perhaps to have a life worth living); government is then to craft policies, usually through taxation, to ensure that the bare minimum is met for all. Finally, there are meritocracy theories, and in principle these may not count as socialist, since we can imagine a society containing people who do not merit the help that would be given to them through redistributive taxation. In another sense, however, meritocracy is socialist, in that we can easily imagine societies where there are people who merit a certain amount of goods yet do not have them, and such people, according to the theory of merit, would be entitled to goods through taxation on others.

The debate over theories of distributive justice easily runs to tens of thousands of pages. Instead of entering those debates, we should, for the purpose of applied ethics, turn to how distributive justice applies to famine relief, easily something within applied ethics. Peter Singer takes the position that those in developed nations are morally required to assist those experiencing famine, usually in underdeveloped nations (Singer, 1999). If we take theories of distributive justice as applying across borders, then it is rather apparent that Singer rejects the libertarian paradigm, on which taxation is not justified for anything other than the protection of property rights. Singer is instead a utilitarian: his justification has to do with producing overall goodness, whereas libertarians will allow for the justice of actions and policies which do not produce the most overall goodness. It is not quite clear what socialist position Singer takes, but no matter; it is obvious that he argues from a perspective that is not libertarian. In fact, he uses an example from Peter Unger to make this point. The example (modified): imagine someone who has invested some of her wealth in some object (a car, for example) that is then the only thing that can prevent some innocent person from dying; the object will be destroyed in saving the person's life. Suppose she decides not to allow her object to be destroyed, thereby allowing the innocent person to die. Has the object's owner done something wrong?  Intuitively, yes. Well, as Singer points out, so has anyone in the developed world with enough money who does not give to those experiencing famine; they have let those suffering people die. One response is libertarian, Jan Narveson being an exemplar here (Narveson, 1993). Here, we have to distinguish between charity and justice.
According to Narveson, it would be charitable (and a morally good thing) for one to give up some of one's wealth or the saving object, but doing so is not required by justice. Libertarians in general have even more sophisticated responses to Singer, but that will not concern us here; what matters is that we can see how disagreement on something as important as famine relief can rest on differences in political principles, or theories of distributive justice.

As discussed earlier in this subsection, libertarian theories were contrasted with socialist positions, where "socialist" is not to be confused with how the term is used in the rhetoric of most media. The earliest of the influential socialist theories is that proposed by John Rawls (Rawls, 1971). Rawls is more properly an egalitarian theorist, who allows for inequalities just insofar as they improve the lot of the least advantaged in the best possible way, and in a way that does not compromise basic civil liberties. There have been reactions to his views. For example, his Harvard colleague Robert Nozick takes a libertarian perspective, arguing that the kinds of distributive policies endorsed by Rawls infringe on the basic rights (and entitlements) of persons – basically, equality, as Rawls envisions it, encroaches on liberty (Nozick, 1974). On the other end of the spectrum, there are those like Kai Nielsen who argue that Rawls does not go far enough: the equality Rawls argues for, according to Nielsen, will still allow too much inequality, where many will perhaps be left without the basic things needed to be treated equally and to have basic equal opportunities. For other post-Rawlsian critiques and general theories, consult the works of Michael Sandel, Martha Nussbaum (a student of Rawls), Thomas Pogge (a student of Rawls), and Michael Boylan.

c. Environmental Ethics

This subsection will be very brief, as some of the issues have already been discussed. Something should be said, however, about how environmental ethics can be understood in a way that is foundational, independent of business ethics, bioethics, and engineering ethics.

First of all, there is the question of what status the environment has independent of human beings. Does the environment have value if human beings do not exist, and never would exist? There are actually some who answer yes, and not merely because there would be other sentient beings. Suppose, then, that we have an environment with no sentient beings, one that will never progress into having sentient beings. Does such an environment still matter? Yes, according to some. Others hold that the environment matters in the context of either actual or potential sentient beings, and defend its importance on that basis, but do so without thinking that sentient beings are primarily what matters.

Another way to categorize positions concerning the status of the environment is by differentiating those who advocate anthropocentrism from those who advocate a non-anthropocentric position. This debate is not merely semantic, nor is it merely academic, nor is it something trivial. It is a question of value, and of the role of human beings in helping or destroying things of (perhaps) value, independent of whether human beings themselves have value. To be more concrete, suppose that the environment of the Earth had intrinsic value, and value independently of human beings. Suppose then that human beings, as a collective, destroyed not only themselves but the Earth. Then, almost by definition, they have destroyed something of intrinsic value. Those who care about things with value, especially intrinsic value, should be rather concerned about this possibility (here, consult: Keller, 2010; Elliot, 1996; Rolston, 2012; Callicott, 1994).

Many moral issues concerning the environment, though, can be seriously considered in light of the two options above – that is, whether or not the environment (under which humans exist) matters if human beings do not exist. Even if one takes neither of the two options, it is hard to deny that the environment morally matters in a serious way. Perhaps the best way to appreciate this importance is through the study of how business and engineering affect the environment.

7. Theory and Application

One might still worry about the status of applied ethics for the reason that it is not quite clear what the methodology/formula is for determining the permissibility of any given action/practice. Such a worry is indeed justified: skepticism is warranted here because there are multiple approaches to determining the permissibility of actions/practices.

One such approach is very much top-down. The approach starts with a normative theory, where the permissibility/impermissibility (rightness/wrongness) of actions/practices is determined by a single principle. The idea is that you start with something like utilitarianism (an action is permissible just in case it maximizes overall goodness), Kantianism (permissible just in case it does not violate the imperatives of rationality or of respecting persons), or virtue theory (permissible just in case it accords with what the ideally virtuous person would do). From there, you get results of permissibility or impermissibility (rightness/wrongness).

Although each of these theories has important things to say about applied ethical issues, one might complain about them for various reasons. Take utilitarianism, for example. As a theory, it implies that certain things are morally required that many take to be wrong, or at least not required (for example, lynching an innocent person to please a mob, or spending the ten years after medical school working in a developing country). There are problems for the other two main kinds of theories as well, such that one might be skeptical about a top-down approach that uses such theories to apply to applied ethical cases.

Another approach is to use a pluralist kind of ethical theory. Such a pluralist theory is comprised of various moral principles. Each of the principles might be justified by utilitarian, Kantian, or virtue theories; or they may not. The idea here is that there are multiple principles to draw from to determine the rightness/wrongness of any given action/practice within the applied ethical world. Such an approach sounds more than reasonable until a third approach is considered, which will be discussed below.

What if, though, it were the case that some moral feature, of a purported moral principle, worked in such a way that it counted for the permissibility of an action in one case, case 1, but counted against the permissibility of the same action in another case, case 2? What should we say here? An example would be helpful. Suppose that Jon has to hit Candy to get candy, and suppose that this counts as a morally good thing. Then the very same hitting of Candy by Jon to get candy could, in a different context, be a morally bad thing. This example is supposed to highlight the third theoretical possibility, that of moral particularism (Dancy, 1993).

To sum things up for applied ethics, it very much matters what theoretical approach one takes. Does one take the top-down approach of going with a normative/ethical theory to apply to specific actions/practices?  Or does one go with a pluralist approach?  Or does one go with a particularistic approach that requires, essentially, examining things case by case?

Finally, some things concerning moral psychology should be discussed. Moral psychology deals with understanding how we should assess actual moral judgments, of actual moral agents, in light of the very real contexts under which they are made. Additionally, moral psychology tries to understand the limits of the actions of human beings in relation to their environment, the context under which they act and live. (Notice that according to this definition, the multicultural relativity of practices and actions has to be accounted for, as differences in actions/practices might be due to differences in environments.) Experiments from social psychology confirm the idea that how people behave is determined by their environment; for example, we have the Milgram experiment and the Stanford prison experiment. We might not expect people to act in such gruesome ways, but according to such experiments, placing them in certain conditions will provoke ugly responses. Two reasons these findings are important for applied ethics are: (i) if you place persons in these conditions, you get non-ideal moral results, and (ii) our judgments about what to morally avoid/prevent are misguided if we do not keep in mind the findings of such experiments. If we kept in mind the fragility of human behavior relative to conditions/environments, we might try to get closer to eradicating such conditions/environments, and the subsequent bad results.

8. References and Further Reading

  • Allhoff, Fritz, and Vaidya, Anand J. “Business in Ethical Focus”. (2008), Broadview.
  • Andrews, Kristin. “The First Step in the Case for Great Ape Equality: The Argument for Other Minds.”  (1996), Etica & Animali.
  • Beauchamp, Tom, and Bowie, Norman. “Ethical Theory and Business.”  (1983), Prentice-Hall.
  • Boylan, Michael. “A Just Society.” (2004), Lanham, MD: Rowman & Littlefield.
  • Boylan, Michael. “Morality and Global Justice: Justifications and Applications.”  (2011), Westview.
  • Boylan, Michael   “Morality and Global Justice:  Reader.”  (2011), Westview.
  • Brody, Baruch. “Ethical Issues in Clinical Trials in Developing Countries.”  (2002), Statistics in Medicine (vol. 2), John Wiley & Sons.
  • Callahan, Joan. “Ethical Issues in Professional Life.”  (1988), Oxford.
  • Callicott, J. Baird. “Earth's Insights.”  (1994), University of California Press.
  • Carr, Albert Z. “Is Business Bluffing Ethical?” (1968), Harvard Business Review.
  • Chadwick, Ruth; Kuhse, Helga; Landman, Willem; Schuklenk, Udo; Singer, Peter. “The Bioethics Reader: Editor's Choice.”  (2007), Blackwell.
  • Cohen, Carl. “The Case for the Use of Animals in Biomedical Research.”  (1986), New England Journal of Medicine.
  • Dancy, Jonathan. “Moral Reasons.”  (1993), Blackwell.
  • Danley, John. “Corporate Moral Agency: The Case for Anthropological Bigotry.”  (1980), Action and Responsibility: Bowling Green Studies in Applied Philosophy, vol. 2.
  • Elliot, Robert. “Environmental Ethics.”  (1996), Oxford.
  • Freeman, R. Edward. “A Stakeholder Theory of the Modern Corporation.”  (1994)
  • French, Peter. “The Corporation as a Moral Person.”  (1979), American Philosophical Quarterly.
  • Friedman, Milton. “The Social Responsibility of Business is to Increase its Profits.”  (1970), New York Times Magazine.
  • Glantz, Leonard; Annas, George J.; Grodin, Michael A.; Mariner, Wendy K. “Research in Developing Countries: Taking Benefit Seriously.”  (1998), Hastings Center Report.
  • Hellman, Samuel; Hellman, Deborah S. “Of Mice But Not Men:  Problems of the Randomized Clinical Trial.”  (1991), The New England Journal of Medicine.
  • Holm, Soren. “Going to the Roots of the Stem Cell Controversy.”  In The Bioethics Reader, Chadwick, et al. (2007), Blackwell.
  • Hursthouse, Rosalind. “Virtue Theory and Abortion.”  (1991), Philosophy & Public Affairs.
  • Kamm, Frances M. “Creation and Abortion.” (1996), Oxford.
  • LaFollette, Hugh. “The Oxford Handbook of Practical Ethics.” (2003), Oxford.
  • Keller, David R. “Environmental Ethics: The Big Questions.”  (2010), Wiley-Blackwell.
  • Mappes, Thomas, and DeGrazia, David. “Biomedical Ethics.”  6th ed. (2006), McGraw-Hill.
  • Marquis, Don. “How to Resolve an Ethical Dilemma Concerning Randomized Clinical Trials”. (1999), New England Journal of Medicine.
  • Martin, Mike W.; Schinzinger, Roland. “Ethics in Engineering.”  4th ed. (2005), McGraw-Hill.
  • McMahan, Jeff. “The Ethics of Killing.”  (2002), Oxford.
  • Narveson, Jan. “Moral Matters.”  (1993), Broadview Press.
  • Nielsen, Kai. “Radical Egalitarian Justice: Justice as Equality.”  Social Theory and Practice, (1979).
  • Nozick, Robert. “Anarchy, State, and Utopia.”  (1974), Basic Books.
  • Nussbaum, Martha C. “Sex and Social Justice.”  (1999), New York: Oxford University Press.
  • Pogge, Thomas W. “An Egalitarian Law of Peoples.” Philosophy and Public Affairs, (1994)
  • Pojman, Louis P.; Pojman, Paul. “Environmental Ethics.”  6th ed. (2011), Cengage.
  • Prinz, Jesse. “The Emotional Construction of Morals.”  (2007), Oxford.
  • Rachels, James. “Ethical Theory 1: The Question of Objectivity.”  (1998), Oxford.
  • Rachels, James. “Ethical Theory 2: Theories about How We Should Live.”  (1998), Oxford.
  • Rachels, James. “The Elements of Moral Philosophy.”  McGraw Hill.
  • Rachels, James. “The Right Thing to Do.”  McGraw Hill.
  • Rachels, James. “Active and Passive Euthanasia.”  (1975), New England Journal of Medicine.
  • Rawls, John. “A Theory of Justice.”  (1971), Harvard.
  • Rolston III, Holmes. “A New Environmental Ethics.”  (2012), Routledge.
  • Sandel, Michael J. “Liberalism and the Limits of Justice.”  (1982), New York: Cambridge University Press.
  • Singer, Peter. “Practical Ethics.”  (1979), Oxford.
  • Singer, Peter. “Animal Liberation.”  (1975), Oxford.
  • Shaw, William H. “Business Ethics: A Textbook with Cases.”  (2011), Wadsworth.
  • Thomson, Judith Jarvis. “A Defense of Abortion.” (1971), Philosophy & Public Affairs.
  • Unger, Peter. “Living High and Letting Die.”  (1996), Oxford.


Author Information

Joel Dittmer
Email: dittmerj@mst.edu
Missouri University of Science and Technology
U. S. A.