
Leibniz: Logic

The revolutionary ideas of Gottfried Wilhelm Leibniz (1646-1716) on logic were developed between 1670 and 1690. They can be divided into four areas: the Syllogism, the Universal Calculus, Propositional Logic, and Modal Logic.

These revolutionary ideas remained hidden in the Archive of the Royal Library in Hanover until 1903 when the French mathematician Louis Couturat published the Opuscules et fragments inédits de Leibniz. Couturat was a great admirer of Leibniz’s thinking in general, and he saw in Leibniz a brilliant forerunner of modern logic. Nevertheless he came to the conclusion that Leibniz’s logic had largely failed and that in general the so-called “intensional” approach to logic was necessarily bound to fail. Similarly, in their standard historiography of logic, W. & M. Kneale (1962) maintained that Leibniz “never succeeded in producing a calculus which covered even the whole theory of the syllogism”. Even in recent years, scholars like Liske (1994), Swoyer (1995), and Schupp (2000) argued that Leibniz’s intensional conception must give rise to inconsistencies and paradoxes.

On the other hand, starting with Dürr (1930), Rescher (1954), and Kauppi (1960), a certain rehabilitation of Leibniz’s intensional logic may be observed, which was gradually supported and supplemented by Poser (1969), Ishiguro (1972), Rescher (1979), Burkhardt (1980), Schupp (1982), and Mugnai (1992). However, the full wealth of Leibniz’s logical ideas became visible only in Lenzen (1990), (2004a), and (2004b), where the many pieces and fragments were joined together into an impressive system of four calculi:

  • The algebra of concepts L1 (which turns out to be deductively equivalent to the Boolean algebra of sets)
  • The quantificational system L2 (where “indefinite concepts” function as quantifiers ranging over concepts)
  • A propositional calculus of strict implication (obtained from L1 by the strict analogy between the containment-relation among concepts and the inference-relation among propositions)
  • The so-called “Plus-Minus-Calculus” (which is to be viewed as a theory of set-theoretical containment, “addition,” and “subtraction”).

Table of Contents

  1. Leibniz’s Logical Works
  2. Works on the Theory of the Syllogism
    1. Axiomatization of the Theory of the Syllogism
    2. The Semantics of “Characteristic Numbers”
    3. Linear Diagrams and Euler-circles
  3. Works on the Universal Calculus
    1. The Algebra of Concepts L1
    2. The Quantificational System L2
    3. The Plus-Minus-Calculus
  4. Leibniz’s Calculus of Strict Implication
  5. Works on Modal Logic
    1. Possible-Worlds-Semantics for Alethic Modalities
    2. Basic Principles of Deontic Logic
  6. References and Further Reading
    1. Abbreviations for Leibniz’s works
    2. Secondary Literature

1. Leibniz’s Logical Works

Throughout his life (beginning in 1646 in Leipzig and ending in 1716 in Hanover), Gottfried Wilhelm Leibniz did not publish a single paper on logic, except perhaps for the mathematical dissertation “De Arte Combinatoria” and the juridical disputation “De Conditionibus” (GP 4, 27-104 and AE IV, 1, 97-150; the abbreviations for Leibniz’s works are resolved in section 6). The former work deals with some issues in the theory of the syllogism, while the latter contains investigations of what is nowadays called deontic logic. Leibniz’s main aim in logic, however, was to extend the traditional syllogistic to a “Universal Calculus.” Although several drafts of such a calculus exist which seem to have been composed for publication, none of them was eventually sent to press. So Leibniz’s logical essays appeared only posthumously. The early editions of his philosophical works, however, contained only a small selection of logical papers. It was not until the beginning of the 20th century that the majority of his logical fragments became generally accessible, through the valuable edition of Louis Couturat.

Since Leibniz dated only a few of his manuscripts, his logical oeuvre will be described here not in chronological order but from a systematic point of view, distinguishing four groups:

  1. Works on the Theory of the Syllogism
  2. Works on the Universal Calculus
  3. Works on Propositional Logic
  4. Works on Modal Logic.

2. Works on the Theory of the Syllogism

Leibniz’s innovations within the theory of the syllogism comprise at least three topics:

(a)   An "Axiomatization" of the theory of the syllogism, that is, a reduction of the traditional inferences to a small number of basic laws which are sufficient to derive all other syllogisms.

(b)   The development of the semantics of so-called "characteristic numbers" for evaluating the logical validity of a syllogistic inference.

(c)    The invention of two sorts of graphical devices, that is to say, linear diagrams and (later) so-called "Euler-circles," as a heuristic for checking the validity of a syllogism.

a. Axiomatization of the Theory of the Syllogism

In the 17th century, logic was still strongly influenced, if not dominated, by syllogistic, that is, by the traditional theory of the four categorical forms:

Universal affirmative proposition (UA)        Every S is P          SaP

Universal negative proposition (UN)              No S is P               SeP

Particular affirmative proposition (PA)         Some S is P          SiP

Particular negative proposition (PN)              Some S isn’t P      SoP

A typical textbook of that time is the famous “Logique de Port Royal” (Arnauld & Nicole (1683)) which, apart from an introductory investigation of ideas, concepts, and propositions in general, basically consists of:

(i)       The theory of the so-called “simple” laws of subalternation, opposition, and conversion;

(ii)      The theory of the syllogistic “moods” which are classified into four different “figures” for which specific rules hold.

As Leibniz defines it, a “subalternation takes place whenever a particular proposition is inferred from the corresponding universal proposition” (Cout, 80), that is:

SUB 1            SaP → SiP

SUB 2            SeP → SoP.

According to the modern analysis of the categorical forms in terms of first order logic, these laws are not strictly valid but hold only under the assumption that the subject term S is not empty. This problem of "existential import" will be discussed below.

The theory of opposition first has to determine which propositions are contradictories of each other in the sense that they can neither be true together nor false together. Clearly, the PN is the contradictory, or negation, of the UA, while the PA is the negation of the UN:

OPP 1            ¬SaP ↔ SoP

OPP 2            ¬SeP ↔ SiP.

The next task is to determine which propositions are contraries to each other in the sense that they cannot be true together, while they may well be false together. As Leibniz states in “Theorem 6”: “The universal affirmative and the universal negative are contrary to each other” (Cout, 82). Finally, two propositions are said to be subcontraries if they cannot be false together while it is possible that they are true together. As Leibniz notes in another theorem, the two particular propositions, SiP and SoP, are logically related to each other in this way. The theory of subalternation and opposition is often summarized in the familiar “Square of Opposition”:

[Figure: the Square of Opposition]

In the paper “De formis syllogismorum Mathematice definiendis” written around 1682 (Cout, 410-416, and the text-critical edition in AE VI, 4, 496-505) Leibniz tackled the task of "axiomatizing" the theory of the syllogistic figures and moods by reducing them to a small number of basic principles. The “Fundamentum syllogisticum”, that is, the axiomatic basis of the theory of the syllogism, is the “Dictum de omni et nullo” (The saying of ‘all’ and ‘none’):

If a total C falls within another total D, or if the total C falls outside D, then whatever is in C, also falls within D (in the former case) or outside D (in the latter case) (Cout, 410-411).

These laws warrant the validity of the following "perfect" moods of the “First Figure”:

BARBARA        CaD, BaC → BaD

CELARENT      CeD, BaC → BeD

DARII                 CaD, BiC → BiD

FERIO                 CeD, BiC → BoD.

On the one hand, if the second premise of the affirmative moods BARBARA and DARII is satisfied, that is, if B is either totally or partially contained in C, then, according to the “Dictum de Omni”, B must also be either totally or partially contained in D since, by the first premise, C is entirely contained in D. Similarly the negative moods CELARENT and FERIO follow from the “Dictum de Nullo”: “B is either totally or partially contained in C; but the entire C falls outside D; hence also B either totally or partially falls outside D” (Cout, 411).

Next Leibniz derives the laws of subalternation from the syllogisms DARII and FERIO by substituting ‘B’ for ‘C’ and ‘C’ for ‘D’, respectively. This derivation (and hence also the validity of the laws of subalternation) tacitly presupposes the following principle which Leibniz considered as an “identity”:

SOME             BiB.

With the help of the laws of subalternation, BARBARA and CELARENT may be "weakened" into

BARBARI      CaD, BaC → BiD

CELARO        CeD, BaC → BoD.

Thus the First Figure altogether has six valid moods, from which one obtains six moods of the Second and six of the Third Figure by means of a logical inference-scheme called “Regressus”:

REGRESS      If a conclusion Q logically follows from premises P1, P2, but if Q is false, then one of the premises must be false.

When Leibniz carefully carries out these derivations, he presupposes the laws of opposition, OPP 1 and OPP 2. Finally, six valid moods of the Fourth Figure can be derived from corresponding moods of the First Figure with the help of the laws of conversion. According to traditional doctrine, the PA and the UN may be converted “simpliciter”, while the UA can only be converted “per accidens”:

CONV 1          BiD → DiB

CONV 2          BeD → DeB

CONV 3          BaD → DiB.

As Leibniz shows, these laws can in turn be derived from some previously proven syllogisms with the help of the "identical" proposition:

ALL                BaB.

Furthermore one easily obtains another law of conversion according to which the UN can also be converted "accidentally":

CONV 4          BeD → DoB.

The announced derivation of the moods of the Fourth Figure was not carried out in the fragment “De formis syllogismorum Mathematice definiendis” which just breaks off with a reference to “Figura Quarta”. It may, however, be found in the manuscript LH IV, 6, 14, 3 which, unfortunately, was only partially edited in Cout, 204. At any rate, Leibniz managed to prove that all valid moods can be reduced to the “Fundamentum syllogisticum” in conjunction with the laws of opposition, the inference scheme “Regressus”, and the "identical" propositions SOME and ALL.

Now while ALL is an identity or theorem of first order logic, ∀x(Bx → Bx), SOME is nowadays interpreted as ∃x(Bx ∧ Bx). This formula is equivalent to ∃x(Bx), that is, to the assumption that there "exists" at least one x such that x is B. Hence the laws of subalternation presuppose that each concept B (which can occupy the position of the subject of a categorical form) is "non-empty". Leibniz discussed this problem of "existential import" in a paper entitled “Difficultates quaedam logicae” (GP 7, 211-217) where he distinguished two kinds of "existence": Actual existence of the individuals inhabiting our real world vs. merely possible subsistence of individuals “in the region of ideas”. According to Leibniz, logical inferences should always be evaluated with reference to “the region of ideas”, that is, the larger set of all possible individuals. Therefore all that is required for the validity of subalternation is that the term B occupying the position of the subject of a categorical form has a non-empty extension within the domain of possible individuals. As will turn out below (compare the definition of an extensional interpretation of L1 in section 3.1), this weak condition of "existential import" becomes tantamount to the assumption that the respective concept B is self-consistent!
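Leibniz's point can be illustrated with a small Python sketch (my own illustration, not Leibniz's notation) in which concepts are modeled extensionally as sets of possible individuals: subalternation holds whenever the subject term has a non-empty extension, but fails for an empty one.

```python
# Categorical forms, read extensionally over a domain of possible individuals.
def a_form(S, P):            # UA: 'Every S is P'
    return S <= P

def i_form(S, P):            # PA: 'Some S is P'
    return bool(S & P)

animal = {"socrates", "bucephalus"}
man = {"socrates"}           # non-empty extension: a self-consistent concept
round_square = set()         # empty extension: an impossible concept

# SUB 1 (SaP -> SiP) holds for the non-empty subject term...
assert a_form(man, animal) and i_form(man, animal)
# ...but fails for the empty one: the UA is vacuously true, the PA false.
assert a_form(round_square, animal) and not i_form(round_square, animal)
print("subalternation requires a non-empty subject term")
```

Since the domain consists of all possible individuals, non-emptiness here is exactly the self-consistency condition described above.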

b. The Semantics of “Characteristic Numbers”

In a series of papers of April 1679, Leibniz elaborated the idea of assigning natural numbers to the subject and predicate of a proposition a in such a way that the truth of a can be "read off" from these numbers. Apparently Leibniz hoped that mankind might one day discover the "true" characteristic numbers which would enable one to determine the truth of arbitrary propositions just by mathematical calculations! In the essays of April 1679, however, he pursued only the much more modest goal of defining appropriate arithmetical conditions for determining whether a syllogistic inference is logically valid. This task was guided by the idea that a term composed of concepts A and B gets assigned the product of the numbers assigned to the components:

For example, since ‘man’ is ‘rational animal’, if the number of ‘animal’, a, is 2, and the number of ‘rational’, r, is 3, then the number of ‘man’, m, will be the same as a*r, in this example 2*3 or 6. (LLP, 17).

Now a UA like ‘All gold is metal’ can be understood as maintaining that the concept ‘gold’ contains the concept ‘metal’ (because ‘gold’ can be defined as ‘the heaviest metal’). Therefore it seems obvious to postulate that in general ‘Every S is P’ is true if and only if s, the characteristic number assigned to S, contains p, the number assigned to P, as a prime factor; or, in other words, s must be divisible by p. In a first approach, Leibniz thought that the truth-conditions for the particular proposition ‘Some S are P’ might be construed similarly by requiring that either s can be divided by p or conversely p can be divided by s. But this was mistaken. After some trials and errors, Leibniz found the following more complicated solution:

(i)     To every term T, a pair of natural numbers <+t1;-t2> is assigned such that t1 and t2 are relatively prime, that is, they have no common divisor greater than 1.

(ii)    The UA ‘Every S is P’ is true (relative to the assignment (i)) if and only if +s1 is divisible by +p1 and -s2 is divisible by -p2.

(iii)   The UN ‘No S is P’ is true if and only if +s1 and -p2 have a common divisor or +p1 and -s2 have a common divisor.

(iv)   The PA ‘Some S is P’ is true if and only if condition (iii) is not satisfied.

(v)    The PN ‘Some S isn’t P’ is true if and only if condition (ii) is not satisfied.

(vi)   An inference from premises P1, P2 to the conclusion C is logically valid if and only if for each assignment of numbers satisfying condition (i), C becomes true whenever both P1 and P2 are true.

As was shown by Lukasiewicz (1951), this semantics satisfies the simple inferences of opposition, subalternation, and conversion, as well as all (and only) the syllogisms which are commonly regarded as valid. Leibniz tried to generalize this semantics for the entire algebra of concepts, but he never found a way to cope with negative concepts. This problem has only been solved by contemporary logicians; compare Sanchez-Mazas (1979), Sotirov (1999).
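Conditions (i)-(vi) lend themselves to a direct implementation. The following Python sketch is my own encoding; note that a bounded search over assignments can refute a mood but, strictly speaking, cannot prove validity, since condition (vi) quantifies over all assignments. It finds a refuting assignment for the invalid mood AOO of the Fourth Figure (discussed in the next subsection) while finding none for BARBARA within the same bound:

```python
from math import gcd
from itertools import product

def ua(s, p):   # (ii)  'Every S is P': +s1 divisible by +p1, -s2 by -p2
    return s[0] % p[0] == 0 and s[1] % p[1] == 0

def un(s, p):   # (iii) 'No S is P': +s1,-p2 or +p1,-s2 share a divisor
    return gcd(s[0], p[1]) > 1 or gcd(p[0], s[1]) > 1

def pa(s, p):   # (iv)  'Some S is P'
    return not un(s, p)

def pn(s, p):   # (v)   'Some S isn't P'
    return not ua(s, p)

def pairs(limit):   # (i) admissible assignments: relatively prime pairs
    return [(a, b) for a in range(1, limit) for b in range(1, limit)
            if gcd(a, b) == 1]

def counterexample(mood, limit=8):
    # (vi) bounded search: premises true, conclusion false
    for b, c, d in product(pairs(limit), repeat=3):
        prem1, prem2, concl = mood(b, c, d)
        if prem1 and prem2 and not concl:
            return b, c, d
    return None

barbara = lambda b, c, d: (ua(c, d), ua(b, c), ua(b, d))   # CaD, BaC -> BaD
aoo4    = lambda b, c, d: (ua(b, c), pn(c, d), pn(d, b))   # BaC, CoD -> DoB

print(counterexample(barbara))            # no refutation within the bound
print(counterexample(aoo4) is not None)   # a refuting assignment exists
```

Divisibility is transitive, which is why BARBARA can never be refuted under this semantics, in line with Lukasiewicz's result.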

c. Linear Diagrams and Euler-circles

In the paper “De Formae Logicae Comprobatione per Linearum ductus” probably written after 1686 (Cout, 292-321), Leibniz elaborated two methods for representing the content of categorical propositions. The UA, for example, ‘Every man is an animal’, can be represented either by two nested circles or by two horizontal lines which symbolize that the extension of B is contained in the extension of C (the subsequent graphics are scans from Cout, 292-295):

[Figure: nested circles and parallel lines representing the UA]

In the case of a UN like ‘No man is a stone’, one obtains the following diagrams which symbolize that the extension of B is set-theoretically disjoint from the extension of C:

[Figure: disjoint circles and lines representing the UN]

Similarly, the following circles and lines symbolize that, in the case of a PA like ‘Some men are wise’, the extensions of B and C overlap:

[Figure: overlapping circles and lines representing the PA]

Finally, in the case of a PN like ‘Some men are not ruffians’, the diagrams are meant to symbolize that the extension of B is partially disjoint from the extension of C, that is, that some elements of B are not elements of C:

[Figure: circles and lines representing the PN]

These diagrams may then be used to check whether a given inference is valid. Thus, for example, the validity of FERIO can be illustrated as follows:

[Figure: diagram illustrating the validity of FERIO]

Here the conclusion ‘Some D is not B’ follows from the premises ‘No C is B’ and ‘Some D is C’ because the elements of D which are in C can’t be elements of B. On the other hand, invalid syllogisms as, for example, the mood “AOO” of the Fourth Figure, can be refuted as follows:

[Figure: diagram refuting the mood AOO of the Fourth Figure]

As the diagram illustrates, the truth of the premises ‘Every B is C’ and ‘Some C is not D’ is compatible with a situation where the conclusion ‘Some D is not B’ is false, that is, where ‘Every D is B’ is true.

Of course, Leibniz’s diagrams, which were rediscovered in the 18th century by Euler (1768) among others, are not without problems. In particular, the circles for the PA and the PN are somewhat inaccurate because they basically visualize one and the same state of affairs, namely that (i) some B are C, (ii) some B are not C, and also (iii) some C are not B. The need to distinguish between different situations such as ((i) & (ii)) in contrast to ((i) & not (ii)) led to improvements of the method of "Euler-circles" as suggested by Venn (1881), Hamilton (1861), and others. Note, incidentally, that, in the GI, Leibniz himself improved the linear diagrams for the UA, PA and PN by drawing perpendicular lines symbolizing the “maximum”, that is, “the limits beyond which the terms cannot, and within which they can, be extended”. At the same time he used a double horizontal line to symbolize “the minimum, that is, that which cannot be taken away without affecting the relation of the terms” (LLP, 73-4, fn. 2).

3. Works on the Universal Calculus

In the period between, roughly, 1679 and 1690, Leibniz devoted much effort to generalizing the traditional logic to a “Universal Calculus”. At least three different calculi may be distinguished:

(a) The algebra of concepts which is provably equivalent to the Boolean algebra of sets;

(b)   A fragmentary quantificational system in which the quantifiers range over concepts but in which quantification over individuals may be introduced by definition;

(c) The so-called "Plus-Minus-calculus" which constitutes an abstract system of "real addition" and "subtraction". When this calculus is applied to concepts, it yields a weaker logic than the full algebra (a).

a. The Algebra of Concepts L1

The algebra of concepts grows out of the syllogistic framework by three achievements. First, Leibniz drops the informal quantifier expression ‘every’ and formulates the UA simply as “A is B” or, equivalently, as “A contains B”. This fundamental proposition shall here be symbolized as A∈B while its negation will be abbreviated as A∉B. Second, Leibniz introduces an operator of conceptual conjunction which combines two concepts A and B into AB (sometimes also written as “A+B”). Third, Leibniz allows the unrestricted use of conceptual negation which shall here be symbolized as ~A (“Not-A”). Hence, in particular, one can form the inconsistent concept A~A (“A Not-A”) and its tautological counterpart ~(A~A).

Identity or coincidence of concepts might be defined as mutual containment:

DEF 1            (A = B) =df (A∈B) ∧ (B∈A).

Alternatively, the algebra of concepts can be built up with ‘=’ as a primitive operator while ‘∈’ is defined by:

DEF 2            (A∈B) =df (A = AB).

Another important operator may be introduced by definition. Concept B is possible if B does not contain a contradiction like A~A:

DEF 3            P(B) =df (B∉A~A).

Leibniz uses many different locutions to express the self-consistency of a concept A. Instead of ‘A est possibile’ he often says ‘A est res’, ‘A est ens’; or simply ‘A est’. In the opposite case of an impossible concept he also calls A a "false term" (“terminus falsus”).

Identity can be axiomatized by the law of reflexivity in conjunction with the rule of substitutivity:

IDEN 1            A = A

IDEN 2            If A = B, then α[A] ↔ α[B].

By means of these principles, one easily derives the following corollaries:

IDEN 3            A = B → B = A

IDEN 4            A = B ∧ B = C → A = C

IDEN 5            A = B → ~A = ~B

IDEN 6            A = B → AC = BC.

The following laws express the reflexivity and the transitivity of the containment relation:

CONT 1          A∈A

CONT 2          A∈B ∧ B∈C → A∈C.

The most fundamental principle for the operator of conceptual conjunction says: “That A contains B and A contains C is the same as that A contains BC” (LLP, 58, fn. 4), that is,

CONJ 1          A∈BC ↔ A∈B ∧ A∈C.

Conjunction then satisfies the following laws:

CONJ 2          AA = A

CONJ 3          AB = BA

CONJ 4          AB∈A

CONJ 5          AB∈B.

The next operator is conceptual negation, ‘not’. Leibniz had serious problems with finding the proper laws governing this operator. From the tradition, he knew little more than the “law of double negation”:

NEG 1            ~~A = A

One important step towards a complete theory of conceptual negation was to transform the informal principle of contraposition, ‘Every A is B, therefore Every Not-B is Not-A’ into the following principle:

NEG 2            A∈B ↔ ~B∈~A.

Furthermore Leibniz discovered various variants of the “law of consistency”:

NEG 3            A ≠ ~A

NEG 4            A = B → A ≠ ~B.

NEG 5*           A∉~A

NEG 6*           A∈B → A∉~B.

In the GI, these principles are formulated as follows: “A proposition false in itself is ‘A coincides with Not-A’” (§ 11); “If A = B, then A ≠ Not-B” (§ 171); “It is false that B contains Not-B, that is, B doesn’t contain Not-B” (§ 43); and “A is B, therefore A isn’t Not-B” (§ 91).

Principles NEG 5* and NEG 6* have been marked with a ‘*’ in order to indicate that the laws as stated by Leibniz are not absolutely valid but have to be restricted to self-consistent terms:

NEG 5            P(A) → A∉~A

NEG 6            P(A) → (A∈B → A∉~B).

The following two laws describe some characteristic relations between the possibility-operator P and the other operators of L1:

POSS 1           A∈B ∧ P(A) → P(B)

POSS 2           A∈B ↔ ¬P(A~B).

All these principles were discovered by Leibniz himself, who thus provided an almost complete axiomatization of L1. As a matter of fact, the "intensional" algebra of concepts can be proven to be equivalent to Boole’s extensional algebra of sets provided that one adds the following counterpart of the “ex contradictorio quodlibet”:

NEG 7            (A~A)∈B.

As regards the relation of conceptual containment, A∈B, it is important to observe that Leibniz’s standard formulation ‘A contains B’ expresses the so-called "intensional" view of concepts as ideas, while we here want to develop an extensional interpretation in terms of the sets of individuals that fall under the concepts. Leibniz explained the mutual relationship between the "intensional" and the extensional point of view in the following passage from the “New Essays on Human understanding”:

The common manner of statement concerns individuals, whereas Aristotle’s refers rather to ideas or universals. For when I say Every man is an animal I mean that all the men are included among all the animals; but at the same time I mean that the idea of animal is included in the idea of man. ‘Animal’ comprises more individuals than ‘man’ does, but ‘man’ comprises more ideas or more attributes: one has more instances, the other more degrees of reality; one has the greater extension, the other the greater intension. (NE, Book IV, ch. XVII, § 8; compare the original French version in GP 5, 469).

If 'Int(A)’ and 'Ext(A)’ abbreviate the "intension" and the extension of a concept A, respectively, then the so-called law of reciprocity can be formalized as follows:

RECI               Int(A) ⊆ Int(B) ↔ Ext(A) ⊇ Ext(B).

From this it immediately follows that two concepts A, B have the same "intension" iff they have the same extension. This somewhat surprising result might seem to unveil an inadequacy of Leibniz’s conception. However, "intensionality" in the sense of traditional logic must not be mixed up with intensionality in the modern sense. Furthermore, in Leibniz’s view, the extension of a concept A is not just the set of actually existing individuals, but rather the set of all possible individuals that fall under concept A. Therefore one may define the concept of an extensional interpretation of L1 in accordance with Leibniz’s ideas as follows:

DEF 4      Let U be a non-empty set (the domain of all possible individuals), and let ϕ be a function such that ϕ(A) ⊆ U for each concept-letter A. Then ϕ is an extensional interpretation of L1 if and only if:

(1) ϕ(A∈B) = true iff ϕ(A) ⊆ ϕ(B);

(2) ϕ(A=B) = true iff ϕ(A) = ϕ(B);

(3) ϕ(AB) = ϕ(A) ∩ ϕ(B);

(4) ϕ(~A) = complement of ϕ(A);

(5) ϕ(P(A)) = true iff ϕ(A) ≠ ∅.

Conditions (1) and (2) are straightforward consequences of RECI. Condition (3) also is trivial since it expresses that an individual x belongs to the extension of AB just in case that x belongs to the extension of both concepts (and hence to their intersection). According to condition (4), the extension of the negative concept ~A is just the set of all individuals which do not fall under the concept A. Condition (5) says that a concept A is possible if and only if it has a non-empty extension.

At first sight, this requirement appears inadequate, since there are certain concepts – such as that of a unicorn – which happen to be empty but which may nevertheless be regarded as possible, that is, not involving a contradiction. However, the universe of discourse underlying the extensional interpretation of L1 does not consist of actually existing objects only, but instead comprises all possible individuals. Therefore the non-emptiness of the extension of A is both necessary and sufficient for guaranteeing the self-consistency of A. Clearly, if A is possible, then there must be at least one possible individual x that falls under concept A.
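Under DEF 4, the laws of L1 can be checked mechanically. The sketch below (my own encoding) runs through all subsets of a small domain of possible individuals and verifies NEG 2, POSS 2, and CONJ 1 for the corresponding extensional interpretation:

```python
from itertools import combinations

U = frozenset({1, 2, 3})                      # domain of possible individuals

def subsets(u):                               # all candidate extensions
    xs = list(u)
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

def cont(A, B): return A <= B                 # (1)  A∈B
def conj(A, B): return A & B                  # (3)  AB
def neg(A):     return U - A                  # (4)  ~A
def poss(A):    return bool(A)                # (5)  P(A)

for A in subsets(U):
    for B in subsets(U):
        assert cont(A, B) == cont(neg(B), neg(A))            # NEG 2
        assert cont(A, B) == (not poss(conj(A, neg(B))))     # POSS 2
        for C in subsets(U):
            assert cont(A, conj(B, C)) == (cont(A, B) and cont(A, C))  # CONJ 1
print("NEG 2, POSS 2, CONJ 1 hold in every extensional interpretation over U")
```

The check is exhaustive over the chosen domain; the same pattern verifies the remaining laws of L1.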

It has often been noted that Leibniz’s logic of concepts lacks the operator of disjunction. Although this is by and large correct, it doesn’t imply any defect or any incompleteness of the system L1 because the operator A∨B may simply be introduced by definition:

DISJ 1            A∨B =df ~(~A ~B).

On the background of the above axioms of negation and conjunction, the standard laws for disjunction, for example

DISJ 2            A∈(A∨B)

DISJ 3            B∈(A∨B)

DISJ 4            A∈C ∧ B∈C → (A∨B)∈C,

then become provable (Lenzen (1984)).
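That DISJ 2-4 indeed follow can also be confirmed extensionally: under DEF 4, the defined operator ~(~A ~B) is simply set union, and a brute-force check over a small domain (my own sketch) confirms the three laws:

```python
from itertools import combinations

U = frozenset({1, 2, 3})

def subsets(u):
    xs = list(u)
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

def neg(A): return U - A
def disj(A, B): return neg(neg(A) & neg(B))   # DISJ 1: A∨B := ~(~A ~B)

for A in subsets(U):
    for B in subsets(U):
        assert disj(A, B) == A | B                        # extension = union
        assert A <= disj(A, B) and B <= disj(A, B)        # DISJ 2, DISJ 3
        for C in subsets(U):
            if A <= C and B <= C:
                assert disj(A, B) <= C                    # DISJ 4
print("DISJ 2-4 follow from DISJ 1 extensionally")
```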

b. The Quantificational System L2

Leibniz’s quantifier logic L2 emerges from L1 by the introduction of so-called “indefinite concepts”. These concepts are symbolized by letters from the end of the alphabet X, Y, Z ..., and they function as quantifiers ranging over concepts. Thus, in the GI, Leibniz explains:

(16) An affirmative proposition is ‘A is B’ or ‘A contains B’ [...]. That is, if we substitute the value for A, one obtains ‘A coincides with BY’. For example, ‘Man is an animal’, that is, ‘Man’ is the same as ‘a ... animal’ (namely, ‘Man’ is ‘rational animal’). For by the sign ‘Y’ I mean something undetermined, so that ‘BY’ is the same as ‘Some B’, or ‘A ... animal’ [...], or ‘A certain animal’. So ‘A is B’ is the same as ‘A coincides with some B’, that is, ‘A = BY’.

With the help of the modern symbol for the existential quantifier, the latter law can be expressed more precisely as follows:

CONT 3          A∈B ↔ ∃Y(A = BY).

As Leibniz himself noted, the formalization of the UA according to CONT 3 is provably equivalent to the simpler representation according to DEF 2:

It is noteworthy that for ‘A = BY’ one can also say ‘A = AB’ so that there is no need to introduce a new letter. (Cout, 366; compare also LLP, 56, fn. 1.)

On the one hand, according to the rule of existential generalization,

EXIST 1          If α[A], then ∃Yα[Y],

A = AB immediately entails ∃Y(A = YB). On the other hand, if there exists some Y such that A = YB, then according to IDEN 6, AB = YBB, that is, AB = YB and hence (by the premise A = YB) AB = A. (This proof, incidentally, was given by Leibniz himself in the important paper “Primaria Calculi Logici Fundamenta” of August 1690; Cout, 235).
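The equivalence of CONT 3 and DEF 2 can likewise be checked extensionally (conjunction as intersection, containment as set inclusion). In the sketch below (my own encoding), the witness for the existential quantifier is, as Leibniz notes, A itself:

```python
from itertools import combinations

U = frozenset({1, 2, 3})

def subsets(u):
    xs = list(u)
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

for A in subsets(U):
    for B in subsets(U):
        exists_Y = any(A == B & Y for Y in subsets(U))    # ∃Y(A = BY)
        assert (A <= B) == exists_Y                       # CONT 3
        if A <= B:
            assert A == A & B                             # DEF 2: Y := A works
print("CONT 3 and DEF 2 coincide extensionally")
```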

Next observe that Leibniz often formalized the PA ‘Some A is B’ by means of the indefinite concept Y as ‘YA∈B’. In view of CONT 3, this representation might be transformed into the (elliptic) equation YA = ZB. However, both formalizations are somewhat inadequate because they are easily seen to be theorems of L2! According to CONJ 4, BA contains B, hence by EXIST 1:

CONJ 6          ∃Y(YA∈B).

Similarly, since, according to CONJ 3, AB = BA, a twofold application of EXIST 1 yields:

CONJ 7          ∃Y∃Z(YA = BZ).

These tautologies, of course, cannot adequately represent the PA which for an appropriate choice of concepts A and B may become false! In order to resolve these difficulties, consider a draft of a calculus probably written between 1686 and 1690 (compare Cout, 259-261, and the text-critical edition in AE, VI, 4, # 171), where Leibniz proved the principle:

NEG 8*           A∉B ↔ ∃Y(YA∈~B).

On the one hand, it is interesting to see that after first formulating the right hand side of the equivalence, "as usual", in the elliptic way ‘YA is Not-B’, Leibniz later paraphrased it by means of the explicit quantifier expression “there exists a Y such that YA is Not-B”. On the other hand, Leibniz discovered that NEG 8* has to be improved by requiring more exactly that there exists a Y such that YA contains ~B and YA is possible, that is, Y must be compatible with A:

NEG 8            A∉B ↔ ∃Y(P(YA) ∧ YA∈~B).

Leibniz’s proof of this important law is quite remarkable:

(18) […] to say ‘A isn’t B’ is the same as to say ‘there exists a Y such that YA is Not-B’. If ‘A is B’ is false, then ‘A Not-B’ is possible by [POSS 2]. ‘Not-B’ shall be called ‘Y’. Hence YA is possible. Hence YA is Not-B. Therefore we have shown that, if it is false that A is B, then QA is Not-B. Conversely, let us show that if QA is Not-B, ‘A is B’ is false. For if ‘A is B’ would be true, ‘B’ could be substituted for ‘A’ and we would obtain ‘QB is Not-B’ which is absurd. (Cout, 261)

To conclude the sketch of L2, let us consider some of the rare passages where an indefinite concept functions as a universal quantifier. In the above quoted draft (Cout, 260), Leibniz put forward principle “(15) ‘A is B’ is the same as ‘If L is A, it follows that L is B’”:

CONT 4          A∈B ↔ ∀Y(Y∈A → Y∈B).

Furthermore, in § 32 GI, Leibniz at least vaguely recognized that, just as A∈B (according to CONT 3) is equivalent to ∃Y(A = YB), so the negation A∉B means that, for any indefinite concept Y, A ≠ BY:

CONT 5          A∉B ↔ ∀Y(A ≠ YB).

According to AE, VI, 4, 753, Leibniz had written: “(32) Propositio Negativa. A non continet B, seu A esse (continere) B falsum est, seu A non coincidit BY”. Unfortunately, the last passage ‘seu A non coincidit BY’ had been overlooked by Couturat and it is therefore also missing in Parkinson’s translation in LLP! Anyway, with the help of ‘∀’, one can formalize Leibniz’s conception of individual concepts as maximally-consistent concepts as follows:

IND 1             Ind(A) ↔df P(A) ∧ ∀Y(P(AY) → A∈Y).

Thus A is an individual concept iff A is self-consistent and contains every concept Y which is compatible with A. The underlying idea of the completeness of individual concepts had been formulated in § 72 GI as follows:

So if BY is ["being"], and the indefinite term Y is superfluous, that is, in the way that ‘a certain Alexander the Great’ and ‘Alexander the Great’ are the same, then B is an individual. If the term BA is ["being"] and if B is an individual, then A will be superfluous; or if BA=C, then B=C (LLP 65, § 72 + fn. 1; for a closer interpretation of this idea, see Lenzen (2004c)).

Note, incidentally, that IND 1 might be simplified by requiring that, for each concept Y, A either contains Y or contains ~Y:

IND 2             Ind(A) ↔ ∀Y(A∈~Y ↔ A∉Y).

As a corollary it follows that the invalid principle

NEG 9*          A∉B → A∈~B,

which Leibniz again and again had considered as valid, in fact holds only for individual concepts:

NEG 9            Ind(A) → (A∉B → A∈~B).

Already in the “Calculi Universalis Investigationes” of 1679, Leibniz had pointed out:

…If two propositions are given with exactly the same singular [!] subject, where the predicate of the one is contradictory to the predicate of the other, then necessarily one proposition is true and the other is false. But I say: exactly the same [singular] subject, for example, ‘This gold is a metal’, ‘This gold is a not-metal.’ (AE VI, 4, 217-218).

The crucial issue here is that NEG 9* holds only for an individual concept like, for example, ‘Apostle Peter’, but not for a general concept such as ‘man’. The text-critical apparatus of AE reveals that Leibniz was somewhat diffident about this decisive point. He began to illustrate the above rule by the correct example “if I say ‘Apostle Peter was a Roman bishop’, and ‘Apostle Peter was not a Roman bishop’” and then went on, erroneously, to generalize this law for arbitrary terms: “or if I say ‘Every man is learned’ ‘Every man is not learned’.” Finally, he noticed the error himself: “Here it becomes evident that I am mistaken, for this rule is not valid.” The long story of Leibniz’s cardinal mistake of mixing up ‘A isn’t B’ and ‘A is not-B’ is analyzed in detail in Lenzen (1986).
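Since L1 is deductively equivalent to the Boolean algebra of sets, the failure of NEG 9* for general concepts, and its validity for individual concepts, can be checked mechanically in a small finite model. The following Python sketch is only an illustration, not Leibniz's own procedure: it reads A∈B extensionally as set inclusion, ~ as Boolean complement, and individual concepts as the atoms of the algebra (here, singletons).

```python
from itertools import combinations

U = frozenset({0, 1, 2})
concepts = [frozenset(c) for r in range(len(U) + 1) for c in combinations(U, r)]

def contains(a, b):        # A ∈ B, read here extensionally as set inclusion
    return a <= b

def neg(a):                # ~A as Boolean complement
    return U - a

def individual(a):         # Ind(A): an atom of the algebra, here a singleton
    return len(a) == 1

# NEG 9* fails for general concepts: A may contain neither B nor ~B ...
counterexamples = [(a, b) for a in concepts for b in concepts
                   if not contains(a, b) and not contains(a, neg(b))]
assert counterexamples     # e.g. A = {0, 1}, B = {0}

# ... but NEG 9 holds once A is an individual concept:
assert all(contains(a, neg(b))
           for a in concepts if individual(a)
           for b in concepts if not contains(a, b))
```

The counterexample pair A = {0, 1}, B = {0} corresponds to a general concept such as ‘man’, which contains neither ‘learned’ nor ‘not-learned’.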

There are many different ways to represent the categorical forms by formulas of L1 or L2. The most straightforward formalization would be the following "homogeneous" schema in terms of conceptual containment:

UA   A∈B                                    UN   A∈~B

PA   A∉~B                                  PN   A∉B.

The "homogeneity" consists in two facts:

(a)   The formula for the UN is obtained from that of the UA by replacing the predicate B with its negation, ~B. This is the formal counterpart of the traditional principle of obversion according to which, for example, ‘No A is B’ is equivalent to ‘Every A is not-B’.

(b)  In accordance with the traditional laws of opposition, the formulas for the particular propositions are just taken as the negations of corresponding universal propositions.

In view of DEF 2, the first schema may be transformed into

UA   A = AB                                UN   A = A~B

PA   A ≠ A~B                               PN   A ≠ AB.

Similarly, by means of the fundamental law POSS 2, one obtains

UA   ¬P(A~B)                              UN   ¬P(AB)

PA   P(AB)                                   PN   P(A~B).

Furthermore, with the help of indefinite concepts, one can formulate, for example,

UA   ∃Y(A = YB)                          UN   ∃Y(A = Y~B)

PA   ∀Y(A ≠ Y~B)                        PN   ∀Y(A ≠ YB).

Leibniz used to work with various elements of these representations, often combining them into complicated inhomogeneous schemata such as:

“A = YB           is the UA, where the adjunct Y is like an additional unknown term: ‘Every man’ is the same as ‘A certain animal’.

YA = ZB           is the PA. ‘Some man’ or ‘Man of a certain kind’ is the same as ‘A certain learned’.

A = Y not-B      [is the UN] No man is a stone, that is, Every man is a not-stone, that is, ‘Man’ and ‘A certain not-stone’ coincide.

YA = Z not-B    [is the PN] A certain man isn’t learned or is not-learned, that is, ‘A certain man’ and ‘A certain not-learned’ coincide” (Cout, 233-234).

But the representations of PA and PN of this schema are inadequate because the formulas ‘[∃Y∃Z](YA = ZB)’ and ‘[∃Y∃Z](YA = Z~B)’ are theorems of L2! These conditions may, however, easily be corrected by adding the requirement that YA is self-consistent:

UA   ∃Y(A = YB)                                  UN   ∃Y(A = Y~B)

PA   ∃Y∃Z(P(YA) ∧ YA = ZB)        PN   ∃Y∃Z(P(YA) ∧ YA = Z~B).

Already in the paper “De Formae Logicae Comprobatione per Linearum ductus”, Leibniz had made numerous attempts to prove the basic laws of syllogistic with the help of these schemata. He continued these efforts in two interesting fragments of August 1690 dealing with “The Primary Bases of a Logical Calculus” (LLP, 90-92 and 93-94; compare also the closely related essay “Principia Calculi rationalis” in Cout, 229-231 and the untitled fragments Cout, 259-261 and 261-264). In the end, however, Leibniz remained unsatisfied with his attempts.

To be sure, a complete proof of the theory of the syllogism could easily be obtained by drawing upon the full list of "axioms" for L1 and L2 as stated above. But Leibniz more ambitiously tried to find proofs which presuppose only a small number of "self-evident" laws for identity. In particular, he was not willing to adopt principle

(17) Not-B = not-B not-(AB), that is, Not-B contains Not-AB, or Not-B is not-AB

as a fundamental axiom which therefore need not itself be demonstrated. Although Leibniz realized that (17) is equivalent to the law of contraposition repeated in the subsequent §

(19) ‘A = AB’ and ‘Not-B = Not-B Not-A’ are equivalent. This is conversion by contraposition (Cout, 422),

he still thought it necessary to prove this "axiom": “This remains to be demonstrated in our calculus”!

c. The Plus-Minus-Calculus

The so-called Plus-Minus-Calculus was mainly developed in the paper “Non inelegans specimen demonstrandi in abstractis” of around 1686/7 (compare GP 7, ## XIX, XX and the text-critical edition in AE VI, 4, ## 177, 178; English translations are provided in LLP, 122-130 + 131-144). Strictly speaking, the Plus-Minus-Calculus is not a logical calculus but rather a much more general calculus which admits of different applications and interpretations. In its abstract form, it should be regarded as a theory of set-theoretical containment, set-theoretical "addition", and set-theoretical "subtraction". Unlike modern systems of set theory, however, Leibniz’s calculus has no counterpart of the relation ‘x is an element of A’; and it also lacks the operator of set-theoretical "negation", that is, set-theoretical complement! The complement of set A might, though, be defined with the help of the subtraction operator as (U-A) where the constant ‘U’ designates the universe of discourse. But, in Leibniz’s calculus, this additional logical element is lacking.

Leibniz’s drafts exhibit certain inconsistencies which result from the experimental character of developing the laws for "real" addition and subtraction in close analogy to the laws of arithmetical addition and subtraction. The genesis of this idea is described in detail in Lenzen (1989). The inconsistencies might be removed basically in two ways. First, one might restrict A-B to the case where B is contained in A; such a conservative reconstruction of the Plus-Minus-Calculus has been developed in Dürr (1930). The second, more rewarding alternative consists in admitting the operation of "real subtraction" A-B also if B is not contained in A. In any case, however, one has to give up Leibniz’s idea that subtraction might yield "privative" entities which are "less than nothing".

In the following reconstruction, Leibniz’s symbols ‘+’ for the addition (that is, union) and ‘-’ for the subtraction of sets are adopted, while his informal expressions ‘Nothing’ (“nihil”) and ‘is in’ (“est in”) are replaced by the modern symbols ‘∅’ and ‘⊆’. Set-theoretical identity may be treated either as a primitive or as a defined operator. In the former case, inclusion can be defined either by A⊆B =df ∃Y(A+Y = B) or, more simply, as A⊆B =df (A+B = B). If, conversely, inclusion is taken as primitive, identity can be defined as mutual inclusion: A=B =df (A⊆B) ∧ (B⊆A) (see, for example, Definition 3, Propositions 13 and 14, and Proposition 17 in LLP, 131-144).

Set-theoretical addition is symmetric, or, as Leibniz puts it, “transposition makes no difference here” (LLP, 132):

PLUS 1           A+B = B+A.

The main difference between arithmetical addition and "real addition" is that the addition of one and the same "real" thing (or set of things) doesn’t yield anything new:

PLUS 2           A+A = A.

As Leibniz puts it (LLP, 132): “A+A = A […] that is, repetition changes nothing. (For although four coins and another four coins are eight coins, four coins and the same four already counted are not)”.

The "real nothing", that is, the empty set ∅, is characterized as follows: “It does not matter whether Nothing is put or not, that is, A+Nih. = A” (Cout, 267):

NIHIL 1           A+∅ = A.

In view of the relation (A⊆B) ↔ (A+B = B), this law can be transformed into:

NIHIL 2           ∅⊆A.

"Real" subtraction may be regarded as the converse operation of addition: “If the same is put and taken away [...] it coincides with Nothing. That is, A [...] - A [...] = N” (LLP, 124, Axiom 2):

MINUS 1         A-A = ∅.

Leibniz also considered the following principles, which in a stronger form express that subtraction is the converse of addition:

MINUS 2*       (A+B)-B = A

MINUS 3*       (A+B) = C → C-B = A.

But he soon recognized that these laws do not hold in general but only in the special case where the sets A and B are “uncommunicating” (Cout, 267, # 29: “Therefore if A+B = C, then A = C-B […] but it is necessary that A and B have nothing in common”.) The new operator of “communicating” sets has to be understood as follows:

If some term, M, is in A, and the same term is in B, this term is said to be ‘common’ to them, and they will be said to be ‘communicating’. (LLP, 123, Definition 4)

Hence two sets A and B have something in common if and only if there exists some set Y such that Y⊆A and Y⊆B. Now since, trivially, the empty set is included in every set A (NIHIL 2), one has to add the qualification that Y is not empty:

COMMON 1     Com(A,B) ↔df ∃Y(Y≠∅ ∧ Y⊆A ∧ Y⊆B).

The necessary restriction of MINUS 2* and MINUS 3* can then be formalized as follows:

MINUS 2         ¬Com(A,B) → ((A+B)-B = A)

MINUS 3         ¬Com(A,B) ∧ (A+B = C) → (C-B = A).

Similarly, Leibniz recognized (LLP, 130) that from an equation A+B = A+C, A may be subtracted on both sides provided that C is “uncommunicating” both with A and with B, that is,

MINUS 4         ¬Com(A,B) ∧ ¬Com(A,C) → (A+B = A+C → B=C).

Furthermore Leibniz discovered that the implication in MINUS 2 may be converted (and hence strengthened into a biconditional). Thus one obtains the following criterion: Two sets A, B are “uncommunicating” if and only if the result of first adding and then subtracting B coincides with A. Inserting negations on both sides of this equivalence one obtains:

COMMON 2     Com(A,B) ↔ ((A+B)-B) ≠ A.

Whenever two sets A, B are communicating or “have something in common”, the intersection of A and B, in modern symbols A∩B, is not empty (LLP, 127, Case 2 of Theorem IX: “Let us assume meanwhile that E is everything which A and G have in common – if they have something in common, so that if they have nothing in common, E = Nothing”), that is,

COMMON 3     Com(A,B) ↔ A∩B ≠ ∅.

Furthermore, “What has been subtracted and the remainder are un­communicating” (LLP, 128, Theorem X), that is,

COMMON 4     ¬Com(A-B,B).

Leibniz further discovered the following formula which allows one to "calculate" the intersection or “commune” of A and B by a series of additions and subtractions: A∩B = B-((A+B)-A). In a small fragment (Cout, 250) he explained:

Suppose you have A and B and you want to know if there exists some M which is in both of them. Solution: combine those two into one, A+B, which shall be called L […] and from L one of the constituents, A, shall be subtracted […] let the rest be N; then, if N coincides with the other constituent, B, they have nothing in common. But if they do not coincide, they have something in common which can be found by subtracting the rest N [...] from B […] and there remains M, the commune of A and B, which was looked for.
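Because the Plus-Minus-Calculus admits a set-theoretical interpretation, the laws above can be tested exhaustively over all subsets of a small universe. The following Python sketch is an illustrative reconstruction, not Leibniz's own procedure: it models ‘+’ as union, ‘-’ as set difference, and Com as non-empty intersection, and verifies PLUS 2, NIHIL 1, MINUS 1-4, COMMON 2, COMMON 4, and the formula for the “commune”.

```python
from itertools import combinations

U = {0, 1, 2, 3}
subsets = [frozenset(c) for r in range(len(U) + 1) for c in combinations(U, r)]
empty = frozenset()

plus  = lambda a, b: a | b            # "real addition": union
minus = lambda a, b: a - b            # "real subtraction": set difference
com   = lambda a, b: bool(a & b)      # Com(A, B): a non-empty common part

for a in subsets:
    assert plus(a, a) == a                       # PLUS 2: repetition changes nothing
    assert plus(a, empty) == a                   # NIHIL 1
    assert minus(a, a) == empty                  # MINUS 1
    for b in subsets:
        if not com(a, b):
            assert minus(plus(a, b), b) == a     # MINUS 2
        assert com(a, b) == (minus(plus(a, b), b) != a)   # COMMON 2
        assert not com(minus(a, b), b)           # COMMON 4
        assert minus(b, minus(plus(a, b), a)) == a & b    # the "commune" A∩B
        for c in subsets:
            if not com(a, b) and not com(a, c) and plus(a, b) == plus(a, c):
                assert b == c                    # MINUS 4
```

Note that without the non-communication proviso MINUS 2* indeed fails in this model: for A = {0, 1} and B = {1, 2}, (A+B)-B = {0} ≠ A.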

4. Leibniz’s Calculus of Strict Implication

It is a characteristic feature of Leibniz’s logic that when he states and proves the laws of concept logic, he takes the requisite rules and laws of propositional logic for granted. Once the former have been established, however, the latter can be obtained from the former by observing a strict analogy between concepts and propositions which allows one to re-interpret the conceptual connectives as propositional connectives. Note, incidentally, that in the 19th century George Boole in roughly the same way first presupposed propositional logic to develop his algebra of sets, and only afterwards derived the propositional calculus out of the set-theoretical calculus. While Boole thus arrived at the classical, two-valued propositional calculus, Leibniz’s approach instead yields a modal logic of strict implication.

Leibniz outlined a simple, ingenious method to transform the algebra of concepts into an algebra of propositions. Already in the “Notationes Generales” written between 1683 and 1685 (AE VI, 4, # 131), he pointed out the parallel between the containment relation among concepts and the implication relation among propositions. Just as the simple proposition ‘A is B’ is true “when the predicate [B] is contained in the subject [A]”, so a conditional proposition ‘If A is B, then C is D’ is true “when the consequent is contained in the antecedent” (AE VI, 4, 551). In later works Leibniz compressed this idea into formulations such as “a proposition is true whose predicate is contained in the subject or more generally whose consequent is contained in the antecedent” (Cout, 401). The most detailed explanation of this idea was given in §§ 75, 137 and 189 of the GI:

If, as I hope, I can conceive all propositions as terms, and hypotheticals as categoricals and if I can treat all propositions universally, this promises a wonderful ease in my symbolism and analysis of concepts, and will be a discovery of the greatest importance […]

We have, then, discovered many secrets of great importance for the analysis of all our thoughts and for the discovery and proof of truths. We have discovered [...] how absolute and hypothetical truths have one and the same laws and are contained in the same general theorems […]

Our principles, therefore, will be these [...] Sixth, whatever is said of a term which contains a term can also be said of a proposition from which another proposition follows (LLP, 66, 78, and 85).

To conceive all propositions in analogy to concepts means in particular that the conditional ‘If a then b’ will be logically treated like the containment relation between concepts, ‘A contains B’. Furthermore, as Leibniz explained elsewhere, negations and conjunctions of propositions are to be conceived just as negations and conjunctions of concepts. Thus one obtains the following mapping of the primitive formulas of the algebra of concepts into formulas of the algebra of propositions:

A∈B              α → β

A=B               α ↔ β

~A                 ¬α

AB                 α∧β

P(A)              ◊α

As Leibniz himself explained, the fundamental law POSS 2 holds not only for the containment-relation between concepts but also for the entailment relation between propositions:

‘A contains B’ is a true proposition if ‘A non-B’ entails a contradiction. This applies both to categorical and to hypothetical propositions (Cout, 407).

Hence A∈B ↔ ¬P(A~B) may be “translated” into (α→β) ↔ ¬◊(α∧¬β). This formula unmistakably shows that Leibniz’s conditional is not a material but rather a strict implication. As Rescher already noted in (1954: 10), Leibniz’s account provides a definition of “entailment in terms of negation, conjunction, and the notion of possibility”, which coincides with the modern definition of strict implication put forward, for example, in Lewis & Langford (1932: 124): “The relation of strict implication can be defined in terms of negation, possibility, and product [...] Thus ‘p implies q’ [...] is to mean ‘It is false that it is possible that p should be true and q false’”. This definition is almost identical with Leibniz’s explanation in “Analysis Particularum”: “Thus if I say ‘If L is true it follows that M is true’, this means that one cannot suppose at the same time that L is true and that M is false” (AE VI, 4, 656).

Given the above “translation”, the basic axioms and theorems of the algebra of concepts can be transformed into the following laws of the algebra of propositions:

IMPL 1            α → α

IMPL 2            (α → β) ∧ (β→γ) → (α→γ)

IMPL 3            (α → β) ↔ (α ↔ α∧β)

CONJ 1          (α → β∧γ) ↔ ((α→β) ∧ (α→γ))

CONJ 2          α∧β → α

CONJ 3          α∧β → β

CONJ 4          α∧α ↔ α

CONJ 5          α∧β ↔ β∧α

NEG 1            ¬¬α ↔ α

NEG 2            ¬(α ↔ ¬α)

NEG 3            (α → β) ↔ (¬β→ ¬α)

NEG 4            ¬α → ¬(α∧β)

NEG 5            ◊α → ((α → β) → ¬(α → ¬β))

NEG 6            (α ∧¬α) → β

POSS 1           (α → β) ∧ ◊α → ◊β

POSS 2           (α → β) ↔ ¬◊(α ∧ ¬β)

POSS 3           ¬◊(α ∧ ¬α)
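These laws can be verified in a miniature possible-cases model in the spirit of Leibniz's own semantics: a proposition is the set of cases in which it is true, ◊α holds if α is true in some case, and the strict implication α → β holds if every α-case is a β-case. The following Python sketch is a modern illustration, not Leibniz's notation; encoding compound modal propositions as TOP (all cases) or BOT (no case) is a convenience of the model.

```python
from itertools import combinations

W = frozenset({0, 1, 2})                     # the "cases" (possible worlds)
props = [frozenset(c) for r in range(len(W) + 1) for c in combinations(W, r)]

TOP, BOT = W, frozenset()
neg  = lambda a: W - a
conj = lambda a, b: a & b
diam = lambda a: TOP if a else BOT           # ◊α: α is true in some case
impl = lambda a, b: TOP if a <= b else BOT   # strict implication: ¬◊(α ∧ ¬β)
equi = lambda a, b: conj(impl(a, b), impl(b, a))
valid = lambda a: a == TOP

for a in props:
    for b in props:
        assert valid(impl(conj(a, neg(a)), b))                        # NEG 6
        assert valid(equi(impl(a, b), neg(diam(conj(a, neg(b))))))    # POSS 2
        assert valid(impl(conj(impl(a, b), diam(a)), diam(b)))        # POSS 1
        assert valid(equi(impl(a, b), impl(neg(b), neg(a))))          # NEG 3
        assert valid(impl(diam(a),
                          impl(impl(a, b), neg(impl(a, neg(b))))))    # NEG 5
```

The model also makes the difference from material implication visible: impl returns BOT for any pair of propositions where some α-case fails to be a β-case, whereas the material conditional would still be true in the remaining cases.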

5. Works on Modal Logic

When people credit Leibniz with having anticipated “Possible-worlds-semantics”, they mostly refer to his philosophical writings, in particular to the “Nouveaux Essais sur l’entendement humain” (NE) and to the metaphysical speculations of the “Essais de theodicée” (Theo) of 1710. Leibniz argues there that while there are infinitely many ways in which God might have created the world, the real world that God finally decided to create is the best of all possible worlds. As a matter of fact, however, Leibniz has much more to offer than this over-optimistic idea (which was rightly criticized by Voltaire and, for example, in part 2 of chapter 8 of Hume’s “An Enquiry concerning Human Understanding”). In what follows we briefly consider some of Leibniz’s early logical works where

(1)  the idea that a necessary proposition is true in each possible world (while a possible proposition is true in at least one possible world) is formally elaborated, and where

(2)  the close relation between alethic and deontic modalities is unveiled.

a. Possible-Worlds-Semantics for Alethic Modalities

The fundamental logical relations between necessity, ☐, possibility, ◊, and impossibility can be expressed, for example, by:

NEC 1            ☐(α) ↔ ¬◊(¬α)

NEC 2            ¬◊(α) ↔ ☐(¬α).

These laws were familiar already to logicians long before Leibniz. However, Leibniz "proved" these relations by means of an admirably clear analysis of modal operators in terms of “possible cases”, that is, possible worlds:

Possible is whatever can happen or what is true in some cases

Impossible is whatever cannot happen or what is true in no […] case

Necessary is whatever cannot not happen or what is true in every […] case

Contingent is whatever can not happen or what is [not] true in some case. (AE VI, 1, 466).

As this quotation shows, Leibniz uses the notion of contingency not in the modern sense of ‘neither necessary nor impossible’ but as the simple negation of ‘necessary’. The quoted analysis of the truth-conditions for modal propositions entails the validity not only of NEC 1, 2, but also of:

NEC 3            ☐α → ◊(α)

NEC 4            ¬◊(α) → ¬☐(α).

Leibniz "proves" these laws by reducing them to corresponding laws for quantifiers such as: If α is true in each case, then α is true in at least one case. In the “Modalia et Elementa Juris Naturalis” of around 1679, Leibniz mentions NEC 3 and NEC 4 in passing: “Since everything which is necessary is possible, so everything that is impossible is contingent, that is, can fail to happen” (AE VI, 4, 2759). A very elliptical "proof" of these laws was already sketched in the “Elementa juris naturalis” of 1669/70 (AE VI, 1, 469).

It cannot be overlooked, however, that Leibniz’s semi-formal truth conditions, even when combined with his later views on possible worlds, fail to come up to the standards of modern possible worlds semantics, since nothing in Leibniz’s considerations corresponds to an accessibility relation among worlds.
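Leibniz's quantifier analysis of the modal operators can nevertheless be rendered directly: a proposition is modelled by the set of cases in which it is true, and, in line with the caveat just mentioned, there is only one fixed set of cases and no accessibility relation. The following Python sketch is a modern illustration which verifies NEC 1-4, reading NEC 4, with the surrounding quotation, as "the impossible is contingent", that is, ¬◊α → ¬☐α.

```python
from itertools import combinations

cases = frozenset({0, 1, 2})
props = [frozenset(c) for r in range(len(cases) + 1)
         for c in combinations(cases, r)]

nec  = lambda a: a == cases        # necessary: true in every case
poss = lambda a: bool(a)           # possible: true in some case
neg  = lambda a: cases - a

for a in props:
    assert nec(a) == (not poss(neg(a)))     # NEC 1
    assert (not poss(a)) == nec(neg(a))     # NEC 2
    if nec(a):
        assert poss(a)                      # NEC 3
    if not poss(a):
        assert not nec(a)                   # NEC 4: the impossible is contingent
```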

b. Basic Principles of Deontic Logic

As has already been pointed out by Schepers (1972) and Kalinowski (1974), Leibniz saw very clearly that the logical relations between the deontic modalities obligatory, permitted and forbidden exactly mirror the corresponding relations between necessary, possible and impossible, and that therefore all laws and rules of alethic modal logic may be applied to deontic logic as well.

Just like ‘necessary’, ‘contingent’, ‘possible’ and ‘impossible’ are related to each other, so also are ‘obligatory’, ‘not obligatory’, ‘permitted’, and ‘forbidden’ (AE VI, 4, 2762).

This structural analogy goes hand in hand with the important discovery that the deontic notions can be defined by means of the alethic notions plus the additional “logical” constant of a morally perfect man (“vir bonus”). Such a virtuous man is characterized by the requirements that he strictly obeys all laws, always acts in such a way that he does no harm to anybody, and is benevolent to all other people. Given this understanding of a “vir bonus”, Leibniz explains:

Obligatory is what is necessary for the virtuous man as such.

Not obligatory is what is contingent for the virtuous man as such.

Permitted is what is possible for the virtuous man as such.

Forbidden is what is impossible for the virtuous man as such (Grua, 605).

If we express the restriction of the modal operators ☐ and ◊ to the virtuous man by means of a subscript 'v', these definitions can be formalized as follows (where the letter ‘E’, reminiscent of the German ‘erlaubt’, is used instead of 'P' for 'permitted' in order to avoid confusion with the operator of possibility):

DEON 1          O(α) ↔ ☐v(α)

DEON 2          E(α) ↔ ◊v(α)

DEON 3          F(α) ↔ ¬◊v(α).

Now, as Leibniz mentioned in passing, all that is unconditionally necessary will also be necessary for the virtuous man:

NEC 5             ☐(α) → ☐v(α).

Hence (as was shown in more detail in Lenzen (2005)), Leibniz’s derivation of the fundamental laws for the deontic operators from corresponding laws of the alethic modal operators proceeds in much the same way as the modern reduction of deontic logic to alethic modal logic "rediscovered" almost 300 years after Leibniz by Anderson (1958).
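This reduction can be illustrated by marking a non-empty subset of worlds as those in which the vir bonus's standards are met, so that ☐v and ◊v quantify over these "good" worlds only. The following Python sketch is an illustration under that assumption; the particular sets of worlds chosen are arbitrary.

```python
from itertools import combinations

worlds = frozenset({0, 1, 2, 3})
good   = frozenset({0, 1})    # hypothetical: worlds meeting the vir bonus's standards
props  = [frozenset(c) for r in range(len(worlds) + 1)
          for c in combinations(worlds, r)]

neg = lambda a: worlds - a
box = lambda a: worlds <= a        # ☐α: true in every world
O   = lambda a: good <= a          # DEON 1: obligatory = necessary for the vir bonus
E   = lambda a: bool(good & a)     # DEON 2: permitted = possible for the vir bonus
F   = lambda a: not (good & a)     # DEON 3: forbidden

for a in props:
    if box(a):
        assert O(a)                # NEC 5: the unconditionally necessary is obligatory
    if O(a):
        assert E(a)                # obligatory implies permitted (good is non-empty)
    assert F(a) == (not E(a))      # forbidden = not permitted
    assert O(a) == (not E(neg(a))) # obligatory = negation not permitted
```

The check that the obligatory is permitted depends only on the set of good worlds being non-empty, which mirrors the assumption that the standards of the vir bonus are jointly satisfiable.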

6. References and Further Reading

a. Abbreviations for Leibniz’s works

  • AE       German Academy of Science (ed.), G. W. Leibniz, Sämtliche Schriften und Briefe, Series VI, „Philosophische Schriften“, Darmstadt 1930, Berlin 1962 ff.
  • Cout   Louis Couturat (ed.), Opuscules et fragments inédits de Leibniz, Paris (Presses universitaires de France) 1903, reprint Hildesheim (Olms) 1961.
  • GI      Generales Inquisitiones de Analysi Notionum et Veritatum; first edited in Cout, 356-399; text-critical edition in AE VI, 4, 739-788; English translation in LLP, 47-87.
  • GP     C. I. Gerhardt (ed.), Die philosophischen Schriften von G. W. Leibniz, seven volumes Berlin/Halle 1875-90, reprint Hildesheim (Olms) 1965.
  • Grua   Gaston Grua (ed.), G. W. Leibniz – Textes Inédits, two Volumes, Paris (Presses Universitaires de France) 1948.
  • LH       Eduard Bodemann (ed.), Die Leibniz-Handschriften der Königlichen Öffentlichen Bibliothek zu Hannover, Hannover 1895, reprint Hildesheim (Olms) 1966.
  • LLP   G. H. R. Parkinson (ed.), Leibniz Logical Papers – A Selection, Oxford (Clarendon Press), 1966.
  • NE      Nouveaux Essais sur l’entendement humain – Par l’Auteur du Système de l’Harmonie Preestablie, in GP 5, 41-509.
  • Theo  Essais de Theodicée sur la Bonté de Dieu, la Liberté de l’Homme et l’Origine du Mal, in GP 6, 21-436.

b. Secondary Literature

  • Anderson, Alan Ross (1958): “A Reduction of Deontic Logic to Alethic Modal Logic”, in Mind LXVII, 100-103.
  • Arnauld, Antoine & Nicole, Pierre (1683) : La Logique ou L’Art de Penser, 5th edition, reprint 1965 Paris (Presses universitaires de France).
  • Burkhardt, Hans (1980): Logik und Semiotik in der Philosophie von Leibniz, München (Philosophia Verlag).
  • Couturat, Louis (1901): La Logique de Leibniz d’après des documents inédits, Paris (Félix Alcan).
  • Dürr, Karl (1930): Neue Beleuchtung einer Theorie von Leibniz - Grundzüge des Logikkalküls, Darmstadt.
  • Euler, Leonhard (1768): Lettres à une princesse d'Allemagne sur quelques sujets de physique et de philosophie, St Petersburg, 1768–1772.
  • Hamilton, William (1861): Lectures on Metaphysics and Logic, ed. by H.L. Mansel & J. Veitch, Edinburgh, London (William Blackwood); reprint Stuttgart Bad Cannstadt 1969.
  • Ishiguro, Hidé (1972): Leibniz’s Philosophy of Logic and Language, London (Duckworth).
  • Kalinowski, George (1974): “Un logicien déontique avant la lettre: Gottfried Wilhelm Leibniz”, in Archiv für Rechts- und Sozialphilosophie 60, 79-98.
  • Kauppi, Raili (1960): Über die Leibnizsche Logik mit besonderer Berücksichtigung des Problems der Intension und der Extension, Helsinki (Acta Philosophica Fennica).
  • Kneale, William and Martha (1962): The Development of Logic, Oxford (Clarendon).
  • Lenzen, Wolfgang (1984): “Leibniz und die Boolesche Algebra”, in Studia Leibnitiana 16, 187-203.
  • Lenzen, Wolfgang (1986): “‘Non est’ non est ‘est non’ – Zu Leibnizens Theorie der Negation”, in Studia Leibnitiana 18, 1-37.
  • Lenzen, Wolfgang (1989): “Arithmetical vs. 'Real' Addition – A Case Study of the Relation between Logic, Mathematics, and Metaphysics in Leibniz”, in N. Rescher (ed.), Leibnizian Inquiries – A Group of Essays, Lanham, 149-157.
  • Lenzen, Wolfgang (1990): Das System der Leibnizschen Logik, Berlin (de Gruyter).
  • Lenzen, Wolfgang (2004a): Calculus Universalis – Studien zur Logik von G. W. Leibniz, Paderborn (mentis).
  • Lenzen, Wolfgang (2004b): “Leibniz’s Logic”, in D. Gabbay & J. Woods (eds.) The Rise of Modern Logic – From Leibniz to Frege (Handbook of the History of Logic, vol. 3), Amsterdam (Elsevier), 1-83.
  • Lenzen, Wolfgang (2004c): “Logical Criteria for Individual(concept)s”, in M. Carrara, A. M. Nunziante & G. Tomasi (eds.), Individuals, Minds, and Bodies: Themes from Leibniz, Stuttgart (Steiner), 87-107.
  • Lenzen, Wolfgang (2005): “Leibniz on Alethic and Deontic Modal Logic”. In D. Berlioz & F. Nef (eds.), Leibniz et les Puissances du Langage, Paris (Vrin), 2005, 341-362.
  • Lewis, Clarence I. & Langford, Cooper H. (1932): Symbolic Logic, New York; 2nd edition 1959 (Dover Publications).
  • Liske, M.-Th. (1994): "Ist eine reine Inhaltslogik möglich? Zu Leibniz’ Begriffstheorie", in Studia Leibnitiana XXVI, 31-55.
  • Lukasiewicz, Jan (1951): Aristotle’s Syllogistic – From the Standpoint of Modern Formal Logic, Oxford (Clarendon Press).
  • Mugnai, Massimo (1992): Leibniz’s Theory of Relations, Stuttgart (Steiner).
  • Poser, Hans (1969): Zur Theorie der Modalbegriffe bei G. W. Leibniz, Wiesbaden (Steiner).
  • Rescher, Nicholas (1954): “Leibniz’s interpretation of his logical calculus”, in Journal of Symbolic Logic 19, 1-13.
  • Rescher, Nicholas (1979): Leibniz – An Introduction to his Philosophy, London (Billing & Sons).
  • Sanchez-Mazas, Miguel (1979): “Simplification de l’arithmétisation leibnitienne de la syllogistique par l’expression arithmétique de la notion intensionelle du 'non ens'”, in Studia Leibnitiana Sonderheft 8, 46-58.
  • Schepers, Heinrich (1972): “Leibniz‘ Disputationen ‚De Conditionibus‘: Ansätze zu einer juristischen Aussagenlogik”, in Studia Leibnitiana Supplementa XV, 1-17.
  • Schupp, Franz (ed.) (1982): G. W. Leibniz, Allgemeine Untersuchungen über die Analyse der Begriffe und Wahrheiten, Hamburg (Meiner).
  • Schupp, Franz (ed.) (2000): G. W. Leibniz, Die Grundlagen des logischen Kalküls, Hamburg (Meiner).
  • Swoyer, Chris (1995): “Leibniz on Intension and Extension”, in Noûs 29, 96-114.
  • Sotirov, Vladimir (1999): “Arithmetizations of Syllogistic à la Leibniz”, in Journal of Applied Non-Classical Logics 9, 387-405.
  • Venn, John (1881): Symbolic Logic, London (MacMillan).

 

Author Information

Wolfgang Lenzen
Email: lenzen@uos.de
University of Osnabrück
Germany


The Infinite

Working with the infinite is tricky business. Zeno’s paradoxes first alerted philosophers to this around 450 B.C.E., when Zeno argued that a fast runner such as Achilles has an infinite number of places to reach during the pursuit of a slower runner. Since then, there has been a struggle to understand how to use the notion of infinity in a coherent manner. This article concerns the significant and controversial role that the concepts of infinity and the infinite play in the disciplines of philosophy, physical science, and mathematics.

Philosophers want to know whether there is more than one coherent concept of infinity; which entities and properties are infinitely large, infinitely small, infinitely divisible, and infinitely numerous; and what arguments can justify answers one way or the other.

Here are four suggested examples of these different ways to be infinite. The density of matter at the center of a black hole is infinitely large. An electron is infinitely small. An hour is infinitely divisible. The integers are infinitely numerous. These four claims are ordered from most to least controversial, although all four have been challenged in the philosophical literature.

This article also explores a variety of other questions about the infinite. Is the infinite something indefinite and incomplete, or is it complete and definite? What does Thomas Aquinas mean when he says God is infinitely powerful? Was Gauss, who was one of the greatest mathematicians of all time, correct when he made the controversial remark that scientific theories involve infinities merely as idealizations and merely in order to make for easy applications of those theories, when in fact all physically real entities are finite? How did the invention of set theory change the meaning of the term “infinite”? What did Cantor mean when he said some infinities are smaller than others? Quine said the first three sizes of Cantor’s infinities are the only ones we have reason to believe in. Mathematical Platonists disagree with Quine. Who is correct? We shall see that there are deep connections among all these questions.

Table of Contents

  1. What “Infinity” Means
    1. Actual, Potential, and Transcendental Infinity
    2. The Rise of the Technical Terms
  2. Infinity and the Mind
  3. Infinity in Metaphysics
  4. Infinity in Physical Science
    1. Infinitely Small and Infinitely Divisible
    2. Singularities
    3. Idealization and Approximation
    4. Infinity in Cosmology
  5. Infinity in Mathematics
    1. Infinite Sums
    2. Infinitesimals and Hyperreals
    3. Mathematical Existence
    4. Zermelo-Fraenkel Set Theory
    5. The Axiom of Choice and the Continuum Hypothesis
  6. Infinity in Deductive Logic
    1. Finite and Infinite Axiomatizability
    2. Infinitely Long Formulas
    3. Infinitely Long Proofs
    4. Infinitely Many Truth Values
    5. Infinite Models
    6. Infinity and Truth
  7. Conclusion
  8. References and Further Reading

1. What “Infinity” Means

The term “the infinite” refers to whatever it is that the word “infinity” correctly applies to. For example, the infinite integers exist just in case there is an infinity of integers. We also speak of infinite quantities, but what does it mean to say a quantity is infinite? In 1851, Bernard Bolzano argued in The Paradoxes of the Infinite that, if a quantity is to be infinite, then the measure of that quantity also must be infinite. Bolzano’s point is that we need a clear concept of infinite number in order to have a clear concept of infinite quantity. This idea of Bolzano’s has led to a new way of speaking about infinity, as we shall see.

The term “infinite” can be used for many purposes. The logician Alfred Tarski used it for dramatic purposes when he spoke about trying to contact his wife in Nazi-occupied Poland in the early 1940s. He complained, “We have been sending each other an infinite number of letters. They all disappear somewhere on the way. As far as I know, my wife has received only one letter.” (Feferman 2004, p. 137) Although the meaning of a term is intimately tied to its use, we can tell only a very little about the meaning of the term from Tarski’s use of it to exaggerate for dramatic effect.

Looking back over the last 2,500 years of use of the term “infinite,” three distinct senses stand out: actually infinite, potentially infinite, and transcendentally infinite. These will be discussed in more detail below, but briefly the concept of potential infinity treats infinity as an unbounded or non-terminating process developing over time. By contrast, the concept of actual infinity treats the infinite as timeless and complete. Transcendental infinity is the least precise of the three concepts and is more commonly used in discussions of metaphysics and theology to suggest transcendence of human understanding or human capability. To give some examples, the set of integers is actually infinite, and so is the number of locations (points of space) between London and Moscow. The maximum length of grammatical sentences in English is potentially infinite, and so is the total amount of memory in a Turing machine, an ideal computer. An omnipotent being’s power is transcendentally infinite.

For purposes of doing mathematics and science, the actual infinite has turned out to be the most useful of the three concepts. Using the idea proposed by Bolzano that was mentioned above, the concept of the actual infinite was precisely defined in 1888 when Richard Dedekind redefined the term “infinity” for use in set theory and Georg Cantor made the infinite, in this sense, an object of mathematical study. Before this turning point, the philosophical community would have said that Aristotle’s concept of potential infinity should be the concept used in mathematics and science.

a. Actual, Potential, and Transcendental Infinity

The Ancient Greeks generally conceived of the infinite as formless, characterless, indefinite, indeterminate, chaotic, and unintelligible. The term had negative connotations and was especially vague, having no clear criteria for distinguishing the finite from the infinite. In his treatment of Zeno’s paradoxes about infinite divisibility, Aristotle (384-322 B.C.E.) made a positive step toward clarification by distinguishing two different concepts of infinity, potential infinity and actual infinity. The latter is also called complete infinity and completed infinity. The actual infinite is not a process in time; it is an infinity that exists wholly at one time. By contrast, Aristotle spoke of the potentially infinite as a never-ending process over time. The word “potential” is being used in a technical sense. A potential swimmer can learn to become an actual swimmer, but a potential infinity cannot become an actual infinity. Aristotle argued that all the problems involving reasoning with infinity are really problems of improperly applying the incoherent concept of actual infinity instead of the coherent concept of potential infinity. (See Aristotle’s Physics, Book III, for his account of infinity.)

For its day, this was a successful way of treating Zeno’s Achilles paradox since, if Zeno had confined himself to using only potential infinity, he would not have been able to develop his paradoxical argument. Here is why. Zeno said that to go from the start to the finish line, the runner Achilles must reach the place that is halfway-there, then after arriving at this place he still must reach the place that is half of that remaining distance, and after arriving there he again must reach the new place that is now halfway to the goal, and so on. These are too many places to reach because there is no end to these places: for any one there is another. Zeno made the mistake, according to Aristotle, of supposing that this infinite process needs completing when it really doesn’t; the finitely long path from start to finish exists undivided for the runner, and it is the mathematician who is demanding the completion of such a process. Without that concept of a completed infinite process there is no paradox.

Although today’s standard treatment of the Achilles paradox disagrees with Aristotle and says Zeno was correct to use the concept of a completed infinity and to imply the runner must go to an actual infinity of places in a finite time, Aristotle had so many other intellectual successes that his ideas about infinity dominated the Western world for the next two thousand years.

Even though Aristotle promoted the belief that “the idea of the actual infinite — of that whose infinitude presents itself all at once — was close to a contradiction in terms…,” (Moore 2001, 40) during those two thousand years others did not treat it as a contradiction in terms. Archimedes, Duns Scotus, William of Ockham, Gregory of Rimini, and Leibniz made use of it. Archimedes used it, but had doubts about its legitimacy. Leibniz used it but had doubts about whether it was needed.

Here is an example of how Gregory of Rimini argued in the fourteenth century for the coherence of the concept of actual infinity:

If God can endlessly add a cubic foot to a stone–which He can–then He can create an infinitely big stone. For He need only add one cubic foot at some time, another half an hour later, another a quarter of an hour later than that, and so on ad infinitum. He would then have before Him an infinite stone at the end of the hour. (Moore 2001, 53)

Leibniz envisioned the world as being an actual infinity of mind-like monads, and in (Leibniz 1702) he freely used the concept of being infinitesimally small in his development of the calculus in mathematics.

The term “infinity” that is used in contemporary mathematics and science is based on a technical development of this earlier, informal concept of actual infinity. This technical concept was not created until late in the 19th century.

b. The Rise of the Technical Terms

In the centuries after the decline of ancient Greece, the word “infinite” slowly changed its meaning in Medieval Europe. Theologians promoted the idea that God is infinite because He is limitless, and this at least caused the word “infinity” to lose its negative connotations. By the end of the Medieval Period, the word had come to mean endless, unlimited, and immeasurable, but not necessarily chaotic. The question of its intelligibility and conceivability by humans was disputed.

Actual infinity is very different. There are actual infinities in the technical, post-1880s sense, which are neither endless, unlimited, nor immeasurable. A line segment one meter long is a good example. It is not endless because it is finitely long, and it is not a process because it is timeless. It is not unlimited because it is limited by both zero and one. It is not immeasurable because its length measure is one meter. Nevertheless, the one meter line is infinite in the technical sense because it has an actual infinity of sub-segments, and it has an actual infinity of distinct points. So, there definitely has been a conceptual revolution.

This can be very shocking to those people who are first introduced to the technical term “actual infinity.” It seems not to be the kind of infinity they are thinking about. The crux of the problem is that these people really are using a different concept of infinity. The sense of infinity in ordinary discourse these days is either the Aristotelian one of potential infinity or the medieval one that requires infinity to be endless, immeasurable, and perhaps to have connotations of perfection, inconceivability, and paradox. This article uses the name transcendental infinity for the medieval concept although there is no generally accepted name for the concept. A transcendental infinity transcends human limits and detailed knowledge and might be incapable of being described by a precise theory. It might also be a cluster of concepts rather than a single one.

Those people who are surprised when first introduced to the technical term “actual infinity” are probably thinking of either potential infinity or transcendental infinity, and that is why, in any discussion of infinity, some philosophers will say that an appeal to the technical term “actual infinity” is changing the subject. Another reason why there is opposition to actual infinities is that they have so many counter-intuitive properties. For example, consider a continuous line that has an actual infinity of points. A single point on this line has no next point! Also, a one-dimensional continuous curve can fill a two-dimensional area. Equally counterintuitive is the fact that some actually infinite numbers are smaller than other actually infinite numbers. Looked at more optimistically, though, most other philosophers will say the rise of this technical term is yet another example of how the discovery of a new concept has propelled civilization forward.

Resistance to the claim that there are actual infinities has had two other sources. One is the belief that actual infinities cannot be experienced. The second is the belief that use of the concept of actual infinity leads to paradoxes, such as Zeno’s. Because the standard solution to Zeno’s Paradoxes makes use of calculus, the birth of the new technical definition of actual infinity is intimately tied to the development of calculus and thus to properly defining the mathematician’s real line, the linear continuum. Briefly, the reason is that science needs calculus; calculus needs the continuum; the continuum needs a very careful definition; and the best definition requires there to be actual infinities (not merely potential infinities) in the micro-structure and the overall macro-structure of the continuum.

Defining the continuum involves defining real numbers because the linear continuum is the intended model of the theory of real numbers just as the plane is the intended model for the theory of ordinary two-dimensional geometry. It was eventually realized by mathematicians that giving a careful definition to the continuum and to real numbers requires formulating their definitions within set theory. As part of that formulation, mathematicians found a good way to define a rational number in the language of set theory; then they defined a real number to be a certain pair of actually infinite sets of rational numbers. The continuum’s eventual definition required it to be an actually infinite collection whose elements are themselves infinite sets each of whose elements in turn is an infinite sequence. The details are too complex to be presented here, but the curious reader can check any textbook in classical real analysis. The intuitive picture is that any interval or segment of the continuum is a continuum, and any continuum is a very special infinite set of points that are packed so closely together that there are no gaps. A continuum is perfectly smooth. This smoothness is reflected in there being a great many real numbers between any two real numbers.
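The set-theoretic idea can be made concrete in a small sketch. Here Python stands in for set theory, under an assumed simplification: instead of a literal pair of infinite sets of rationals, a real number is represented by the membership test of its lower Dedekind cut (the set of all rationals below it). The function name is illustrative, not standard.

```python
# A sketch of the set-theoretic construction of a real number. Assumed
# simplification: a real is represented by the membership test of its
# lower Dedekind cut, the set of all rationals below it, rather than by
# a literal infinite set. Names here are illustrative.
from fractions import Fraction

def sqrt2_cut(q):
    """Is the rational q a member of the lower cut for the square root of 2?"""
    return q < 0 or q * q < 2

# The cut cleanly separates the rationals, even though no rational squares to 2:
assert sqrt2_cut(Fraction(7, 5))        # 7/5 = 1.4 lies below the square root of 2
assert not sqrt2_cut(Fraction(3, 2))    # 3/2 = 1.5 lies above it
```

Each query inspects only one rational, yet the cut itself settles infinitely many membership questions at once, which is the sense in which the definition presupposes an actual rather than merely potential infinity.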

Calculus is the area of mathematics that is more applicable to science than any other area. It can be thought of as a technique for treating a continuous change as being composed of an infinite number of infinitesimal changes. When calculus is applied to physical properties capable of change such as spatial location, ocean salinity or an electrical circuit’s voltage, these properties are represented with continuous variables that have real numbers for their values. These values are specific real numbers, not ranges of real numbers and not just rational numbers. Achilles’ location along the path to his goal is such a property.

It took many centuries to rigorously develop the calculus. A very significant step in this direction occurred in 1888 when Richard Dedekind re-defined the term “infinity” and when Georg Cantor used that definition to create the first set theory, a theory that eventually was developed to the point where it could be used for embedding all classical mathematical theories. See the example in the Zeno's Paradoxes article of how Dedekind used set theory and his new idea of "cuts" to define the real numbers in terms of infinite sets of rational numbers. In this way additional rigor was given to the concepts of mathematics, and it encouraged more mathematicians to accept the notion of actually infinite sets. What this embedding requires is first defining the terms of any mathematical theory in the language of set theory, then translating the axioms and theorems of the mathematical theory into sentences of set theory, and then showing that these theorems follow logically from the axioms. (The axioms of any theory, such as set theory, are the special sentences of the theory that can always be assumed during the process of deducing the other theorems of the theory.)

The new technical treatment of infinity that originated with Dedekind in 1888 and was adopted by Cantor in his new set theory provided a definition of “infinite set” rather than simply “infinite.” Dedekind says an infinite set is a set that is not finite. The notion of a finite set can be defined in various ways. We might define it numerically as a set having n members, where n is some non-negative integer. Dedekind found an essentially equivalent definition of finite set (assuming the axiom of choice, which will be discussed later), but his definition does not require mentioning numbers:

A (Dedekind) finite set is a set for which there exists no one-to-one correspondence between it and one of its proper subsets.

By placing the finger-tips of your left hand on the corresponding finger-tips of your right hand, you establish a one-to-one correspondence between the fingers of one hand and the fingers of the other; in that way you establish that there are the same number of fingers on each of your hands, without needing to count the fingers. More generally, there is a one-to-one correspondence between two sets when each member of one set can be paired off with a unique member of the other set, so that neither set has an unpaired member.

Here is a one-to-one correspondence between the natural numbers and the even, positive numbers:

1, 2, 3, 4, ...

↕  ↕  ↕  ↕

2, 4, 6, 8, ...

Informally expressed, any infinite set can be matched up to a part of itself; so the whole is equivalent to a part. This is a surprising definition because, before this definition was adopted, the idea that actually infinite wholes are equinumerous with some of their parts was taken as clear evidence that the concept of actual infinity is inherently paradoxical. For a systematic presentation of the many alternative ways to successfully define “infinite set” non-numerically, see (Tarski 1924).
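The correspondence above can be sketched in a few lines of code (the function name is illustrative): the pairing n ↦ 2n matches every natural number with a distinct even number, and the evens are a proper subset of the naturals. No finite set admits such a pairing with a proper part of itself, which is exactly Dedekind’s criterion.

```python
# A sketch of the one-to-one correspondence behind Dedekind's definition:
# the pairing n -> 2n matches the naturals with the positive evens, a
# proper subset of the naturals. (The name "partner" is illustrative.)

def partner(n):
    """Pair the natural number n with the even number 2n."""
    return 2 * n

naturals = list(range(1, 9))             # an initial segment, for checking
evens = [partner(n) for n in naturals]

assert evens == [2, 4, 6, 8, 10, 12, 14, 16]
assert len(set(evens)) == len(naturals)         # distinct partners: one-to-one
assert set(evens) < set(range(1, 17))           # the evens are a proper subset
```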

Dedekind’s new definition of “infinite” defines an actually infinite set, not a potentially infinite set, because Dedekind appealed to no continuing operation over time. The concept of a potentially infinite set is then given a new technical definition: a potentially infinite set is a growing, finite subset of an actually infinite set. Cantor expressed the point this way:

In order for there to be a variable quantity in some mathematical study, the “domain” of its variability must strictly speaking be known beforehand through a definition. However, this domain cannot itself be something variable…. Thus this “domain” is a definite, actually infinite set of values. Thus each potential infinite…presupposes an actual infinite. (Cantor 1887)

The new idea is that the potentially infinite set presupposes an actually infinite one. If this is correct, then Aristotle’s two notions of the potential infinite and actual infinite have been redefined and clarified.

Two sets are the same if any member of one is a member of the other, and vice versa. Order of the members is irrelevant to the identity of the set, and to the size of the set. Two sets are the same size if there exists a one-to-one correspondence between them. This definition of same size was recommended by both Cantor and Frege. Cantor defined “finite” by saying a set is finite if it is in one-to-one correspondence with the set {1, 2, 3, …, n} for some positive integer n; and he said a set is infinite if it is not finite.

Cardinal numbers are measures of the sizes of sets. There are many definitions of what a cardinal number is, but what is essential for cardinal numbers is that two sets have the same cardinal just in case there is a one-to-one correspondence between them; and set A has a smaller cardinal number than a set B (and so set A has fewer members than B) provided there is a one-to-one correspondence between A and a subset of B, but B is not the same size as A. In this sense, the set of even integers does not have fewer members than the set of all integers, although intuitively you might think it does.

How big is infinity? This question does not make sense for either potential infinity or transcendental infinity, but it does for actual infinity. Finite cardinal numbers such as 0, 1, 2, and 3 are measures of the sizes of finite sets, and transfinite cardinal numbers are measures of the sizes of actually infinite sets. The transfinite cardinals are aleph-null, aleph-one, aleph-two, and so on, which we represent with the numerals ℵ0, ℵ1, ℵ2, .... The smallest infinite size is ℵ0 which is the size of the set of natural numbers, and it is called a countable infinity; the other alephs are measures of the uncountable infinities. However, these are somewhat misleading terms since no process of counting is involved. Nobody would have the time to count from 0 to any aleph.

The set of even integers, the set of natural numbers and the set of rational numbers all can be shown to have the same size, but surprisingly they all are smaller than the set of real numbers. Any set of size ℵ0 is said to be countably infinite (or denumerably infinite or enumerably infinite). The set of points in the continuum and in any interval of the continuum turns out to be larger than ℵ0, although how much larger is still an open problem, called the continuum problem. A popular but controversial suggestion is that a continuum is of size ℵ1, the next larger size.
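That the reals outrun every countable list is established by Cantor’s diagonal argument, which the following sketch mimics (the representation and names are illustrative): an infinite binary sequence is modeled as a function from positions to digits, and the diagonal construction produces a sequence that disagrees with the k-th listed sequence at position k, so it can appear nowhere in the list.

```python
# A sketch of Cantor's diagonal argument. An infinite binary sequence is
# modeled as a function from a position n to a digit 0 or 1. Given any
# enumeration of such sequences, the diagonal sequence flips the k-th
# digit of the k-th sequence, so it differs from every listed sequence.

def diagonal(enumeration):
    """Return a binary sequence absent from the given enumeration."""
    return lambda n: 1 - enumeration(n)(n)

# A sample enumeration (illustrative): the k-th sequence is constantly k mod 2.
sample = lambda k: (lambda n: k % 2)

d = diagonal(sample)
# The diagonal sequence disagrees with sequence k at position k, for every k checked:
assert all(d(k) != sample(k)(k) for k in range(1000))
```

The check can only inspect finitely many positions, but the construction itself guarantees the disagreement at every position k, whatever enumeration is supplied.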

When creating set theory, mathematicians did not begin with the belief that there would be so many points between any two points in the continuum nor with the belief that for any infinite cardinal there is a larger cardinal. These were surprising consequences discovered by Cantor. To many philosophers, this surprise is evidence that what is going on is not invention but rather is discovery about a mind-independent reality.

The intellectual community has always been wary of actually infinite sets. Before the discovery of how to embed calculus within set theory (a process that is also called giving calculus a basis in set theory), it could have been more easily argued that science does not need actual infinities. The burden of proof has now shifted, and the default position is that actual infinities are indispensable in mathematics and science; anyone who wants to do without them must show that removing them does not do too much damage and has additional benefits. There are no known successful attempts to reconstruct the theories of mathematical physics without basing them on mathematical objects such as numbers and sets, but for one attempt to do so using second-order logic, see (Field 1980).

Here is why some mathematicians believe the set-theoretic basis is so important:

Just as chemistry was unified and simplified when it was realized that every chemical compound is made of atoms, mathematics was dramatically unified when it was realized that every object of mathematics can be taken to be the same kind of thing. There are now other ways than set theory to unify mathematics, but before set theory there was no such unifying concept. Indeed, in the Renaissance, mathematicians hesitated to add x² to x³, since the one was an area and the other a volume. Since the advent of set theory, one can correctly say that all mathematicians are exploring the same mental universe. (Rucker 1982, p. 64)

But the significance of this basis can be exaggerated. The existence of the basis does not imply that mathematics is set theory.

However, paradoxes soon were revealed within set theory, by Cantor himself and then others, so the quest for a more rigorous definition of the mathematical continuum continued. Cantor’s own paradox surfaced in 1895 when he asked whether the set of all cardinal numbers has a cardinal number. Cantor showed that, if it does, then it doesn’t. Surely the set of all sets would have the greatest cardinal number, but Cantor showed that for any cardinal number there is a greater cardinal number.  [For more details about this and the other paradoxes, see (Suppes 1960).] The most famous paradox of set theory is Russell’s Paradox of 1901. He showed that the set of all sets that are not members of themselves is both a member of itself and not a member of itself. Russell wrote that the paradox “put an end to the logical honeymoon that I had been enjoying.”
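Russell’s construction can even be mimicked in code, as an illustrative sketch in which sets are modeled as membership predicates: define R to hold of x exactly when x does not hold of itself, then ask whether R holds of R. Classically the question has no consistent answer; in executable form the self-application simply never terminates.

```python
# An illustrative sketch of Russell's paradox, with "sets" modeled as
# membership predicates. R(x) says "x is not a member of itself"; asking
# whether R is a member of itself has no consistent answer, and the
# executable version recurses forever instead of returning a truth value.

R = lambda x: not x(x)

try:
    R(R)                      # "Is R a member of itself?"
    verdict = "answered"      # unreachable: no truth value is ever produced
except RecursionError:
    verdict = "no consistent answer"

assert verdict == "no consistent answer"
```

The non-termination is Python’s stand-in for the contradiction: any consistent answer to R(R) would have to equal its own negation.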

These and other paradoxes were eventually resolved satisfactorily by finding revised axioms of set theory. The revised axioms permit the existence of enough well-behaved sets that set theory is not crippled [that is, made incapable of providing a basis for mathematical theories], yet they do not permit the existence of too many sets, the ill-behaved sets such as Cantor’s set of all cardinals and Russell’s set of all sets that are not members of themselves. By the mid-20th century, it had become clear that, despite the existence of competing set theories, Zermelo-Fraenkel set theory (ZF) was the best, or at least the least radical, way to revise set theory so as to avoid all the known paradoxes and problems while preserving enough of our intuitive ideas about sets that it deserved to be called a set theory. By that time most mathematicians agreed that the continuum had been given a proper basis in ZF. See (Kleene 1967, pp. 189-191) for comments on this agreement about ZF’s success, for a list of the ZF axioms, and for a detailed explanation of why each axiom deserves to be an axiom.

Because of this success, and because it was clear enough that the concept of infinity used in ZF does not lead to contradictions, and because it seemed so evident how to use the concept in other areas of mathematics and science where the term “infinity” was being used, the definition of the concept of "infinite set" within ZF was claimed by many philosophers to be the paradigm example of how to provide a precise and fruitful definition of a philosophically significant concept. Much less attention was then paid to critics who had complained that we can never use the word “infinity” coherently because infinity is ineffable or inherently paradoxical.

Nevertheless there was, and still is, serious philosophical opposition to actually infinite sets and to ZF's treatment of the continuum, and this has spawned the programs of constructivism, intuitionism, finitism and ultrafinitism, all of whose advocates have philosophical objections to actual infinities. Even though there is much to be said in favor of replacing a murky concept with a clearer, technical concept, there is always the worry that the replacement is a change of subject that hasn’t really solved the problems it was designed for. This discussion of the role of infinity in mathematics and science continues in later sections of this article.

2. Infinity and the Mind

Can humans grasp the concept of the infinite? This seems to be a profound question. Ever since Zeno, intellectuals have realized that careless reasoning about infinity can lead to paradox and perhaps “defeat” the human mind. Some critics of infinity argue that paradox is essential to, or inherent in, the use of the concept of infinity, so the infinite is beyond the grasp of the human mind. However, this criticism applies more properly to some forms of transcendental infinity rather than to either actual infinity or potential infinity.

A second reason to believe humans cannot grasp infinity is that the concept must contain an infinite number of parts or sub-ideas. A counter to this reason is to defend the psychological claim that if a person succeeds in thinking about infinity, it does not follow that the person needs to have an actually infinite number of ideas in mind at one time.

A third reason to believe the concept of infinity is beyond human understanding is that to have the concept one must have some accurate mental picture of infinity. Thomas Hobbes, who believed that all thinking is based on imagination, might remark that nobody could picture an infinite number of grains of sand at once. However, most contemporary philosophers of psychology believe mental pictures are not essential to having any concept. Regarding the concept of dog, you might have a picture of a brown dog in your mind and I might have a picture of a black dog in mine, but I can still understand you perfectly well when you say dogs frequently chase cats.

The main issue here is whether we can coherently think about infinity to the extent of being said to have the concept. Here is a simple argument that we can: If we understand negation and have the concept of finite, then the concept of infinite is merely the concept of not-finite. A second argument says the apparent consistency of set theory indicates that infinity in the technical sense of actual infinity is well within our grasp. And since potential infinity is definable in terms of actual infinity, it, too, is within our grasp.

Assuming that infinity is within our grasp, what is it that we are grasping? Philosophers disagree on the answer. In 1883, Cantor said

A set is a Many which allows itself to be thought of as a One.

Notice the dependence on thought. Cantor eventually clarified what he meant and was clear that he did not want set existence to depend on mental capability. What he really believed is that a set is a collection of well-defined and distinct objects that exists independently of being thought of, but that could be thought of by a powerful enough mind.

3. Infinity in Metaphysics

There is a concept which corrupts and upsets all others. I refer not to Evil, whose limited realm is that of ethics; I refer to the infinite. —Jorge Luis Borges.

Shakespeare declared, “The will is infinite.” Is he correct or just exaggerating? Critics who interpret Shakespeare literally might argue that the will is basically a product of different brain states. Because a person’s brain contains approximately 10²⁷ atoms, these have only a finite number of configurations or states, and so, regardless of whether we interpret Shakespeare’s remark as implying that the will is unbounded (is potentially infinite) or that the will produces an infinite number of brain states (is actually infinite), the will is not infinite. But perhaps Shakespeare was speaking metaphorically and did not intend to be taken literally, or perhaps he meant to use some version of transcendental infinity that places the will somehow beyond human comprehension.

Contemporary Continental philosophers often speak that way. Emmanuel Levinas says the infinite is another name for the Other, for the existence of other conscious beings besides ourselves whom we are ethically responsible for. We “face the infinite” in the sense of facing a practically incomprehensible and unlimited number of possibilities upon encountering another conscious being. (See Levinas 1961.) If we ask what sense of “infinite” is being used by Levinas, it may be yet another concept of infinity, or it may be some kind of transcendental infinity. Another interpretation is that he is exaggerating about the number of possibilities and should say instead that there are too many possibilities to be faced when we encounter another conscious being and that the possibilities are not readily predictable because other conscious beings make free choices, the causes of which often are not known even to the person making the choice.

Leibniz was one of the few persons in earlier centuries who believed in actually infinite sets, but he did not believe in infinite numbers. Cantor did. Referring to his own discovery of the transfinite cardinals ℵ0, ℵ1, ℵ2, .... and their properties, Cantor claimed his work was revealing God’s existence and that these mathematical objects were in the mind of God. He claimed God gave humans the concept of the infinite so that they could reflect on His perfection. Influential German neo-Thomists such as Constantin Gutberlet agreed with Cantor. Some Jesuit math instructors claim that by taking a calculus course and understanding infinity, students are getting closer to God. Their critics complain that these mystical ideas about infinity and God are too speculative.

When metaphysicians speak of infinity they use all three concepts: potential infinity, actual infinity, and transcendental infinity. But when they speak about God being infinite, they are usually interested in implying that God is beyond human understanding or that there is a lack of a limit on particular properties of God, such as God's goodness and knowledge and power.

The connection between infinity and God exists in nearly all of the world’s religions. It is prominent in Hindu, Muslim, Jewish, and Christian literature. For example, in chapter 11 of the Bhagavad Gita of Hindu scripture, Krishna says, “O Lord of the universe, I see You everywhere with infinite form....”

Plato did not envision God (the Demiurge) as infinite because he viewed God as perfect, and he believed anything perfect must be limited and thus not infinite, since the infinite was defined as an unlimited, unbounded, indefinite, unintelligible chaos.

But the meaning of the term “infinite” slowly began to change. Over six hundred years later, the Neo-Platonist philosopher Plotinus was one of the first important Greek philosophers to equate God with the infinite, although he did not do so explicitly. He said instead that any idea abstracted from our finite experience is not applicable to God. He probably believed that if God were finite in some aspect, then there could be something beyond God and therefore God wouldn’t be “the One.” Plotinus was influential in helping remove the negative connotations that had accompanied the concept of the infinite. One difficulty here, though, is that it is unclear whether metaphysicians have discovered that God is identical with the transcendentally infinite or whether they are simply defining “God” to be that way. A more severe criticism is that perhaps they are just defining “infinite” (in the transcendental sense) as whatever God is.

Augustine, who merged Platonic philosophy with the Christian religion, spoke of God “whose understanding is infinite” for “what are we mean wretches that dare presume to limit His knowledge?” Augustine wrote that the reason God can understand the infinite is that “...every infinity is, in a way we cannot express, made finite to God....” [City of God, Book XII, ch. 18] This is an interesting perspective. Medieval philosophers debated whether God could understand infinite concepts other than Himself, not because God had limited understanding, but because there was no such thing as infinity anywhere except in God.

The medieval philosopher Thomas Aquinas, too, said God has infinite knowledge. He definitely did not mean potentially infinite knowledge. The technical definition of actual infinity might be useful here. If God is infinitely knowledgeable, this can be understood perhaps as meaning that God knows the truth values of all declarative sentences and that the set of these sentences is actually infinite.

Aquinas argued in his Summa Theologica that, although God created everything, nothing created by God can be actually infinite. His main reason was that anything created can be counted, yet if an infinity were created, then the count would be infinite, but no infinite numbers exist to do the counting (as Aristotle had also said). This argument was more persuasive in Aquinas’ day than it is now, because in the late 19th century Cantor created (or discovered) infinite numbers.

René Descartes believed God was actually infinite, and he remarked that the concept of actual infinity is so awesome that no human could have created it or deduced it from other concepts, so any idea of infinity that humans have must have come from God directly. Thus God exists. Descartes is using the concept of infinity to produce a new ontological argument for God’s existence.

David Hume, and many other philosophers, raised the problem that if God has infinite power, then there need not be evil in the world, and if God has infinite goodness, then there should not be any evil in the world. This problem is often referred to as "The Problem of Evil" and has been a long-standing point of contention for theologians.

Spinoza and Hegel envisioned God, or the Absolute, pantheistically. If they are correct, then to call God infinite is to call the world itself infinite. Hegel denigrated Aristotle’s advocacy of potential infinity and claimed the world is actually infinite. Traditional Christian, Muslim and Jewish metaphysicians do not accept the pantheistic notion that God is at one with the world. Instead they say God transcends the world. Since God is outside space and time, the space and time that he created may or may not be infinite, depending on God’s choice, but surely everything else he created is finite, they say.

The multiverse theories of cosmology in the early 21st century allow there to be an uncountable infinity of universes within a background space whose volume is actually infinite. The universe created by our Big Bang is just one of these many universes. Christian theologians balk at the notion of God choosing to create this multiverse because the theory implies that, although there are so many universes radically different from ours, there also are an actually infinite number of copies of ours, which implies there are an infinite number of Jesuses who have been crucified on the cross. The removal of the uniqueness of Jesus is apparently a removal of his dignity. Augustine had this worry when considering infinite universes, and he responded that "Christ died once for sinners...."

There are many other entities and properties that some metaphysician or other has claimed are infinite: places, possibilities, propositions, properties, particulars, partial orderings, pi’s decimal expansion, predicates, proofs, Plato’s forms, principles, power sets, probabilities, positions, and possible worlds. That is just for the letter p. Some of these are considered to be abstract objects, objects outside of space and time, and others are considered to be concrete objects, objects within, or part of, space and time.

For helpful surveys of the history of infinity in theology and metaphysics, see (Owen 1967) and (Moore 2001).

4. Infinity in Physical Science

From a metaphysical perspective, the theories of mathematical physics seem to be ontologically committed to objects and their properties. If any of those objects or properties are infinite, then physics is committed to there being infinity within the physical world.

Here are four suggested examples where infinity occurs within physical science. (1) Standard cosmology based on Einstein’s general theory of relativity implies the density of the mass at the center of a simple black hole is infinitely large (even though the black hole’s total mass is finite). (2) The Standard Model of particle physics implies the size of an electron is infinitely small. (3) General relativity implies that every path in space is infinitely divisible. (4) Classical quantum theory implies the values of kinetic energy of an accelerating, free electron are infinitely numerous. These four kinds of infinities are implied by theory and argumentation, and are not something that could be measured directly.

Objecting to taking scientific theories at face value, the 18th century British empiricists George Berkeley and David Hume denied the physical reality of even potential infinities on the empiricist grounds that such infinities are not detectable by our sense organs. Most philosophers of the 21st century would say that Berkeley’s and Hume’s empirical standards are too rigid because they are based on the mistaken assumption that our knowledge of reality must be a complex built up from simple impressions gained from our sense organs.

But in the spirit of Berkeley and Hume’s empiricism, instrumentalists also challenge any claim that science tells us the truth about physical infinities. The instrumentalists say that all theories of science are merely effective “instruments” designed for explanatory and predictive success. A scientific theory’s claims are neither true nor false. By analogy, a shovel is an effective instrument for digging, but a shovel is neither true nor false. The instrumentalist would say our theories of mathematical physics imply only that reality looks “as if” there are physical infinities. Some realists on this issue respond that to declare it to be merely a useful mathematical fiction that there are physical infinities is just as misleading as to say it is a mere fiction that moving planets actually have inertia or petunias actually contain electrons. We have no other tool than theory-building for accessing the existing features of reality that are not directly perceptible. If our best theories—those that have been well tested and are empirically successful and make novel predictions—use theoretical terms that refer to infinities, then infinities must be accepted. See (Leplin 2000) for more details about anti-realist arguments, such as those of instrumentalism and constructive empiricism.

a. Infinitely Small and Infinitely Divisible

Consider the size of electrons and quarks, the two main components of atoms. All scientific experiments so far have been consistent with electrons and quarks having no internal structure (components), as our best scientific theories imply, so the "simple conclusion" is that electrons are infinitely small, or infinitesimal, and zero-dimensional. Is this “simple conclusion” too simple? Some physicists speculate that there are no physical particles this small and that, in each subsequent century, physicists will discover that all the particles of the previous century have a finite size due to some inner structure. However, most physicists withhold judgment on this point about the future of physics.

A second reason to question whether the “simple conclusion” is too simple is that electrons, quarks, and all other elementary particles behave in a quantum mechanical way. They have a wave nature as well as a particle nature, and they have these simultaneously. When an electron’s particle nature is probed, no limit is found to how small it can be, but when probing the electron’s wave nature, the electron is found to be spread out through all of space, although it is more likely to be found in some places than in others. Also, quantum theory is about groups of objects, not a single object. The theory does not imply a definite result for a single observation but only for averages over many observations, which is why quantum theory introduces an inescapable randomness or unpredictability into claims about single objects and single experimental results. The more accurate theory of quantum electrodynamics (QED), which incorporates special relativity and improves on classical quantum theory for the smallest regions, also implies electrons are infinitesimal particles when viewed as particles, while they are wavelike or spread out when viewed as waves. When considering the electron’s particle nature, QED’s prediction of zero volume has been experimentally verified down to the limits of measurement technology. The measurement process is limited by the fact that light or other electromagnetic radiation must be used to locate the electron, and this light cannot be used to determine the position of the electron more accurately than the distance between the wave crests of the light wave used to bombard the electron. So, all this is why the “simple conclusion” mentioned at the beginning of this paragraph may be too simple. For more discussion, see the chapter “The Uncertainty Principle” in (Hawking 2001) or (Greene 1999, pp. 121-2).

If a scientific theory implies space is a continuum, with the structure of a mathematical continuum, then if that theory is taken at face value, space is infinitely divisible and composed of infinitely small entities, the so-called points of space. But should it be taken at face value? The mathematician David Hilbert declared in 1925, “A homogeneous continuum which admits of the sort of divisibility needed to realize the infinitely small is nowhere to be found in reality. The infinite divisibility of a continuum is an operation which exists only in thought.” Many physicists agree with Hilbert, but many others argue that, although Hilbert is correct that ordinary entities such as strawberries and cream are not continuous, he is ultimately incorrect, for the following reasons.

First, the Standard Model of particles and forces is one of the best tested and most successful theories in all the history of physics. So are the theories of relativity and quantum mechanics. All these theories imply or assume that, using Cantor’s technical sense of actual infinity, there are infinitely many infinitesimal instants in any non-zero duration, and there are infinitely many point places along any spatial path. So, time is a continuum, and space is a continuum.

The second challenge to Hilbert’s position is that quantum theory, in agreement with relativity theory, implies that for any possible kinetic energy of a free electron there is half that energy, insofar as an electron can be said to have a value of energy independent of being measured to have it. Although the energy of an electron bound within an atom is quantized, the energy of an unbound or free electron is not. If it accelerates in its reference frame from zero to nearly the speed of light, its energy changes and takes on all intermediate real-numbered values from its rest energy to its total energy. But mass is just a form of energy, as Einstein showed in his famous equation E = mc^2, so in this sense mass is a continuum as well as energy.

How about non-classical quantum mechanics, the proposed theories of quantum gravity that are designed to remove the disagreements between quantum mechanics and relativity theory? Do these non-classical theories quantize all these continua we’ve been talking about? One such theory, the theory of loop quantum gravity, implies space consists of discrete units called loops. But string theory, which is the more popular of the theories of quantum gravity in the early 21st century, does not imply space is discontinuous. [See (Greene 2004) for more details.] Speaking about this question of continuity, the theoretical physicist Brian Greene says that, although string theory is developed against a background of continuous spacetime, his own insight is that

[T]he increasingly intense quantum jitters that arise on decreasing scales suggest that the notion of being able to divide distances or durations into ever smaller units likely comes to an end at around the Planck length (10^-33 centimeters) and Planck time (10^-43 seconds). ...There is something lurking in the microdepths−something that might be called the bare-bones substrate of spacetime−the entity to which the familiar notion of spacetime alludes. We expect that this ur-ingredient, this most elemental spacetime stuff, does not allow dissection into ever smaller pieces because of the violent fluctuations that would ultimately be encountered.... [If] familiar spacetime is but a large-scale manifestation of some more fundamental entity, what is that entity and what are its essential properties? As of today, no one knows. (Greene 2004, pp. 473, 474, 477)

Disagreeing, the theoretical physicist Roger Penrose speaks about both loop quantum gravity and string theory and says:

...in the early days of quantum mechanics, there was a great hope, not realized by future developments, that quantum theory was leading physics to a picture of the world in which there is actually discreteness at the tiniest levels. In the successful theories of our present day, as things have turned out, we take spacetime as a continuum even when quantum concepts are involved, and ideas that involve small-scale spacetime discreteness must be regarded as ‘unconventional.’ The continuum still features in an essential way even in those theories which attempt to apply the ideas of quantum mechanics to the very structure of space and time.... Thus it appears, for the time being at least, that we need to take the use of the infinite seriously, particularly in its role in the mathematical description of the physical continuum. (Penrose 2005, 363)

b. Singularities

There is a good reason why scientists fear the infinite more than mathematicians do. Scientists have to worry that some day we will have a dangerous encounter with a singularity, with something that is, say, infinitely hot or infinitely dense. For example, we might encounter a singularity by being sucked into a black hole. According to Schwarzschild’s solution to the equations of general relativity, a simple, non-rotating black hole is infinitely dense at its center. For a second example of where there may be singularities, there is good reason to believe that 13.8 billion years ago the entire universe was a singularity with infinite temperature, infinite density, infinitesimal volume, and infinite curvature of spacetime.

Some philosophers will ask: Is it not proper to appeal to our best physical theories in order to learn what is physically possible? Usually, but not in this case, say many scientists, including Albert Einstein. He believed that, if a theory implies that some physical properties might have or, worse yet, do have actually infinite values (the so-called singularities), then this is a sure sign of error in the theory. It’s an error primarily because the theory will be unable to predict the behavior of the infinite entity, and so the theory will fail. For example, even if there were a large, shrinking universe pre-existing the Big Bang, if the Big Bang were considered to be an actual singularity, then knowledge of the state of the universe before the Big Bang could not be used to predict events after the Big Bang, or vice versa. This failure to imply the character of later states of the universe is what Einstein’s collaborator Peter Bergmann meant when he said, “A theory that involves singularities...carries within itself the seeds of its own destruction.” The majority of physicists probably would agree with Einstein and Bergmann about this, but the critics of these scientists say this belief that we need to remove singularities everywhere is merely a hope that has been turned into a metaphysical assumption.

But doesn’t quantum theory also rule out singularities? Yes. Quantum theory allows only arbitrarily large, finite values of properties such as temperature and mass-energy density. So which theory, relativity theory or quantum theory, should we trust to tell us whether the center of a black hole is or isn’t a singularity? The best answer is, “Neither, because we should get our answer from a theory of quantum gravity.” A principal attraction of string theory, a leading proposal for a theory of quantum gravity to replace both relativity theory and quantum theory, is that it eliminates the many singularities that appear in previously accepted physical theories such as relativity theory. In string theory, the electrons and quarks are not point particles but are small, finite loops of fundamental string. That finiteness in the loop is what eliminates the singularities.

Unfortunately, string theory has its own problems with infinity. It implies an infinity of kinds of particles. If a particle is a string, then the energy of the particle should be the energy of its vibrating string. Strings have an infinite number of possible vibrational patterns each corresponding to a particle that should exist if we take the theory literally. One response that string theorists make to this problem about too many particles is that perhaps the infinity of particles did exist at the time of the Big Bang but now they have all disintegrated into a shower of simpler particles and so do not exist today. Another response favored by string theorists is that perhaps there never were an infinity of particles nor a Big Bang singularity in the first place. Instead the Big Bang was a Big Bounce or quick expansion from a pre-existing, shrinking universe whose size stopped shrinking when it got below the critical Planck length of about 10^-35 meters.

c. Idealization and Approximation

Scientific theories use idealization and approximation; they are "lies that help us to see the truth," to use a phrase from the painter Pablo Picasso (who was speaking about art, not science). In our scientific theories, there are ideal gases, perfectly elliptical orbits, and economic consumers motivated only by profit. Everybody knows these are not intended to be real objects. Yet, it is clear that idealizations and approximations are actually needed in science in order to promote genuine explanation of many phenomena. We need to reduce the noise of the details in order to see what is important. In short, approximations and idealizations can be explanatory. But what about approximations and idealizations that involve the infinite?

Although the terms “idealization” and “approximation” are often used interchangeably, John Norton (Norton 2012) recommends attending to their difference. When there is some aspect of the world, some target system, that we are trying to understand scientifically, an approximation is an inexact description of the target system, whereas an idealization is a new system (or part of one) that approximates the target system while containing reference to some novel object or property. For example, elliptical orbits are approximations to the actual orbits of planets, but ideal gases are idealizations because they contain novel objects, such as point particles, that are parts of a new system useful for approximating the target system of actual gases.

All very detailed physical theories are idealizations or approximations to reality that can fail if pushed too far, but some defenders of infinity ask whether all appeals to infinity can be known a priori to be idealizations or approximations. Our theory of the solar system justifies our belief that the Earth is orbited by a moon, not just an approximate moon. The speed of light in a vacuum really is constant, not just approximately constant. Why then should it be assumed, as it often is, that all appeals to infinity in scientific theory are approximations or idealizations? Must the infinity be an artifact of the model rather than a feature of actual physical reality?  Philosophers of science disagree on this issue. See (Mundy, 1990, p. 290).

There is an argument for believing some appeals to infinity definitely are neither approximations nor idealizations. The argument presupposes a realist rather than an antirealist understanding of science, and it begins with a description of the opponents’ position. Carl Friedrich Gauss (1777-1855) was one of the greatest mathematicians of all time. He said scientific theories involve infinities merely as approximations or idealizations and merely in order to make for easy applications of those theories, when in fact all real entities are finite. At the time, nearly everyone would have agreed with Gauss. Roger Penrose argues against Gauss’ position:

Nevertheless, as tried and tested physical theory stands today—as it has for the past 24 centuries—real numbers still form a fundamental ingredient of our understanding of the physical world. (Penrose 2004, 62)

Gauss’ position could be buttressed if there were useful alternatives to our physical theories that do not use infinities. There actually are alternative mathematical theories of analysis that do not use real numbers and do not use infinite sets and do not require the line to be dense. See (Ahmavaara 1965) for an example. Representing the majority position among scientists on this issue, Penrose says, “To my mind, a physical theory which depends fundamentally upon some absurdly enormous...number would be a far more complicated (and improbable) theory than one that is able to depend upon a simple notion of infinity” (Penrose 2005, 359). David Deutsch agrees. He says, “Versions of number theory that confined themselves to ‘small natural numbers’ would have to be so full of arbitrary qualifiers, workarounds and unanswered questions, that they would be very bad explanations until they were generalized to the case that makes sense without such ad-hoc restrictions: the infinite case.” (Deutsch 2011, pp. 118-9) And surely a successful explanation is the surest route to understanding reality.

In opposition to this position of Penrose and Deutsch, and in support of Gauss’ position, the physicist Erwin Schrödinger remarks, “The idea of a continuous range, so familiar to mathematicians in our days, is something quite exorbitant, an enormous extrapolation of what is accessible to us.” Emphasizing this point about being “accessible to us,” some metaphysicians attack the applicability of the mathematical continuum to physical reality on the grounds that a continuous human perception over time is not mathematically continuous. Wesley Salmon responds to this complaint from Schrödinger:

...The perceptual continuum and perceived becoming [that is, the evidence from our sense organs that the world changes from time to time] exhibit a structure radically different from that of the mathematical continuum. Experience does seem, as James and Whitehead emphasize, to have an atomistic character. If physical change could be understood only in terms of the structure of the perceptual continuum, then the mathematical continuum would be incapable of providing an adequate description of physical processes. In particular, if we set the epistemological requirement that physical continuity must be constructed from physical points which are explicitly definable in terms of observables, then it will be impossible to endow the physical continuum with the properties of the mathematical continuum. In our discussion..., we shall see, however, that no such rigid requirement needs to be imposed. (Salmon 1970, 20)

Salmon continues by making the point that calculus provides better explanations of physical change than explanations which accept the “rigid requirement” of understanding physical change in terms of the structure of the perceptual continuum, so he recommends that we apply Ockham’s Razor and eliminate that rigid requirement. But the issue is not settled.

d. Infinity in Cosmology

Let’s review some of the history regarding the volume of spacetime. Aristotle said the past is infinite because, for any past time, we can imagine an earlier one. It is difficult to make sense of his belief about the past since he means it is potentially infinite. After all, the past has an end, namely the present, so its infinity has been completed and therefore is not a potential infinity. This problem with Aristotle’s reasoning was first raised in the 13th century by Richard Rufus of Cornwall. It was not given the attention it deserved because of the assumption for so many centuries that Aristotle couldn’t have been wrong about time, especially since his position was consistent with Christian, Jewish, and Muslim theology, which implies the physical world became coherent or well-formed only a finite time ago. However, Aquinas argued against Aristotle’s view that the past is infinite; Aquinas’ grounds were that Holy Scripture implies God created the world a finite time ago, and that Aristotle was wrong to put so much trust in what we can imagine.

Unlike time, Aristotle claimed space is finite. He said the volume of physical space is finite because it is enclosed within a finite, spherical shell of visible, fixed stars with the Earth at its center. On this topic of space not being infinite, Aristotle’s influence was authoritative to most scholars for the next eighteen hundred years.

The debate about whether the volume of space is infinite was rekindled in Renaissance Europe. The English astronomer and defender of Copernicus, Thomas Digges (1546–1595) was the first scientist to reject the ancient idea of an outer spherical shell and to declare that physical space is actually infinite in volume and filled with stars. The physicist Isaac Newton (1642–1727) at first believed the universe's material is confined to only a finite region while it is surrounded by infinite empty space, but in 1691 he realized that if there were a finite number of stars in a finite region, then gravity would require all the stars to fall in together at some central point. To avoid this result, he later speculated that the universe contains an infinite number of stars in an infinite volume. The notion of infinite time, however, was not accepted by Newton because of conflict with Christian orthodoxy, as influenced by Aquinas. We now know that Newton’s speculation about the stability of an infinity of stars in an infinite universe is incorrect. There would still be clumping so long as the universe did not expand. (Hawking 2001, p. 9)

Immanuel Kant (1724–1804) declared that space and time are both potentially infinite in extent because this is imposed by our own minds. Space and time are not features of “things in themselves” but are an aspect of the very form of any possible human experience, he said. We can know a priori even more about space than about time, he believed; and he declared that the geometry of space must be Euclidean. Kant’s approach to space and time as something knowable a priori went out of fashion in the early 20th century. It was undermined in large part by the discovery of non-Euclidean geometries in the 19th century, then by Beltrami’s and Klein’s proofs that these geometries are as logically consistent as Euclidean geometry, and finally by Einstein’s successful application to physical space of non-Euclidean geometry within his general theory of relativity.

The volume of spacetime is finite at present if we can trust the classical Big Bang theory. [But do not think of this finite space as having a boundary beyond which a traveler falls over the edge into nothingness, or a boundary that cannot be penetrated.] Assuming space is all the places that have been created since the Big Bang, then the volume of space is definitely finite at present, though it is huge and growing ever larger over time. Assuming this expansion will never stop, it follows that the volume of spacetime is potentially infinite but not actually infinite. However, if, as some theorists speculate on the basis of inflationary cosmology, everything that is a product of our Big Bang is just one “bubble” in a sea of bubbles in the infinite spacetime background of the Multiverse, then both space and time are actually infinite. For more discussion of the issue of the infinite volume of spacetime, see (Greene 2011).

In the late nineteenth century, Georg Cantor argued that the mathematical concept of potential infinity presupposes the mathematical concept of actual infinity. This argument was accepted by most later mathematicians, but it does not imply that, if future time were to be potentially infinite, then future time also would be actually infinite.

5. Infinity in Mathematics

The previous sections of this article have introduced the concepts of actual infinity and potential infinity and explored the development of calculus and set theory, but this section will probe deeper into the role of infinity in mathematics. Mathematicians always have been aware of the special difficulty in dealing with the concept of infinity in a coherent manner. Intuitively, it seems reasonable that if we have two infinities of things, then we still have an infinity of them. So, we might represent this intuition mathematically by the equation 2·∞ = 1·∞. Dividing both sides by ∞ would then prove that 2 = 1, which is a good sign we were not using infinity in a coherent manner. In recommending how to use the concept of infinity coherently, Bertrand Russell said pejoratively:

The whole difficulty of the subject lies in the necessity of thinking in an unfamiliar way, and in realising that many properties which we have thought inherent in number are in fact peculiar to finite numbers. If this is remembered, the positive theory of infinity...will not be found so difficult as it is to those who cling obstinately to the prejudices instilled by the arithmetic which is learnt in childhood. (Salmon 1970, 58)

That positive theory of infinity that Russell is talking about is set theory, and the new arithmetic is the result of Cantor’s generalizing the notions of order and of size of sets into the infinite, that is, to the infinite ordinals and infinite cardinals. These numbers are also called transfinite ordinals and transfinite cardinals. The following sections will briefly explore set theory and the role of infinity within mathematics. The main idea, though, is that the basic theories of mathematical physics are properly expressed using the differential calculus with real-number variables, and these concepts are well-defined in terms of set theory which, in turn, requires using actual infinities or transfinite infinities of various kinds.
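Cantor’s basic tool is the one-to-one correspondence: two sets have the same cardinality just in case their members can be paired off exactly. A minimal Python sketch (the function names are my own, purely for illustration) pairs the natural numbers with the even numbers, exhibiting the signature property of an actually infinite set: it can be matched one-to-one with a proper subset of itself, which no finite set can do.

```python
# Pair each natural number n with the even number 2n.
# The map is a bijection: every even number is hit exactly once,
# so the naturals and the evens have the same cardinality, even
# though the evens are a proper subset of the naturals.

def to_even(n):
    return 2 * n

def from_even(m):
    return m // 2

# We can only ever sample an infinite set, so check the
# correspondence on an initial segment.
for n in range(1000):
    m = to_even(n)
    assert m % 2 == 0          # lands among the evens
    assert from_even(m) == n   # and the pairing is invertible

print([(n, to_even(n)) for n in range(5)])
```

The same pairing trick underlies the result, mentioned above, that Cantor’s transfinite cardinals obey an arithmetic in which familiar rules such as cancellation fail.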

a. Infinite Sums

In the 17th century, when Newton and Leibniz invented calculus, they wondered what the value is of this infinite sum:

1/1 + 1/2 + 1/4 + 1/8 + ....

They believed the sum is 2. Knowing about the dangers of talking about infinity, most later mathematicians hoped to find a technique to avoid using the phrase “infinite sum.” Cauchy and Weierstrass eventually provided this technique two centuries later. They removed any mention of “infinite sum” by using the formal idea of a limit. Informally, the Cauchy-Weierstrass idea is that instead of overtly saying the infinite sum s1 + s2 + s3 + … is some number S, as Newton and Leibniz were saying, one should say that the partial sums converge to S just in case the numerical difference between any pair of partial sums is as small as one desires, provided both partial sums are taken sufficiently far out in the sequence. More formally, letting Sn stand for the nth partial sum s1 + s2 + … + sn, it is expressed this way: The series s1 + s2 + s3 + … converges if, and only if, for every positive number ε there exists a number δ such that |Sn+h − Sn| < ε for all integers n > δ and all integers h > 0. In this way, reference to an actual infinity has been eliminated.

This epsilon-delta technique of talking about limits was due to Cauchy in 1821 and Weierstrass in the period from 1850 to 1871. The two drawbacks to this technique are that (1) it is unintuitive and more complicated than Newton and Leibniz’s intuitive approach that did mention infinite sums, and (2) it is not needed because infinite sums were eventually legitimized by being given a set-theoretic foundation.
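The limit idea can be made concrete for the series above. The following Python sketch (the variable names and the chosen tolerance are mine, for illustration) computes partial sums of 1/1 + 1/2 + 1/4 + ... and finds an index beyond which the partial sums stay within a chosen ε of 2, with no mention of an “infinite sum”:

```python
# Partial sums S_n = 1 + 1/2 + ... + 1/2^(n-1) of the geometric series.
def partial_sum(n):
    return sum(1 / 2**k for k in range(n))

# For a chosen tolerance epsilon, find an index delta beyond which
# every partial sum stays within epsilon of the limit 2.
epsilon = 1e-6
delta = 0
while 2 - partial_sum(delta) >= epsilon:
    delta += 1

print(delta, partial_sum(delta))
```

Because the tail of the series beyond the nth term is exactly 2^(1-n), the loop terminates for any positive ε, which is the epsilon-delta idea in miniature.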

b. Infinitesimals and Hyperreals

There has been considerable controversy throughout history about how to understand infinitesimal objects and infinitesimal changes in the properties of objects. Intuitively an infinitesimal object is as small as you please but not quite nothing. Infinitesimal objects and infinitesimal methods were first used by Archimedes in ancient Greece, but he did not mention them in any publication intended for the public because he did not consider his use of them to be rigorous. Infinitesimals became better known when Leibniz used them in his differential and integral calculus. The differential calculus can be considered to be a technique for treating continuous motion as being composed of an infinite number of infinitesimal steps. The calculus’ use of infinitesimals led to the so-called “golden age of nothing” in which infinitesimals were used freely in mathematics and science. During this period, Leibniz, Euler, and the Bernoullis applied the concept. Euler applied it cavalierly (although his intuition was so good that he rarely if ever made mistakes), but Leibniz and the Bernoullis were concerned with the general question of when we could, and when we could not, consider an infinitesimal to be zero. They were aware of apparent problems with these practices in large part because they had been exposed by Berkeley.

In 1734, George Berkeley attacked the concept of infinitesimal as ill-defined and incoherent because there were no definite rules for when the infinitesimal should be and shouldn’t be considered to be zero. Berkeley, like Leibniz, was thinking of infinitesimals as objects with a constant value--as genuinely infinitesimally small magnitudes--whereas Newton thought of them as variables that could arbitrarily approach zero. Either way, there were coherence problems. The scientists and results-oriented mathematicians of the golden age of nothing had no good answer to the coherence problem. As standards of rigorous reasoning increased over the centuries, mathematicians became more worried about infinitesimals. They were delighted when Cauchy in 1821 and Weierstrass in the period from 1850 to 1875 developed a way to use calculus without infinitesimals, and at this time any appeal to infinitesimals was considered illegitimate, and mathematicians soon stopped using infinitesimals.

Here is how Cauchy and Weierstrass eliminated infinitesimals with their concept of limit. Suppose we have a function f,  and we are interested in the Cartesian graph of the curve y = f(x) at some point a along the x axis. What is the rate of change of  f at a? This is the slope of the tangent line at a, and it is called the derivative f' at a. This derivative was defined by Leibniz to be

f'(a) = [f(a + h) − f(a)] / h

where h is an infinitesimal. Because of suspicions about infinitesimals, Cauchy and Weierstrass suggested replacing Leibniz’s definition of the derivative with

f'(a) = lim_{x→a} [f(x) − f(a)] / (x − a)

That is, f'(a) is the limit, as x approaches a, of the above ratio. The limit idea was rigorously defined using Cauchy’s well-known epsilon and delta method. Soon after the Cauchy-Weierstrass definition of the derivative was formulated, mathematicians stopped using infinitesimals.
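The limit definition can be watched numerically. The following sketch uses the illustrative choices f(x) = x² and a = 3, where the derivative is 6; the difference quotient settles toward 6 as x approaches a, with no infinitesimal h in sight.

```python
# A numerical sketch of the Cauchy-Weierstrass definition: the
# difference quotient (f(x) - f(a)) / (x - a) settles down to the
# derivative as x approaches a. f and a are illustrative choices.

def f(x):
    return x * x          # f(x) = x^2, whose derivative at a is 2a

def difference_quotient(a, x):
    return (f(x) - f(a)) / (x - a)

a = 3.0                   # so f'(a) should be 6
quotients = [difference_quotient(a, a + 10**-k) for k in range(1, 8)]

# The quotients approach 6 without ever setting x equal to a and
# without any appeal to an infinitesimal h.
assert abs(quotients[-1] - 6.0) < 1e-5
assert all(abs(q - 6.0) > abs(quotients[-1] - 6.0) for q in quotients[:-1])
```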

The scientists did not follow the lead of the mathematicians. Despite the lack of a coherent theory of infinitesimals, scientists continued to reason with infinitesimals because infinitesimal methods were so much more intuitively appealing than the mathematicians’ epsilon-delta methods. Students in calculus classes in the early 21st century are still taught the unintuitive epsilon-delta methods, but Abraham Robinson (Robinson 1966) created a rigorous alternative to standard Weierstrassian analysis by using the methods of model theory to define infinitesimals.

Here is Robinson’s idea. Think of the rational numbers in their natural order as being gappy with real numbers filling the gaps between them. Then think of the real numbers as being gappy with hyperreals filling the gaps between them. There is a cloud or region of hyperreals surrounding each real number (that is, surrounding each real number described nonstandardly). To develop these ideas more rigorously, Robinson used this simple definition of an infinitesimal:

h is infinitesimal if and only if 0 < |h| < 1/n, for every positive integer n.

|h| is the absolute value of h.

Robinson did not actually define an infinitesimal as a number on the real line. The infinitesimals were defined on a new number line, the hyperreal line, that contains within it the structure of the standard real numbers from classical analysis. In this sense the hyperreal line is the extension of the reals to the hyperreals. The development of analysis via infinitesimals creates a nonstandard analysis with a hyperreal line and a set of hyperreal numbers that include real numbers. In this nonstandard analysis, 78+2h is a hyperreal that is infinitesimally close to the real number 78. Sums and products of infinitesimals are infinitesimal.
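Arithmetic like 78 + 2h can be caricatured in code by tracking a number together with the coefficient of a single infinitesimal h and discarding all h² terms. This is the algebra of dual numbers, only a loose analogue of Robinson's hyperreals (in the genuine hyperreals, h·h is a nonzero infinitesimal, whereas here it truncates to zero); the class and method names are illustrative.

```python
# A crude computational caricature of hyperreal arithmetic: numbers of
# the form a + b*h with h infinitesimal, where terms in h**2 are simply
# dropped. This is the algebra of dual numbers, NOT Robinson's actual
# construction, which uses model theory and keeps h*h nonzero.

class Hyper:
    def __init__(self, real, infinitesimal=0.0):
        self.real = real                  # the "standard part"
        self.inf = infinitesimal          # coefficient of h

    def __add__(self, other):
        return Hyper(self.real + other.real, self.inf + other.inf)

    def __mul__(self, other):
        # (a + b*h)(c + d*h) = ac + (ad + bc)*h + bd*h**2; the last
        # term is discarded in this truncated model.
        return Hyper(self.real * other.real,
                     self.real * other.inf + self.inf * other.real)

    def standard_part(self):
        return self.real

h = Hyper(0.0, 1.0)                       # a bare infinitesimal
x = Hyper(78.0) + Hyper(2.0) * h          # the hyperreal 78 + 2h

assert x.standard_part() == 78.0          # infinitesimally close to 78
s = h + h                                 # a sum of infinitesimals ...
assert s.real == 0.0 and s.inf != 0.0     # ... is again infinitesimal
```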

Because of the rigor of the extension, all the arguments for and against Cantor’s infinities apply equally to the infinitesimals. Sentences about the standardly-described reals are true if and only if they are true in this extension to the hyperreals. Nonstandard analysis allows proofs of all the classical theorems of standard analysis, but it very often provides shorter, more direct, and more elegant proofs than those that were originally proved by using standard analysis with epsilons and deltas. Objections by practicing mathematicians to infinitesimals subsided after this was appreciated. With a good definition of “infinitesimal” they could then use it to explain related concepts such as in the sentence, “That curve approaches infinitesimally close to that line.” See (Wolf 2005, chapter 7) for more about infinitesimals and hyperreals.

c. Mathematical Existence

Mathematics is apparently about mathematical objects, so it is apparently about infinitely large objects, infinitely small objects, and infinitely many objects. Mathematicians who are doing mathematics and are not being careful about ontology too easily remark that there are infinite dimensional spaces, the continuum, continuous functions, an infinity of functions, and this or that infinite structure. Do these infinities really exist? The philosophical literature is filled with arguments pro and con and with fine points about senses of existence.

When axiomatizing geometry, Euclid said that between any two points one could choose to construct a line. Opposed to Euclid’s constructivist stance, many modern axiomatizers take a realist philosophical stance by declaring simply that there exists a line between any two points, so the line pre-exists any construction process. In mathematics, the constructivist will recognize the existence of a mathematical object only if there is at present an algorithm (that is, a step by step “mechanical” procedure operating on symbols that is finitely describable, that requires no ingenuity and that uses only finitely many steps) for constructing or finding such an object. Assertions require proofs. The constructivist believes that to justifiably assert the negation of a sentence S is to prove that the assumption of S leads to a contradiction. So, legitimate mathematical objects must be shown to be constructible in principle by some mental activity and cannot be assumed to pre-exist any such construction process nor to exist simply because their non-existence would be contradictory. A constructivist, unlike a realist, is a kind of conceptualist, one who believes that an unknowable mathematical object is impossible. Most constructivists complain that, although potential infinites can be constructed, actual infinities cannot be.

There are many different schools of constructivism. The first systematic one, and perhaps the most well known and most radical, is due to L.E.J. Brouwer. He is not a finitist, but his intuitionist school demands that all legitimate mathematics be constructible from a basis of mental processes he called “intuitions.” These intuitions might be more accurately called “clear mental procedures.” If there were no minds capable of having these intuitions, then there would be no mathematical objects, just as there would be no songs without ideas in the minds of composers. Numbers are human creations. The number pi is intuitionistically legitimate because we have an algorithm for computing all its decimal digits, but the following number g is not legitimate: g is the number whose nth digit is either 0 or 1, and it is 1 if and only if there are n consecutive 7s in the decimal expansion of pi. No person yet knows how to construct the decimal digits of g. Brouwer argued that the actually infinite set of natural numbers cannot be constructed (using intuitions) and so does not exist. The best we can do is to have a rule for adding more members to a set. So, his concept of an acceptable infinity is closer to that of potential infinity than actual infinity. Hermann Weyl emphasizes the merely potential character of these infinities:

Brouwer made it clear, as I think beyond any doubt, that there is no evidence supporting the belief in the existential character of the totality of all natural numbers…. The sequence of numbers which grows beyond any stage already reached by passing to the next number, is a manifold of possibilities open towards infinity; it remains forever in the status of creation, but is not a closed realm of things existing in themselves. (Weyl is quoted in (Kleene 1967, p. 195))

It is not legitimate for platonic realists, said Brouwer, to bring all the sets into existence at once by declaring they are whatever objects satisfy all the axioms of set theory. Brouwer believed realists accept too many sets because they are too willing to accept sets merely by playing coherently with the finite symbols for them when sets instead should be tied to our experience. For Brouwer this experience is our experience of time. He believed we should arrive at our concept of the infinite by noticing that our experience of a duration can be divided into parts and then these parts can be further divided, and so on. This infinity is a potential infinity, not an actual infinity. For the intuitionist, there is no determinate, mind-independent mathematical reality which provides the facts to make mathematical sentences true or false. This metaphysical position is reflected in the principles of logic that are acceptable to an intuitionist. For the intuitionist, the sentence “For all x, x has property F” is true only if we have already proved constructively that each x has property F. And it is false only if we have proved that some x does not have property F. Otherwise, it is neither true nor false. The intuitionist does not accept the principle of excluded middle: For any sentence S, either S or the negation of S. Outraged by this intuitionist position, David Hilbert famously responded by saying, “To take the law of the excluded middle away from the mathematician would be like denying the astronomer the telescope or the boxer the use of his fists.” (quoted from Kleene 1967, p. 197) For a presentation of intuitionism with philosophical emphasis, see (Posy 2005) and (Dummett 1977).
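Brouwer's number g can be made concrete with a short computation. With only a finite prefix of pi's decimal expansion in hand, we can sometimes certify that the nth digit of g is 1, but we can never certify that it is 0; a longer prefix might always reveal the required run of 7s. The 50-digit prefix below is hard-coded, and the function name is illustrative.

```python
# Why g is not intuitionistically legitimate: from a finite prefix of
# pi we can confirm a digit of g is 1, but never that it is 0.

# First 50 decimal digits of pi (hard-coded; any finite prefix would do)
PI_PREFIX = "14159265358979323846264338327950288419716939937510"

def nth_digit_of_g_so_far(n):
    """Return 1 if n consecutive 7s occur in the known prefix of pi,
    otherwise None: 'not yet constructed', rather than 0."""
    if "7" * n in PI_PREFIX:
        return 1
    return None          # a longer prefix might still reveal n sevens

assert nth_digit_of_g_so_far(1) == 1      # a single 7 appears early on
assert nth_digit_of_g_so_far(6) is None   # six 7s in a row: unknown
```

The `None` branch is the constructivist's point: no finite amount of searching justifies asserting the digit is 0.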

Finitists, even those who are not constructivists, also argue that the actually infinite set of natural numbers does not exist. They say there is a finite rule for generating each numeral from the previous one, but the rule does not produce an actual infinity of either numerals or numbers. The ultrafinitist considers the classical finitist to be too liberal because finite numbers such as 2^100 and 2^1000 can never be accessed by a human mind in a reasonable amount of time. Only the numerals or symbols for those numbers can be coherently manipulated. One challenge to ultrafinitists is that they should explain where the cutoff point is between numbers that can be accessed and numbers that cannot be. Ultrafinitists have risen to this challenge. The mathematician Harvey Friedman says:

I raised just this objection [about a cutoff] with the (extreme) ultrafinitist Yessenin-Volpin during a lecture of his. He asked me to be more specific. I then proceeded to start with 2^1 and asked him whether this is “real” or something to that effect. He virtually immediately said yes. Then I asked about 2^2, and he again said yes, but with a perceptible delay. Then 2^3, and yes, but with more delay. This continued for a couple of more times, till it was obvious how he was handling this objection. Sure, he was prepared to always answer yes, but he was going to take 2^100 times as long to answer yes to 2^100 than he would to answering 2^1. There is no way that I could get very far with this. (Elwes 2010, 317)

This battle among competing philosophies of mathematics will not be explored in depth in this article, but this section will offer a few more points about mathematical existence.

Hilbert argued that, “If the arbitrarily given axioms do not contradict one another, then they are true and the things defined by the axioms exist.” But (Chihara 2008, 141) points out that Hilbert seems to be confusing truth with truth in a model. If a set of axioms is consistent, and so is its corresponding axiomatic theory, then the theory defines a class of models, and each axiom is true in any such model, but it does not follow that the axioms are really true. To give a crude, nonmathematical example, consider this set of two axioms {All horses are blue, all cows are green.}. The formal theory using these axioms is consistent and has a model, but it does not follow that either axiom is really true.

Quine objected to Hilbert's criterion for existence as being too liberal. Quine’s argument for infinity in mathematics begins by noting that our fundamental scientific theories are our best tools for helping us understand reality and doing ontology. Mathematical theories which imply the existence of some actually infinite sets are indispensable to all these scientific theories, and their referring to these infinities cannot be paraphrased away. All this success is a good reason to believe in some actual infinite sets and to say the sentences of both the mathematical theories and the scientific theories are true or approximately true since their success would otherwise be a miracle. But, he continues, of course it is no miracle. See (Quine 1960 chapter 7).

Quine believed that infinite sets exist only if they are indispensable in successful applications of mathematics to science; but he believed science so far needs only the first three alephs: ℵ0 for the integers, ℵ1 for the set of point places in space, and ℵ2 for the number of possible lines in space (including lines that are not continuous). The rest of Cantor’s heaven of transfinite numbers is unreal, Quine said, and the mathematics of the extra transfinite numbers is merely “recreational mathematics.” But Quine showed intellectual flexibility by saying that if he were to be convinced more transfinite sets were needed in science, then he’d change his mind about which alephs are real. To briefly summarize Quine’s position, his indispensability argument treats mathematical entities on a par with all other theoretical entities in science and says mathematical statements can be (approximately) true. Quine points out that reference to mathematical entities is vital to science, and there is no way of separating out the evidence for the mathematics from the evidence for the science. This famous indispensability argument has been attacked in many ways. Critics charge, “Quite aside from the intrinsic logical defects of set theory as a deductive theory, this is disturbing because sets are so very different from physical objects as ordinarily conceived, and because the axioms of set theory are so very far removed from any kind of empirical support or empirical testability…. Not even set theory itself can tell us how the existence of a set (e.g. a power set) is empirically manifested.” (Mundy 1990, pp. 289-90). See (Parsons 1980) for more details about Quine’s and other philosophers’ arguments about existence of mathematical objects.

d. Zermelo-Fraenkel Set Theory

Cantor initially thought of a set as being a collection of objects that can be counted, but this notion eventually gave way to a set being a collection that has a clear membership condition. Over several decades, Cantor’s naive set theory evolved into ZF, Zermelo-Fraenkel set theory, and ZF was accepted by most mid-20th century mathematicians as the correct tool to use for deciding which mathematical objects exist. The acceptance was based on three reasons. (1) ZF is precise and rigorous. (2) ZF is useful for defining or representing other mathematical concepts and methods. Mathematics can be modeled in set theory; it can be given a basis in set theory. (3) No inconsistency has been uncovered despite heavy usage.

Notice that one of the three reasons is not that set theory provides a foundation to mathematics in the sense of justifying the doing of mathematics or in the sense of showing its sentences are certain or necessary. Instead, set theory provides a basis for theories only in the sense that it helps to organize them, to reveal their interrelationships, and to provide a means to precisely define their concepts. The first program for providing this basis began in the late 19th century. Peano had given an axiomatization of the natural numbers. It can be expressed in set theory using standard devices for treating natural numbers and relations and functions and so forth as being sets. (For example, zero is the empty set, and a relation is a set of ordered pairs.) Then came the arithmetization of analysis which involved using set theory to construct from the natural numbers all the negative numbers and the fractions and real numbers and complex numbers. Along with this, the principles of these numbers became sentences of set theory. In this way, the assumptions used in informal reasoning in arithmetic are explicitly stated in the formalism, and proofs in informal arithmetic can be rewritten as formal proofs so that no creativity is required for checking the correctness of the proofs. Once a mathematical theory is given a set theoretic basis in this manner, it follows that if we have any philosophical concerns about the higher level mathematical theory, those concerns will also be concerns about the lower level set theory in the basis.

In addition to Dedekind’s definition, there are other acceptable definitions of "infinite set" and "finite set" using set theory. One popular one is to define a finite set as any set that can be put into one-to-one correspondence with the set of all natural numbers that are less than some natural number n. That finite set contains n elements. An infinite set is then defined as one that is not finite. Dedekind himself used another definition: he defined a finite set as any set for which there exists no one-to-one mapping of the set into a proper subset of itself, and an infinite set as one that is not finite in this sense. The philosopher C. S. Peirce suggested essentially the same approach as Dedekind at approximately the same time, but he received little notice from the professional community. For more discussion of the details, see (Wilder 1965, p. 66f, and Suppes 1960, p. 99n).
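Dedekind's criterion can be checked by brute force on small sets: no finite set admits a one-to-one map into a proper subset of itself (the pigeonhole principle), while the natural numbers do, via n → n + 1. The helper below is an illustrative sketch, feasible only for sets of a few elements.

```python
# Brute-force illustration of Dedekind's definitions on small sets.
from itertools import product

def has_injection_into_proper_subset(elements):
    """Try every proper subset as a target, and every function into it,
    looking for a one-to-one map. Exponential cost: toy sizes only."""
    elems = list(elements)
    n = len(elems)
    for mask in range((1 << n) - 1):          # every proper subset
        target = [e for i, e in enumerate(elems) if mask >> i & 1]
        for images in product(target, repeat=n):
            if len(set(images)) == n:         # one-to-one?
                return True
    return False

# No finite set passes the test, so it is Dedekind-finite:
assert not has_injection_into_proper_subset({'a', 'b', 'c'})

# But n -> n + 1 maps {0, 1, 2, ...} one-to-one into its proper subset
# {1, 2, 3, ...}; of course we can only ever check a finite prefix:
prefix = range(1000)
images = [n + 1 for n in prefix]
assert len(set(images)) == len(images)        # one-to-one on the prefix
assert 0 not in images                        # lands in a proper subset
```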

Set theory implies quite a bit about infinity. First, infinity in ZF has some very unsurprising features. If a set A is infinite and is the same size as set B, then B also is infinite. If A is infinite and is a subset of B, then B also is infinite. Using the axiom of choice, it follows that a set is infinite just in case for every natural number n, there is some subset whose size is n.

ZF’s axiom of infinity declares that there is at least one infinite set, a so-called inductive set containing zero and the successor of each of its members (such as {0, 1, 2, 3, …}). The power set axiom (which says every set has a power set, namely a set of all its subsets) then generates many more infinite sets of larger cardinality, a surprising result that Cantor first discovered in 1874.
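The power set axiom's effect on cardinality has a finite glimpse: a set of n elements has 2^n subsets, so the power set is always strictly larger. Cantor's 1874 result is the infinite analogue. The helper function below is an illustrative sketch using the standard library.

```python
# A finite glimpse of the power set axiom: |P(S)| = 2**|S|.
from itertools import combinations

def power_set(s):
    """Return the list of all subsets of s."""
    elems = list(s)
    return [set(c) for r in range(len(elems) + 1)
                   for c in combinations(elems, r)]

S = {0, 1, 2}
P = power_set(S)
assert len(P) == 2 ** len(S)     # 8 subsets for a 3-element set
assert set() in P and S in P     # from the empty set up to S itself
```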

In ZF, there is no set with maximum cardinality, nor a set of all sets, nor an infinitely descending sequence of sets x0, x1, x2, ... in which x1 is in x0, and x2 is in x1, and so forth. There is however, an infinitely ascending sequence of sets x0, x1, x2, ... in which x0 is in x1, and x1 is in x2, and so forth. In ZF, a set exists if it is implied by the axioms; there is no requirement that there be some property P such that the set is the extension of P. That is, there is no requirement that the set be defined as {x| P(x)} for some property P. One especially important feature of ZF is that for any condition or property, there is at most one set of objects having that property, but it cannot be assumed that for any property, there is a set of all those objects that have that property. For example, it cannot be assumed that, for the property of being a set, there is a set of all objects having that property.

In ZF, all sets are pure. A set is pure if it is empty or its members are sets, and its members' members are sets, and so forth. In informal set theory, a set can contain cows and electrons and other non-sets.

In the early years of set theory, the terms "set" and "class" and “collection” were used interchangeably, but in von Neumann–Bernays–Gödel set theory (NBG) a set is defined to be a class that is an element of some other class. NBG is designed to have proper classes, classes that are not sets, even though they can have members which are sets. The intuitive idea is that a proper class is a collection that is too big to be a set. There can be a proper class of all sets, but neither a set of all sets nor a class of all classes. A nice feature of NBG is that a sentence in the language of ZFC is provable in NBG if and only if it is provable in ZFC.

Are philosophers justified in saying there is more to know about sets than is contained within ZF set theory? If V is the collection or class of all sets, do mathematicians have any access to V independently of the axioms? This is an open question that arose concerning the axiom of choice and the continuum hypothesis.

e. The Axiom of Choice and the Continuum Hypothesis

Consider whether to believe in the axiom of choice. The axiom of choice is the assertion that, given any collection of non-empty and non-overlapping sets, there exists a ‘choice set’ which is composed of one element chosen from each set in the collection. However, the axiom does not say how to do the choosing. For some sets there might not be a precise rule of choice. If the collection is infinite and its sets are not well-ordered in any way that has been specified, then there is in general no way to define the choice set. The axiom is implicitly used throughout the field of mathematics, and several important theorems cannot be proved without it. Mathematical Platonists tend to like the axiom, but those who want explicit definitions or constructions for sets do not like it. Nor do others who note that mathematics’ most unintuitive theorem, the Banach-Tarski Theorem, requires the axiom of choice. The dispute can get quite intense with advocates of the axiom of choice saying that their opponents are throwing out invaluable mathematics, while these opponents consider themselves to be removing tainted mathematics. See (Wagon 1985) for more on the Banach-Tarski Theorem; see (Wolf 2005, pp. 226-8) for more discussion of which theorems require the axiom.

A set is always smaller than its power set. How much bigger is the power set? Cantor’s controversial continuum hypothesis says that the cardinality of the power set of ℵ0 is ℵ1, the next larger cardinal number, and not some higher cardinal. The generalized continuum hypothesis is more general; it says that, given an infinite set of any cardinality, the cardinality of its power set is the next larger cardinal and not some even higher cardinal. Cantor believed the continuum hypothesis is true, but he was frustrated that he could not prove it. The philosophical issue is whether we should alter the axioms to enable the hypotheses to be proved.

If ZF is formalized as a first-order theory of deductive logic, then both Cantor’s generalized continuum hypothesis and the axiom of choice are consistent with the other principles of set theory but cannot be proved or disproved from them, assuming that ZF is not inconsistent. In this sense, both the continuum hypothesis and the axiom of choice are independent of ZF. Gödel in 1940 and Cohen in 1964 contributed to the proof of this independence result.

So, how do we decide whether to believe the axiom of choice and continuum hypothesis, and how do we decide whether to add them to the principles of ZF or any other set theory? Most mathematicians do believe the axiom of choice is true, but there is more uncertainty about the continuum hypothesis. The independence does not rule out our someday finding a convincing argument that the hypothesis is true or a convincing argument that it is false, but the argument will need more premises than just the principles of ZF. At this point the philosophers of mathematics divide into two camps. The realists, who think there is a unique universe of sets to be discovered, believe that if ZF does not fix the truth values of the continuum hypothesis and the axiom of choice, then this is a defect within ZF and we need to explore our intuitions about infinity in order to uncover a missing axiom or two for ZF that will settle the truth values. These persons prefer to think that there is a single system of mathematics to which set theory is providing a foundation, but they would prefer not simply to add the continuum hypothesis itself as an axiom because the hope is to make the axioms "readily believable," yet it is not clear enough that the axiom itself is readily believable. The second camp of philosophers of mathematics disagree and say the concept of infinite set is so vague that we simply do not have any intuitions that will or should settle the truth values. According to this second camp, there are set theories with and without axioms that fix the truth values of the axiom of choice and the continuum hypothesis, and set theory should no more be a unique theory of sets than Euclidean geometry should be the unique theory of geometry.

Believing that ZFC’s infinities are merely the above-surface part of the great iceberg of infinite sets, many set theorists are actively exploring new axioms that imply the existence of sets that could not be proved to exist within ZFC. So far there is no agreement among researchers about the acceptability of any of the new axioms. See (Wolf 2005, pp. 226-8) and (Rucker 1982) pp. 252-3 for more discussion of the search for these new axioms.

6. Infinity in Deductive Logic

The infinite appears in many interesting ways in formal deductive logic, and this section presents an introduction to a few of those ways. Among all the various kinds of formal deductive logics, first-order logic (the usual predicate logic) stands out as especially important, in part because of the accuracy and detail with which it can mirror mathematical deductions. First-order logic also stands out because it is the strongest logic that has a proof for every one of its logically true sentences, and that is compact in the sense that if an infinite set of its sentences is inconsistent, then so is some finite subset.

But just what is first-order logic? To answer this and other questions, it is helpful to introduce some technical terminology. Here is a chart of what is ahead:

First-order language: a formal language with quantifiers over objects but not over sets of objects.
First-order theory: a set of sentences expressed in a first-order language.
First-order formal system: a first-order theory plus its method for building proofs.
First-order logic: a first-order language with its method for building proofs.

A first-order theory is a set of sentences expressed in a first-order language (which will be defined below). A first-order formal system is a first-order theory plus its deductive structure (method of building proofs). The term “first-order logic” is ambiguous. It can mean a first-order language with its deductive structure, or it can mean simply the academic subject or discipline that studies first-order languages and theories.

Classical first-order logic is distinguished by its satisfying certain classically-accepted assumptions: it has only two truth values; in an interpretation or valuation [note: the terminology is not standardized], every sentence gets exactly one of the two truth values; no well-formed formula (wff) can contain an infinite number of symbols; a valid deduction cannot be made from true sentences to a false one; deductions cannot be infinitely long; the domain of an interpretation cannot be empty but can be of any finite or infinite cardinality; an individual constant (name) must name something in the domain; and so forth.

A formal language specifies the language’s vocabulary symbols and its syntax, primarily what counts as being a term or name and what are its well-formed formulas (wffs). A first-order language is a formal language whose symbols are the quantifiers (∃), connectives (↔), constants (a), variables (x), predicates or relations (R), and perhaps functions (f) and equality (=). It has a denumerable list of variables. (A set is denumerable or countably infinite if it has size ℵ0.) A first-order language has finitely many or countably infinitely many predicate symbols and function symbols, but not zero of both. First-order languages differ from each other only in their predicate symbols, function symbols, or constant symbols, or in having or not having the equality symbol. See (Wolf 2005, p. 23) for more details. Every wff in a first-order language must contain only finitely many symbols. There are denumerably many terms, formulas, and sentences. Because there are uncountably many real numbers, a theory of real numbers in a first-order language does not have enough names for all the real numbers.

To carry out proofs or deductions in a first-order language, the language needs to be given a deductive structure. There are several different ways to do this (via axioms, natural deduction, sequent calculus), but the ways all are independent of which first-order language is being used, and they all require specifying rules such as modus ponens for how to deduce wffs from finitely many previous wffs in the deduction.

To give some semantics or meaning to its symbols, the first-order language needs a definition of valuation and of truth in a valuation and of validity of an argument. In a propositional logic, the valuation assigns to each sentence letter a single truth value; in predicate logic each term is given a denotation, and each predicate is given a set of objects in the domain that satisfy the predicate. The valuation rules then determine the truth values of all the wffs. The valuation’s domain is a set containing all the objects that the terms might denote and that the variables range over. The domain may be of any finite or transfinite size, but the variables can range only over objects in this domain, not over sets of those objects.

Because a first-order language cannot successfully express sentences that generalize over sets (or properties or classes or relations) of the objects in the domain, it cannot, for example, adequately express Leibniz’s Law that, “If objects a and b are identical, then they have the same properties.” A second-order language can do this. A language is second-order if in addition to quantifiers on variables that range over objects in the domain it also has quantifiers (such as the universal quantifier ∀P) on a second kind of variable P that ranges over properties (or classes or relations) of these objects. Here is one way to express Leibniz’s Law in second-order logic:

(a = b) → ∀P(Pa ↔ Pb)

P is called a predicate variable or property variable. Every valid deduction in first-order logic is also valid in second-order logic. A language is third-order if it has quantifiers on variables that range over properties of properties of objects (or over sets of sets of objects), and so forth. A language is called higher-order if it is at least second-order.

The definition of first-order theory given earlier in this section was that it is any set of wffs in a first-order language. A more ordinary definition adds that it is closed under deduction. This additional requirement implies that every deductive consequence of some sentences of the theory also is in the theory. Since the set of deductive consequences is countably infinite, all ordinary first-order theories are countably infinite.

If the language isn’t explicitly mentioned for a first-order theory, then it is generally assumed that the language is the smallest first-order language that contains all the sentences of the theory. Valuations of the language in which all the sentences of the theory are true are said to be models of the theory.

If the theory is axiomatized, then in addition to the logical axioms there are proper axioms (also called non-logical axioms); these axioms are specific to the theory (and so usually do not hold in other first-order theories). For example, Peano’s axioms when expressed in a first-order language are proper axioms for the formal theory of arithmetic, but they aren't logical axioms or logical truths. See (Wolf, 2005, pp. 32-3) for specific proper axioms of Peano Arithmetic and for proofs of some of its important theorems.

Besides the above problem about Leibniz’s Law, there is a related problem about infinity that occurs when Peano Arithmetic is expressed as a first-order theory. Gödel’s First Incompleteness Theorem proves that there are some bizarre truths which are independent of first-order Peano Arithmetic (PA), and so cannot be deduced within PA. None of these truths so far are known to lie in mainstream mathematics. But they might. And there is another reason to worry about the limitations of PA. Because the set of sentences of PA is only countable, whereas there are uncountably many sets of numbers in informal arithmetic, it might be that PA is inadequate for expressing and proving some important theorems about sets of numbers. See (Wolf 2005, pp. 33-4, 225).

It seems that all the important theorems of arithmetic and the rest of mathematics can be expressed and proved in another first-order theory, Zermelo-Fraenkel set theory with the axiom of choice (ZFC). Unlike first-order Peano Arithmetic, ZFC needs only a very simple first-order language that surprisingly has no undefined predicate symbol, equality symbol, or function symbol, other than a single two-place relation symbol intended to represent set membership. The domain is intended to be composed only of sets, but since mathematical objects can be defined to be sets, the domain contains these mathematical objects.

a. Finite and Infinite Axiomatizability

In the process of axiomatizing a theory, any sentence of the theory can be called an axiom. When axiomatizing a theory, there is no problem with having an infinite number of axioms so long as the set of axioms is decidable, that is, so long as there is a finitely long computation or mechanical procedure for deciding, for any sentence, whether it is an axiom.

Logicians are curious as to which formal theories can be finitely axiomatized in a given formal system and which can only be infinitely axiomatized. Group theory is finitely axiomatizable in classical first-order logic, but Peano Arithmetic and ZFC are not. Peano Arithmetic is not finitely axiomatizable because it requires an axiom scheme for induction. An axiom scheme is a countably infinite number of axioms of similar form, and an axiom scheme for induction would be an infinite number of axioms of the form (expressed here informally): “If property P of natural numbers holds for zero, and also holds for n+1 whenever it holds for natural number n, then P holds for all natural numbers.” There needs to be a separate axiom for every property P, but there is a countably infinite number of these properties expressible in a first-order language of elementary arithmetic.
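The point that a scheme-generated axiom set remains decidable can be made concrete with a toy sketch. The Python below uses an invented string syntax for induction instances (not Peano’s actual formal language); it builds the instance for a given one-variable formula and mechanically checks whether a sentence has the required shape:

```python
# Toy illustration: an axiom *scheme* yields infinitely many axioms, yet
# membership stays decidable because each instance has a fixed, checkable form:
#   (P(0) & forall n (P(n) -> P(S(n)))) -> forall n P(n)
# where P is any one-variable formula (here, a function from a variable string
# to a formula string).  All syntax below is invented for this sketch.

def induction_instance(p):
    """Build the induction axiom instance for the formula p(x)."""
    return (f"({p('0')} & forall n ({p('n')} -> {p('S(n)')}))"
            f" -> forall n {p('n')}")

def is_induction_axiom(sentence, candidate_formulas):
    # A finite, mechanical test: compare the sentence against the instance
    # generated from each candidate formula (finitely enumerated in this toy).
    return any(induction_instance(p) == sentence for p in candidate_formulas)

even = lambda x: f"E({x})"          # "x is even", as an uninterpreted formula
axiom = induction_instance(even)
print(axiom)   # (E(0) & forall n (E(n) -> E(S(n)))) -> forall n E(n)
print(is_induction_axiom(axiom, [even]))          # True
print(is_induction_axiom("E(0) -> E(1)", [even])) # False
```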

Assuming ZF is consistent, ZFC is not finitely axiomatizable in first-order logic, as Richard Montague discovered. Nevertheless ZFC is a subset of von Neumann–Bernays–Gödel (NBG) set theory, and the latter is finitely axiomatizable, as Paul Bernays discovered. The first-order theory of Euclidean geometry is not finitely axiomatizable, and the second-order logic used in (Field 1980) to reconstruct mathematical physics without quantifying over numbers also is not finitely axiomatizable. See (Mendelson 1997) for more discussion of finite axiomatizability.

b. Infinitely Long Formulas

An infinitary logic is a logic that makes one of classical logic’s necessarily finite features be infinite. In the languages of classical first-order logic, every formula is required to be only finitely long, but an infinitary logic might relax this. The original, intuitive idea behind requiring finitely long sentences in classical logic was that logic should reflect the finitude of the human mind. But with increasing opposition to psychologism in logic, that is, to making logic somehow dependent on human psychology, researchers began to ignore the finitude restrictions. Löwenheim in about 1915 was perhaps the pioneer here. In 1957, Alfred Tarski and Dana Scott explored permitting the operations of conjunction and disjunction to link infinitely many formulas into an infinitely long formula. Tarski also suggested allowing formulas to have a sequence of quantifiers of any transfinite length. William Hanf proved in 1964 that, unlike classical logics, these infinitary logics fail to be compact. See (Barwise 1975) for more discussion of these developments.

c. Infinitely Long Proofs

Classical formal logic requires proofs to contain a finite number of steps. In the mid-20th century with the disappearance of psychologism in logic, researchers began to investigate logics with infinitely long proofs as an aid to simplifying consistency proofs. See (Barwise 1975).

d. Infinitely Many Truth Values

One reason for permitting an infinite number of truth values is to represent the idea that truth is a matter of degree. The intuitive idea is that, say, depending on the temperature, the sentence “This cup of coffee is warm” might be definitely true, less true, even less true, and so forth.

One of the simplest infinite-valued semantics uses a continuum of truth values. Its valuations assign to each basic sentence (a formal sentence that contains no connectives or quantifiers) a truth value that is a specific number in the closed interval of real numbers from 0 to 1. The truth value of the vague sentence “This water is warm” is understood to be definitely true if it has the truth value 1 and definitely false if it has the truth value 0. To sentences having main connectives, the valuation assigns to the negation ~P of any sentence P the truth value of one minus the truth value assigned to P. It assigns to the conjunction P & Q the minimum of the truth values of P and of Q. It assigns to the disjunction P v Q the maximum of the truth values of P and of Q, and so forth.
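The valuation clauses just described are easy to make concrete. The sketch below (using an invented tuple encoding for formulas, chosen only for this illustration) implements exactly the 1-minus, minimum, and maximum rules:

```python
# Minimal sketch of the min/max infinite-valued semantics described above.
# Formulas: ("atom", name), ("not", f), ("and", f, g), ("or", f, g).
# A valuation maps atom names to real numbers in the closed interval [0, 1].

def truth(formula, valuation):
    tag = formula[0]
    if tag == "atom":
        return valuation[formula[1]]
    if tag == "not":                      # ~P gets one minus the value of P
        return 1 - truth(formula[1], valuation)
    if tag == "and":                      # P & Q gets the minimum
        return min(truth(formula[1], valuation), truth(formula[2], valuation))
    if tag == "or":                       # P v Q gets the maximum
        return max(truth(formula[1], valuation), truth(formula[2], valuation))
    raise ValueError(f"unknown connective: {tag}")

v = {"warm": 0.6}
# Note that "P v ~P" is no longer definitely true when P is vague:
print(truth(("or", ("atom", "warm"), ("not", ("atom", "warm"))), v))  # 0.6
```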

One advantage to using an infinite-valued semantics is that by permitting modus ponens to produce a conclusion that is slightly less true than either premise, we can create a solution to the paradox of the heap, the sorites paradox. One disadvantage is that there is no well-motivated choice for the specific real number that is the truth value of a vague statement. What is the truth value appropriate to “This water is warm” when the temperature is 100 degrees Fahrenheit and you are interested in cooking pasta in it? Is the truth value 0.635? This latter problem of assigning truth values to specific sentences without being arbitrary has led to the development of fuzzy logics in place of the simpler infinite-valued semantics we have been considering. Lotfi Zadeh suggested that instead of vague sentences having any of a continuum of precise truth values we should make the continuum of truth values themselves imprecise. His suggestion was to assign a sentence a truth value that is a fuzzy set of numerical values, a set for which membership is a matter of degree. For more details, see (Nolt 1997, pp. 420-7).
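The modus ponens point can be sketched numerically. One standard way (our choice for this illustration; the text above does not commit to a particular rule) is the Łukasiewicz lower bound, on which the conclusion’s value is at least value(P) + value(P → Q) − 1. Chaining sorites conditionals of value 0.99 then erodes the heap claim step by step:

```python
# Sorites sketch: start from a fully true premise ("1,000,000 grains make a
# heap") and repeatedly apply modus ponens with a conditional ("if n grains
# make a heap, so do n-1") whose truth value is slightly less than 1.
# The Łukasiewicz rule used here is one standard choice, assumed for this toy.

def chain_truth(start, conditional_value, steps):
    value = start
    for _ in range(steps):
        # Modus ponens conclusion: at least value + conditional_value - 1,
        # clamped at 0 (nothing is less than definitely false).
        value = max(0.0, value + conditional_value - 1)
    return value

print(chain_truth(1.0, 0.99, 50))   # ~0.5: only "half true" it's still a heap
print(chain_truth(1.0, 0.99, 200))  # 0.0: definitely no longer a heap
```

Each premise is nearly true, yet no step is to blame for the false conclusion: the truth leaks away gradually, which is the proposed dissolution of the paradox.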

e. Infinite Models

A countable language is a language with countably many symbols. The Löwenheim-Skolem Theorem says:

If a first-order theory in a countable language has an infinite model, then it has a countably infinite model.

This is a surprising result about infinity. Would you want your theory of real numbers to have a countable model? Strictly speaking it is a puzzle and not a paradox, because the property of being countably infinite is a property the model has when viewed from outside the object language, not within it. The theorem does not imply that first-order theories of real numbers must have no more real numbers than there are natural numbers.

The Löwenheim-Skolem Theorem can be extended to say that if a theory in a countable language has a model of some infinite size, then it also has models of any infinite size. This is a limitation on first-order theories; they do not permit having a categorical theory of an infinite structure. A formal theory is said to be categorical if any two models satisfying the theory are isomorphic. The two models are isomorphic if they have the same structure, and they can’t be isomorphic if they have different sizes. So, if you create a first-order theory intended to describe a single infinite structure of a certain size, the theory will end up having, for any infinite size, a model of that size. This frustrates the hopes of anyone who would like to have a first-order theory of arithmetic that has models only of size ℵ0, and to have a first-order theory of real numbers that has models only of size 2^ℵ0. See (Enderton 1972, pp. 142-3) for more discussion of this limitation.

Because of this limitation, many logicians have turned to second-order logics. There are second-order categorical theories for the natural numbers and for the real numbers. Unfortunately, there is no sound and complete deductive structure for any second-order logic having a decidable set of axioms; this is a major negative feature of second-order logics.

To illustrate one more surprise regarding infinity in formal logic, notice that the quantifiers are defined in terms of their domain, the domain of discourse. In a first-order set theory, the expression ∃xPx says there exists some set x in the infinite domain of all the sets such that x has property P. Unfortunately, in ZF there is no set of all sets to serve as this domain. So, it is oddly unclear what the expression ∃xPx means when we intend to use it to speak about sets.

f. Infinity and Truth

According to Alfred Tarski’s Undefinability Theorem, in an arbitrary first-order language a global truth predicate is not definable. A global truth predicate is a predicate which is satisfied by all and only the names (via, say, Gödel numbering) of the true sentences of the formal language. According to Tarski, since no single language has a global truth predicate, the best approach to expressing truth formally within the language is to expand the language into an infinite hierarchy of languages, with each higher language (the metalanguage) containing a truth predicate that can apply to all and only the true sentences of languages lower in the hierarchy. This process is iterated into the transfinite to obtain Tarski's hierarchy of metalanguages. Some philosophers have suggested that this infinite hierarchy is implicit within natural languages such as English, but other philosophers, including Tarski himself, believe an informal language does not contain within it a formal language.

To handle the concept of truth formally, Saul Kripke rejects the infinite hierarchy of metalanguages in favor of an infinite hierarchy of interpretations (that is, valuations) of a single language, such as a first-order predicate calculus, with enough apparatus to discuss its own syntax. The language’s intended truth predicate T is the only basic (atomic) predicate that is ever partially-interpreted at any stage of the hierarchy. At the first step in the hierarchy, all predicates but the single predicate T(x) are interpreted. T(x) is completely uninterpreted at this level. As we go up the hierarchy, the interpretations of the other basic predicates are unchanged, but T is satisfied by the names of sentences that were true at lower levels. For example, at the second level, T is satisfied by the name of the sentence ∀x(Fx v ~Fx). At each step in the hierarchy, more sentences get truth values, but any sentence that has a truth value at one level has that same truth value at all higher levels. T almost becomes a global truth predicate when the inductive interpretation-building reaches the first so-called fixed point level. At this countably infinite level, although T is a truth predicate for all those sentences having one of the two classical truth values, the predicate is not quite satisfied by the names of every true sentence because it is not satisfied by the names of some of the true sentences containing T. At this fixed point level, the Liar sentence (of the Liar Paradox) is still neither true nor false. For this reason, the Liar sentence is said to fall into a “truth gap” in Kripke’s theory of truth. See (Kripke, 1975).
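Kripke’s construction can be miniaturized. The sketch below is a drastically simplified model, with an invented sentence encoding, not Kripke’s actual formal apparatus: the interpretation of T is extended monotonically until nothing changes (a fixed point), and a Liar-style sentence never receives a value:

```python
# Toy model of Kripke's hierarchy of interpretations.  A "sentence" is either
# ("atom", b), a base sentence with fixed classical value b, or ("T", i) /
# ("notT", i), asserting that sentence number i is (is not) true.  A value is
# True, False, or None ("no truth value yet").  T starts wholly uninterpreted
# and is only ever extended, so values, once assigned, never change.

def kripke_fixed_point(sentences):
    # Stage 0: only base sentences are interpreted.
    vals = [s[1] if s[0] == "atom" else None for s in sentences]
    changed = True
    while changed:                        # iterate to the least fixed point
        changed = False
        for i, s in enumerate(sentences):
            if vals[i] is None and s[0] in ("T", "notT"):
                target = vals[s[1]]
                if target is not None:    # settled at a lower level
                    vals[i] = target if s[0] == "T" else not target
                    changed = True
    return vals

# Sentence 2 is the Liar: "sentence 2 is not true".  It falls into a truth gap.
print(kripke_fixed_point([("atom", True), ("T", 0), ("notT", 2)]))
# [True, True, None]
```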

(Yablo 1993) produced a semantic paradox somewhat like the Liar Paradox. Yablo claimed there is no way to coherently assign a truth value to any of the sentences in the countably infinite sequence of sentences of the form, “None of the subsequent sentences are true.” Ask yourself whether the first sentence in the sequence could be true. Notice that no sentence overtly refers to itself. There is controversy in the literature about whether the paradox actually contains a hidden appeal to self-reference, and there has been some investigation of the parallel paradox in which “true” is replaced by “provable.” See (Beall 2001).

7. Conclusion

There are many aspects of the infinite that this article does not cover. Here are some of them: renormalization in quantum field theory, supertasks and infinity machines, categorematic and syncategorematic uses of the word “infinity,” mereology, ordinal and cardinal arithmetic in ZF, the various non-ZF set theories, non-standard solutions to Zeno's Paradoxes, Cantor's arguments for the Absolute, Kant’s views on the infinite, quantifiers that assert the existence of uncountably many objects, and the detailed arguments for and against constructivism, intuitionism, and finitism. For more discussion of these latter three programs, see (Maddy 1992).

8. References and Further Reading

  • Ahmavaara, Y. (1965). “The Structure of Space and the Formalism of Relativistic Quantum Theory,” Journal of Mathematical Physics, 6, 87-93.
    • Uses finite arithmetic in mathematical physics, and argues that this is the correct arithmetic for science.
  • Barrow, John D. (2005). The Infinite Book: A Short Guide to the Boundless, Timeless and Endless. Pantheon Books, New York.
    • An informal and easy-to-understand survey of the infinite in philosophy, theology, science and mathematics. Says which Western philosopher throughout the centuries said what about infinity.
  • Barwise, Jon. (1975) “Infinitary Logics,” in Modern Logic: A Survey, E. Agazzi (ed.), Reidel, Dordrecht, pp. 93-112.
    • An introduction to infinitary logics that emphasizes historical development.
  • Beall, J.C. (2001). “Is Yablo’s Paradox Non-Circular?” Analysis 61, no. 3, pp. 176-87.
    • Discusses the controversy over whether the Yablo Paradox is or isn’t indirectly circular.
  • Cantor, Georg. (1887). "Über die verschiedenen Ansichten in Bezug auf die actualunendlichen Zahlen." Bihang till Kongl. Svenska Vetenskaps-Akademien Handlingar, Bd. 11 (1886-7), article 19. P. A. Norstedt & Söner: Stockholm.
    • A very early description of set theory and its relationship to old ideas about infinity.
  • Chihara, Charles. (1973). Ontology and the Vicious-Circle Principle. Ithaca: Cornell University Press.
    • Pages 63-65 give Chihara’s reasons for why the Gödel-Cohen independence results are evidence against mathematical Platonism.
  • Chihara, Charles. (2008). “The Existence of Mathematical Objects,” in Proof & Other Dilemmas: Mathematics and Philosophy, Bonnie Gold & Roger A. Simons, eds., The Mathematical Association of America.
    • In chapter 7, Chihara provides a fine survey of the ontological issues in mathematics.
  • Deutsch, David. (2011). The Beginning of Infinity: Explanations that Transform the World. Penguin Books, New York City.
    • Emphasizes the importance of successful explanation in understanding the world, and provides new ideas on the nature and evolution of our knowledge.
  • Descartes, René. (1641). Meditations on First Philosophy.
    • The third meditation says, “But these properties [of God] are so great and excellent, that the more attentively I consider them the less I feel persuaded that the idea I have of them owes its origin to myself alone. And thus it is absolutely necessary to conclude, from all that I have before said, that God exists….”
  • Dummett, Michael. (1977). Elements of Intuitionism. Oxford University Press, Oxford.
    • A philosophically rich presentation of intuitionism in logic and mathematics.
  • Elwes, Richard. (2010). Mathematics 1001: Absolutely Everything That Matters About Mathematics in 1001 Bite-Sized Explanations, Firefly Books, Richmond Hill, Ontario.
    • Contains the quoted debate between Harvey Friedman and a leading ultrafinitist.
  • Enderton, Herbert B. (1972). A Mathematical Introduction to Logic. Academic Press: New York.
    • An introduction to deductive logic that presupposes the mathematical sophistication of an advanced undergraduate mathematics major. The corollary proved on p. 142 says that if a theory in a countable language has a model of some infinite size, then it also has models of any infinite size.
  • Feferman, Anita Burdman, and Solomon. (2004) Alfred Tarski: Life and Logic, Cambridge University Press, New York.
    • A biography of Alfred Tarski, the 20th century Polish and American logician.
  • Field, Hartry. (1980). Science Without Numbers: A Defense of Nominalism. Princeton: Princeton University Press.
    • Field’s program is to oppose the Quine-Putnam Indispensability argument which apparently implies that mathematical physics requires the existence of mathematical objects such as numbers and sets. Field tries to reformulate scientific theories so, when they are formalized in second-order logic, their quantifiers do not range over abstract mathematical entities. Field’s theory uses quantifiers that range over spacetime points. However, because it uses a second order logic, the theory is also committed to quantifiers that range over sets of spacetime points, and sets are normally considered to be mathematical objects.
  • Gödel, Kurt. (1947/1983). “What is Cantor’s Continuum Problem?” American Mathematical Monthly 54, 515-525. Revised and reprinted in Philosophy of Mathematics: Selected Readings, Paul Benacerraf and Hilary Putnam (eds.), Prentice-Hall, Inc. Englewood Cliffs, 1964.
    • Gödel argues that the failure of ZF to provide a truth value for Cantor’s continuum hypothesis implies a failure of ZF to correctly describe the Platonic world of sets.
  • Greene, Brian. (2004). The Fabric of the Cosmos. Random House, Inc., New York.
    • Promotes the virtues of string theory.
  • Greene, Brian (1999). The Elegant Universe. Vintage Books, New York.
    • The quantum field theory called quantum electrodynamics (QED) is discussed on pp. 121-2.
  • Greene, Brian. (2011). The Hidden Reality: Parallel Universes and the Deep Laws of the Cosmos. Vintage Books, New York.
    • A popular survey of cosmology with an emphasis on string theory.
  • Hawking, Stephen. (2001). The Illustrated A Brief History of Time: Updated and Expanded Edition. Bantam Dell. New York.
    • Chapter 4 of Brief History contains an elementary and non-mathematical introduction to quantum mechanics and Heisenberg’s uncertainty principle.
  • Hilbert, David. (1925). “On the Infinite,” in Philosophy of Mathematics: Selected Readings, Paul Benacerraf and Hilary Putnam (eds.), Prentice-Hall, Inc. Englewood Cliffs, 1964. 134-151.
    • Hilbert promotes what is now called the Hilbert Program for solving the problem of the infinite by requiring a finite basis for all acceptable assertions about the infinite.
  • Kleene, Stephen Cole. (1967). Mathematical Logic. John Wiley & Sons: New York.
    • An advanced textbook in mathematical logic.
  • Kripke, Saul. (1975). "Outline of a Theory of Truth," Journal of Philosophy 72, pp. 690–716.
    • Describes how to create a truth predicate within a formal language that avoids assigning a truth value to the Liar Sentence.
  • Leibniz, Gottfried. (1702). "Letter to Varignon, with a note on the 'Justification of the Infinitesimal Calculus by that of Ordinary Algebra,'" pp. 542-6. In Leibniz Philosophical Papers and Letters, translated by Leroy E. Loemker (ed.). D. Reidel Publishing Company, Dordrecht, 1969.
    • Leibniz defends the actual infinite in calculus.
  • Levinas, Emmanuel. (1961). Totalité et Infini. The Hague: Martinus Nijhoff.
    • In Totality and Infinity, the Continental philosopher Levinas describes infinity in terms of the possibilities a person confronts upon encountering other conscious beings.
  • Maddy, Penelope. (1992). Realism in Mathematics. Oxford: Oxford University Press.
    • A discussion of the varieties of realism in mathematics and the defenses that have been, and could be, offered for them. The book is an extended argument for realism about mathematical objects. She offers a set theoretic monism in which all physical objects are sets.
  • Maor, E. (1991). To Infinity and Beyond: A Cultural History of the Infinite. Princeton: Princeton University Press.
    • A survey of many of the issues discussed in this encyclopedia article.
  • Mendelson, Elliott. (1997). An Introduction to Mathematical Logic, 4th ed. London: Chapman & Hall.
    • Pp. 225–86 discuss NBG set theory.
  • Mill, John Stuart. (1843). A System of Logic: Ratiocinative and Inductive. Reprinted in J. M. Robson, ed., Collected Works, volumes 7 and 8. Toronto: University of Toronto Press, 1973.
    • Mill argues for empiricism and against accepting the references of theoretical terms in scientific theories if the terms can be justified only by the explanatory success of those theories.
  • Moore, A. W. (2001). The Infinite. Second edition, Routledge, New York.
    • A popular survey of the infinite in metaphysics, mathematics, and science.
  • Mundy, Brent. (1990). “Mathematical Physics and Elementary Logic,” Proceedings of the Biennial Meeting of the Philosophy of Science Association. Vol. 1990, Volume 1. Contributed Papers (1990), pp. 289-301.
    • Discusses the relationships among set theory, logic and physics.
  • Nolt, John. Logics. (1997). Wadsworth Publishing Company, Belmont, California.
    • An undergraduate logic textbook containing in later chapters a brief introduction to non-standard logics such as those with infinite-valued semantics.
  • Norton, John. (2012). "Approximation and Idealization: Why the Difference Matters," Philosophy of Science, 79, pp. 207-232.
    • Recommends being careful about the distinction between approximation and idealization in science.
  • Owen, H. P. (1967). “Infinity in Theology and Metaphysics.” In Paul Edwards (Ed.) The Encyclopedia of Philosophy, volume 4, pp. 190-3.
    • This survey of the topic is still reliable.
  • Parsons, Charles. (1980). “Quine on the Philosophy of Mathematics.” In L. Hahn and P. Schilpp (Eds.) The Philosophy of W. V. Quine, pp. 396-403. La Salle IL: Open Court.
    • Argues against Quine’s position that whether a mathematical entity exists depends on the indispensability of the mathematical term denoting that entity in a true scientific theory.
  • Penrose, Roger. (2005). The Road to Reality: A Complete Guide to the Laws of the Universe. New York: Alfred A. Knopf.
    • A fascinating book about the relationship between mathematics and physics. Many of its chapters assume sophistication in advanced mathematics.
  • Posy, Carl. (2005). “Intuitionism and Philosophy.” In Stewart Shapiro. Ed. (2005). The Oxford Handbook of Philosophy of Mathematics and Logic. Oxford: Oxford University Press.
    • The history of the intuitionism of Brouwer, Heyting and Dummett. Pages 330-1 explain how Brouwer uses choice sequences to develop “even the infinity needed to produce a continuum” non-empirically.
  • Quine, W. V. (1960). Word and Object. Cambridge: MIT Press.
    • Chapter 7 introduces Quine’s viewpoint that set theoretic objects exist because they are needed in the basis of our best scientific theories.
  • Quine, W. V. (1986). The Philosophy of W. V. Quine. Editors: Lewis Edwin Hahn and Paul Arthur Schilpp, Open Court, LaSalle, Illinois.
    • Contains the quotation saying infinite sets exist only insofar as they are needed for scientific theory.
  • Robinson, Abraham. (1966). Non-Standard Analysis. Princeton Univ. Press, Princeton.
    • Robinson’s original theory of the infinitesimal and its use in real analysis to replace the Cauchy-Weierstrass methods that use epsilons and deltas.
  • Rucker, Rudy. (1982). Infinity and the Mind: The Science and Philosophy of the Infinite. Birkhäuser: Boston.
    • A survey of set theory with much speculation about its metaphysical implications.
  • Russell, Bertrand. (1914). Our Knowledge of the External World as a Field for Scientific Method in Philosophy. Open Court Publishing Co.: Chicago.
    • Russell champions the use of contemporary real analysis and physics in resolving Zeno’s paradoxes. Chapter 6 is “The Problem of Infinity Considered Historically,” and that chapter is reproduced in (Salmon, 1970).
  • Salmon, Wesley, ed. (1970). Zeno's Paradoxes. The Bobbs-Merrill Company, Inc., Indianapolis.
    • A collection of the important articles on Zeno's Paradoxes plus a helpful and easy-to-read preface providing an overview of the issues.
  • Smullyan, Raymond. (1967). “Continuum Problem,” in Paul Edwards (ed.), The Encyclopedia of Philosophy, Macmillan Publishing Co. & The Free Press: New York.
    • Discusses the variety of philosophical reactions to the discovery of the independence of the continuum hypotheses from ZF set theory.
  • Suppes, Patrick. (1960). Axiomatic Set Theory. D. Van Nostrand Company, Inc.: Princeton.
    • An undergraduate-level introduction to set theory.
  • Tarski, Alfred. (1924). “Sur les Ensembles Finis,” Fundamenta Mathematicae, Vol. 6, pp. 45-95.
    • Surveys and evaluates alternative definitions of finitude and infinitude proposed by Zermelo, Russell, Sierpinski, Kuratowski, Tarski, and others.
  • Wagon, Stan. (1985). The Banach-Tarski Paradox. Cambridge University Press: Cambridge.
    • The unintuitive Banach-Tarski Theorem says a solid sphere can be decomposed into a finite number of parts and then reassembled into two solid spheres of the same radius as the original sphere. Unfortunately you cannot double your sphere of solid gold this way.
  • Wilder, Raymond L. (1965) Introduction to the Foundations of Mathematics, 2nd ed., John Wiley & Sons, Inc.: New York.
    • An undergraduate-level introduction to the foundation of mathematics.
  • Wolf, Robert S. (2005). A Tour through Mathematical Logic. The Mathematical Association of America: Washington, D.C.
    • Chapters 2 and 6 describe set theory and its historical development. Both the history of the infinitesimal and the development of Robinson’s nonstandard model of analysis are described clearly on pages 280-316.
  • Yablo, Stephen. (1993). “Paradox without Self-Reference.” Analysis 53: 251-52.
    • Yablo presents a Liar-like paradox involving an infinite sequence of sentences that, the author claims, is “not in any way circular,” unlike with the traditional Liar Paradox.

 

Author Information

Bradley Dowden
Email: dowden@csus.edu
California State University Sacramento
U. S. A.

Dynamic Epistemic Logic

Dynamic Epistemic Logic

This article tells the story of the rise of dynamic epistemic logic, which began with epistemic logic, the logic of knowledge, in the 1960s. Then, in the late 1980s, came dynamic epistemic logic, the logic of change of knowledge. Much of it was motivated by puzzles and paradoxes. The number of active researchers in these logics grows significantly every year, possibly because of the many connections and applications to computer science, to multi-agent systems, to philosophy, and to cognitive science. The modal knowledge operators in epistemic logic are formally interpreted by employing binary accessibility relations in multi-agent Kripke models (relational structures), where these relations should be equivalence relations to respect the properties of knowledge.

The operators for change of knowledge correspond to another sort of modality, more akin to a dynamic modality. A peculiarity of this dynamic modality is that it is interpreted by transforming the Kripke structures used to interpret knowledge, and not, at least not on first sight, by an accessibility relation given with a Kripke model. Although called dynamic epistemic logic, this two-sorted modal logic applies to more general settings than the logic of merely S5 knowledge. The present article discusses in depth the early history of dynamic epistemic logic. It then mentions briefly a number of more recent developments involving factual change, one (of several) standard translations to temporal epistemic logic, and a relation to situation calculus (a well-known framework in artificial intelligence to represent change). Special attention is then given to the relevance of dynamic epistemic logic for belief revision, for speech act theory, and for philosophical logic. The part on philosophical logic pays attention to Moore sentences, the Fitch paradox, and the Surprise Examination.

For the main body of this article, go to Dynamic Epistemic Logic.

Author Information

Hans van Ditmarsch, LORIA, CNRS – University of Lorraine, France
Wiebe van der Hoek, The University of Liverpool, United Kingdom
Barteld Kooi, University of Groningen, Netherlands

 

Carnap: Modal Logic

Rudolf Carnap: Modal Logic

In two works, a paper in The Journal of Symbolic Logic in 1946 and the book Meaning and Necessity in 1947, Rudolf Carnap developed a modal predicate logic containing a necessity operator N, whose semantics depends on the claim that, where α is a formula of the language, Nα represents the proposition that α is logically necessary. Carnap’s view was that Nα should be true if and only if α itself is logically valid, or, as he put it, is L-true. In the light of the criticisms of modal logic developed by W.V. Quine from 1943 on, the challenge for Carnap was how to produce a theory of validity for modal predicate logic in a way which enables an answer to be given to these criticisms. This article discusses Carnap’s motivation for developing a modal logic in the first place; and it then looks at how the modal predicate logic developed in his 1946 paper might be adapted to answer Quine’s objections. The adaptation is then compared with the way in which Carnap himself tried to answer Quine’s complaints in the 1947 book. Particular attention is paid to the problem of how to treat the meaning of formulas which contain a free individual variable in the scope of a modal operator, that is, to the problem of how to handle what Quine called the third grade of ‘modal involvement’.

Table of Contents

  1. Introduction
  2. Carnap’s Propositional Modal Logic
  3. Carnap’s (Non-Modal) Predicate Logic
  4. Carnap’s 1946 Modal Predicate Logic
  5. De Re Modality
  6. Individual Concepts
  7. References and Further Reading

1. Introduction

In an important article (Carnap 1946) and in a book a year later, (Carnap 1947), Rudolf Carnap articulated a system of modal logic. Carnap took himself to be doing two things; the first was to develop an account of the meaning of modal expressions; the second was to extend it to apply to what he called “modal functional logic” — that is, what we would call modal predicate logic or modal first-order logic. Carnap distinguishes between a logic or a ‘semantical system’, and a ‘calculus’, which is an axiomatic system, and states on p. 33 of 1946 that “So far, no forms of MFC [modal functional calculus] have been constructed, and the construction of such a system is our chief aim.” In fact, in the preceding issue of The Journal of Symbolic Logic, the first presentation of Ruth Barcan’s systems of modal predicate logic had already appeared, although it contained only an axiomatization and no semantics. (Barcan 1946.) The principal importance of Carnap’s work is thus his attempt to produce a semantics for modal predicate logic, and it is that concern that this article will focus on.

Nevertheless, first-order logic is founded on propositional logic, and Carnap first looks at non-modal propositional logic and modal propositional logic. I shall follow Carnap in using ~ and ∨ for negation and disjunction, though I shall use ∧ in place of Carnap’s ‘.’ for conjunction. Carnap takes these as primitive together with ‘t’ which stands for an arbitrary tautologous sentence. He recognises that ∧ and t can be defined in terms of ~ and ∨, but prefers to take them as primitive because of the importance to his presentation of conjunctive normal form. Carnap adopts the standard definitions of ⊃ and ≡. I will, however, deviate from Carnap’s notation by using Greek in place of German letters for metalinguistic symbols. In place of ‘valid’ Carnap speaks of L-true, and in place of ‘unsatisfiable’, L-false. α L-implies β iff (if and only if) α ⊃ β is valid. α and β are L-equivalent iff α ≡ β is valid.

One might at this stage ask what led Carnap to develop a modal logic at all. The clue here seems to be the influence of Wittgenstein. In his philosophical autobiography Carnap writes:

For me personally, Wittgenstein was perhaps the philosopher who, besides Russell and Frege, had the greatest influence on my thinking. The most important insight I gained from his work was the conception that the truth of logical statements is based only on their logical structure and on the meaning of the terms. Logical statements are true under all conceivable circumstances; thus their truth is independent of the contingent facts of the world. On the other hand, it follows that these statements do not say anything about the world and thus have no factual content. (Carnap 1963, p. 25)

Wittgenstein’s account of logical truth depended on the view that every (cognitively meaningful) sentence has truth conditions. (Wittgenstein 1921, 4.024.) Carnap certainly appears to have taken Wittgenstein’s remark as endorsing the truth-conditional theory of meaning. (See for instance Carnap 1947 p. 9.) If all logical truths are tautologies, and all tautologies are contentless, then you don’t need metaphysics to explain (logical) necessity.

One of the features of Wittgenstein’s view was that any way the world could be is determined by a collection of particular facts, where each such fact occupies a definite position in logical space, and where the way that position is occupied is independent of the way any other position of logical space is occupied. Such a world may be described in a logically perfect language, in which each atomic formula describes how a position of logical space is occupied. So suppose that we begin with this language, and instead of asking whether it reflects the structure of the world, we ask whether it is a useful language for describing the world. From Carnap’s perspective, (Carnap 1950) one might describe it in such a way as this. Given a language £ we may ask whether £ is adequate, or perhaps merely useful, for describing the world as we experience it. It is incoherent to speak about what the world in itself is like without presupposing that one is describing it. What makes £ a Carnapian equivalent of a logically perfect language would be that each of its atomic sentences is logically independent of any other atomic sentence, and that every possible world can be described by a state-description.

2. Carnap’s Propositional Modal Logic

In (non-modal) propositional logic the truth value of any well-formed formula (wff) is determined by an assignment of truth values to the atomic sentences. For Carnap an assignment of truth values to the atomic sentences is represented by what he calls a ‘state-description’. This term, like much in what follows, is only introduced at the predicate level (1946, p. 50) but it is less confusing to present it first for the propositional case, where a state-description, which I will refer to as s, is a class consisting of atomic wff or their negations, such that for each atomic wff p, exactly one of p or ~p is in s. (Here we may think of p as a propositional variable, or as a metalinguistic variable standing for an atomic wff.) Armed with a state-description s we may determine the truth of a wff α at s in the usual way, where s ╞ α means that α is true according to s, and s ╡ α means that not s ╞ α:

If α is atomic, then s ╞ α if α ∈ s, and s ╡ α if ~α ∈ s

s ╞ ~α iff s ╡ α

s ╞ α ∨ β iff s ╞ α or s ╞ β

s ╞ α ∧ β iff s ╞ α and s ╞ β

s ╞ t

This is not the way Carnap describes it. Carnap speaks of the range of a wff (p. 50). In Carnap’s terms the truth rules would be written:

If α is atomic then the range of α is those state-descriptions s such that α ∈ s.

Where V is the set of all state-descriptions, the range of ~α is V minus the range of α, that is, it is the class of those state-descriptions which are not in the range of α.

The range of α ∨ β is the range of α ∪ the range of β, that is, the class of state-descriptions which are either in the range of α or the range of β.

The range of α ∧ β is the range of α ∩ the range of β, that is, the class of state-descriptions which are in both the range of α and the range of β.

The range of t is V.

It should I hope be easy to see, first that Carnap’s way of putting things is equivalent to my use of s ╞ α, and second that these are in turn equivalent to the standard definitions of validity in terms of assignments of truth values.
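These equivalences can be made concrete in a short program. The following Python sketch is my own toy encoding, not Carnap's: formulas are nested tuples, and a state-description is represented by the set of atomic sentences it contains, so that for each atom exactly one of p and ~p is settled. L-truth (validity) is then truth in every state-description:

```python
from itertools import product

# Formulas as nested tuples: ('atom', 'p'), ('not', f), ('or', f, g),
# ('and', f, g), ('t',). A state-description is modelled by the set of
# atoms it contains; an atom absent from the set counts as negated.

def holds(s, f):
    """s |= f, where s is a frozenset of the atoms true in the state-description."""
    op = f[0]
    if op == 'atom':
        return f[1] in s
    if op == 'not':
        return not holds(s, f[1])
    if op == 'or':
        return holds(s, f[1]) or holds(s, f[2])
    if op == 'and':
        return holds(s, f[1]) and holds(s, f[2])
    if op == 't':
        return True
    raise ValueError(op)

def state_descriptions(atoms):
    """All state-descriptions over a finite stock of atoms."""
    return [frozenset(a for a, v in zip(atoms, bits) if v)
            for bits in product([True, False], repeat=len(atoms))]

def l_true(f, atoms):
    """L-truth (validity): truth in every state-description."""
    return all(holds(s, f) for s in state_descriptions(atoms))

p = ('atom', 'p')
print(l_true(('or', p, ('not', p)), ['p']))  # p v ~p is L-true: True
print(l_true(p, ['p']))                       # p alone is not: False
```

Restricted to a finite stock of atoms this agrees with the usual truth-table account of validity.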

By a ‘calculus’ Carnap means an axiomatic system, and he uses ‘PC’ to indicate any axiomatic system which is closed under modus ponens (the ‘rule of implication’, p. 38) and contains “‘t’ and all sentences formed by substitution from Bernays’s four axioms [See Hilbert and Ackermann, 1950, p. 28f] of the propositional calculus”. (loc cit.) Carnap notes that the soundness of this axiom system may be established in the usual way, and then shows how the possibility of reduction to conjunctive normal form (a method which Carnap, p. 38, calls P-reduction) may be used to prove completeness.

Modal logic is obtained by the addition of the sentential operator N. Carnap notes that N is equivalent to Lewis’s ~◊~. (Note that the □ symbol was not used by Lewis, but was invented by F.B. Fitch in 1945, and first appeared in print in Barcan 1946. It was not then known to Carnap.) Carnap tells us early in his article that “the guiding idea in our construction of systems of modal logic is this: a proposition p is logically necessary if and only if a sentence expressing p is logically true.” When this is turned into a definition in terms of truth in a state-description we get the following:

s ╞ Nα iff sʹ ╞ α for every state-description sʹ.

This is because L-truth, or validity, means truth in every state-description. I shall refer to validity when N is interpreted in this way as Carnap-validity, or C-validity. This account enables Carnap to address what was an important question at the time: what is the correct system of modal logic? While Carnap is clear that different systems of modal logic can reflect different views of the meaning of the necessity operators, he is equally clear that, as he understands it, principles like Np ⊃ NNp and ~Np ⊃ N~Np are valid. It is easy to see that the validity of both these formulae follows from Carnap’s semantics for N. From this it is a short step to establishing that Carnap’s modal logic includes the principles of Lewis’s system S5, provided one takes the atomic wff to be propositional variables. However, we immediately run into a problem. Suppose that p is an atomic wff. Then there will be a state-description sʹ such that ~p ∈ sʹ. And this means that for every state-description s, s ╡ Np, and so s ╞ ~Np. But this means that ~Np will be L-true. One can certainly have a system of modal logic in which this is so. An axiomatic basis and a completeness proof for the logic of C-validity occur in Thomason 1973. (For comments on this feature of C-validity see also Makinson 1966 and Schurz 2001.) However, Carnap is clear that his system is equivalent to S5 (footnote 8, p. 41, and p. 46); and ~Np is not a theorem of S5. Further, the completeness theorem that Carnap proves, using normal forms, is a completeness proof for S5, based on Wajsberg 1933.
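The problematic behaviour of N on atomic sentences can be checked mechanically. In the following self-contained Python sketch (the tuple encoding of formulas is mine, not Carnap's), N is evaluated by quantifying over all state-descriptions, and ~Np then comes out C-valid for an atomic p, just as the argument above shows:

```python
from itertools import product

# A toy check of C-validity over a finite stock of atoms. A state-description
# is the set of atoms it contains; N quantifies over *all* state-descriptions.

def holds(s, f, states):
    op = f[0]
    if op == 'atom':
        return f[1] in s
    if op == 'not':
        return not holds(s, f[1], states)
    if op == 'or':
        return holds(s, f[1], states) or holds(s, f[2], states)
    if op == 'N':
        # s |= Nf iff f holds at every state-description
        return all(holds(s2, f[1], states) for s2 in states)
    raise ValueError(op)

def c_valid(f, atoms):
    states = [frozenset(a for a, v in zip(atoms, bits) if v)
              for bits in product([True, False], repeat=len(atoms))]
    return all(holds(s, f, states) for s in states)

p = ('atom', 'p')
# Some state-description lacks p, so Np fails everywhere; hence ~Np is
# C-valid even though ~Np is not a theorem of S5.
print(c_valid(('not', ('N', p)), ['p']))  # True
```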

How then should this problem be addressed? Part of the answer is to look at Carnap’s attitude to propositional variables:

We here make use of ‘p’, ‘q’, and so forth, as auxiliary variables; that is to say they are merely used (following Quine) for the description of certain forms of sentences. (1946, p.41)

Quine 1934 suggests that the theorems of logic are always schemata. If so then we can define a wff α as what we might call QC-valid (Quine/Carnap valid) iff every substitution instance of α is C-valid. Wffs which are QC-valid are precisely the theorems of S5.

3. Carnap’s (Non-Modal) Predicate Logic

In presenting Carnap’s 1946 predicate logic (or as he prefers to call it ‘functional logic’, FL or FC depending on whether we are considering it semantically or axiomatically) I shall use ∀x in place of (x), and ∃x in place of (∃x). FL contains a denumerable infinity of individual constants, which I will often refer to simply as ‘constants’. Carnap uses the term ‘matrix’ for wff, and the term ‘sentence’ for closed wff, that is wff with no free variables. A state-description is as for propositional logic in containing only atomic sentences or their negations. Each of these will be a wff of the form Pa1...an or ~Pa1...an, where P is an n-place predicate and a1,..., an are n individual constants, not necessarily distinct.

To define truth in such a state-description Carnap proceeds a little differently from what is now common. In place of relativising the truth of an open formula to an assignment to the variables of individuals from a domain, Carnap assumes that every individual is denoted by one and only one individual constant, and he only defines truth for sentences. If s is any state-description, and α and β are any sentences, the rules for propositional modal logic can be extended by adding the following:

s ╞ Pa1...an if Pa1...an ∈ s, and s ╡ Pa1...an if ~Pa1...an ∈ s

s ╞ a = b iff a and b are the same constant

s ╞ ∀xα iff s ╞ α[a/x] for every constant a, where α[a/x] is α with a replacing every free x.

Carnap produces the following axiomatic basis for first-order predicate logic, which he calls ‘FC’. In place of Carnap’s ( ) to indicate the universal closure of a wff, I shall use ∀, so that Carnap’s D8-1a (1946, p. 52) can be written as:

PC       ∀α where α is a PC-tautology

and so on. Carnap refers to axioms as ‘primitive sentences’ and in addition to PC, using more current names, we have:

       ∀(∀x(α ⊃ β) ⊃ (∀xα ⊃ ∀xβ))

VQ      ∀(α ⊃ ∀xα), where x is not free in α.

∀1a     ∀(∀xα ⊃ α[y/x]), where α[y/x] is just like α except in having y in place of free x, and y is any variable free for x in α

∀1b     ∀(∀xα ⊃ α[b/x]), where α[b/x] is just like α except in having b in place of free x, where b is any constant

I1         ∀x x = x

I2         ∀(x = y ⊃ (α ⊃ β)), where α and β are alike except that α has free x in 0 or more places where β has free y.

I3         a ≠ b, where a and b are different constants.

The only transformation rule is modus ponens:

MP      ├ α, ├ α ⊃ β therefore ├ β

The only thing non-standard here, except perhaps for the restriction of theorems to closed wffs, is I3, which ensures that all state-descriptions are infinite, and, as Carnap points out on p. 53, validates ∃x∃y x ≠ y. It is possible to prove the completeness of this axiomatic system with respect to Carnap’s semantics.

4. Carnap’s 1946 Modal Predicate Logic

Perhaps the most important issue in Carnap’s modal logic is its connection with the criticisms of W.V. Quine. These criticisms were well known to Carnap who cites Quine 1943. Some years later, in Quine 1953b, Quine distinguishes three grades of what he calls ‘modal involvement’. The first grade he regards as innocuous. It is no more than the metalinguistic attribution of validity to a formula of non-modal logic. In the second grade we say that where α is any sentence then Nα is true iff α itself is valid — or logically true. On pp. 166-169 Quine argues that while such a procedure is possible it is unilluminating and misleading. The third grade applies to modal predicate logic, and allows free individual variables to occur in the scope of modal operators. It is this grade that Quine finds objectionable. One of the points at issue between Quine and Carnap arises when we introduce what are called definite descriptions into the language. Much of Carnap’s discussion in his other works — see especially Carnap 1947 — elevates descriptions to a central role, but in the 1946 paper these are not involved.

The extension of Carnap’s semantics to modal logic is exactly as in the propositional case:

s ╞ Nα iff sʹ ╞ α for every state-description sʹ.

As before, a wff can be called C-valid iff it is true in every state-description, when ╞ satisfies the principle just stated. As in the propositional case if α is S5-valid then α is C-valid. However, also as in the propositional case, (quantified) S5 is not complete for C-validity. This is because, where Pa is an atomic wff, ~NPa is C-valid even though it is not a theorem of S5 — and similarly with any atomic wff. Unlike the propositional case it seems that this is a feature which Carnap welcomed in the predicate case, since he introduces some non-standard axioms.

The first set of axioms all form part of a standard basis for S5. They are as follows (p. 54, but with current names and notation):

LPCN  Where α is one of the LPC axioms PC-I3 then both α and Nα are axioms of MFC.

K         N∀(N(α ⊃ β) ⊃ (Nα ⊃ Nβ))

T         ∀(Nα ⊃ α)

5          N∀(Nα ∨ N~Nα)

BFC     N∀(N∀xα ⊃ ∀xNα)

BF       N∀(∀xNα ⊃ N∀xα)

The non-standard axioms, which show that he is attempting to axiomatise C-validity, are what Carnap calls ‘Assimilation’, ‘Variation and Generalization’ and ‘Substitution for Predicates’. (Carnap 1946, p. 54f.) In our notation these can be expressed as follows:

Ass      N∀x∀y∀z1...∀zn((x ≠ z1 ∧ ... ∧ x ≠ zn) ⊃ (Nα ⊃ Nα[y/x])), where α contains no free variables other than x, y, z1,..., zn, and no constants and no occurrences of =.

VG      N∀x∀y∀z1...∀zn((x ≠ z1 ∧ ... ∧ x ≠ zn ∧ y ≠ z1 ∧ ... ∧ y ≠ zn) ⊃ (Nα ⊃ Nα[y/x])), where α contains no free variables other than x, y, z1,..., zn, and no constants.

SP       N∀(Nα ⊃ Nβ), where β is obtained from α by uniform substitution of a complex expression for a predicate.

None of these axiom schemata is easy to process, but it is not difficult to see what the simplest instances would look like. A very simple instance of both Ass and VG is

AssP     N∀x∀y∀z(x ≠ z ⊃ (NPxyz ⊃ NPyyz))

To establish the validity of AssP it is sufficient to show that if a and c are distinct constants then NPabc ⊃ NPbbc is valid. This is trivially so, since there is some s such that s ╡ Pabc, and therefore for every s, s ╡ NPabc, and so, for every s, s ╞ NPabc ⊃ NPbbc. More telling is the case of SP. Let P be a one-place predicate and consider

SPP      N∀x(NPx ⊃ N(Px ∧ ~Px))

In this case α is Px, while β is Px ∧ ~Px, so that, in Carnap’s words, β ‘is formed from α by replacing every atomic matrix containing P by the current substitution form of β’. That is, where β is Px ∧ ~Px, it replaces α’s Px. If α had been more complex and contained Py as well as Px, then the replacement would have given Py ∧ ~ Py, and so on, where care needs to be taken to prevent any free variable being bound as a result of the replacement. In this case we have ├ ~N(Pa ∧ ~ Pa), and so ├ ~NPa.

In fact, although Carnap appears to have it in mind to axiomatise C-validity, it is easy to see that the predicate version is not recursively axiomatisable. For, where α is any LPC wff, α is not LPC-valid iff ~Nα is C-valid, and so, if C-validity were axiomatisable then LPC would be decidable. There is a hint on p. 57 that Carnap may have recognised this. He is certainly aware that the kind of reduction to normal form, with which he achieves the completeness of propositional S5, is unavailable in the predicate case, since it would lead to the decidability of LPC.

5. De Re Modality

What then can be said on the basis of Carnap 1946 to answer Quine’s complaints about modal predicate logic? Quine illustrates the problem in Quine 1943, pp. 119-121, and repeats versions of his argument many times, most famously perhaps in Quine 1953a, 1953b and 1960. The example goes like this:

(1)                                9 is necessarily greater than 7

(2)                                The number of planets = 9

therefore

(3)                                The number of planets is necessarily greater than 7.

Carnap 1946 does not introduce definite descriptions into the language, so I shall present the argument in a formalisation which only uses the resources found there. I shall also simplify the discussion by using the predicate O, where Ox means ‘x is odd’, rather than the complex predicate ‘is greater than 7’. This will avoid reference to ‘7’, which is of no relevance to Quine’s argument. P means ‘is the number of the planets’, so that Px means ‘there are x-many planets’. With this in mind I take ‘9’ to be an individual constant, and use O and P to express (1) and (2) by

(4)                                NO9

(5)                                ∃x(Px ∧ x = 9)

One could account for (4) by adding O9 as a meaning postulate in the sense of Carnap 1952, which would restrict the allowable state-descriptions to those which contain O9, though from some remarks on p. 201 of Carnap 1947 it seems that Carnap might have regarded both O and 9 as complex expressions defined by the resources of the Frege/Russell account of the natural numbers and their arithmetical properties. It also seems that he might have treated the numbers as higher-order entities referred to by higher-order expressions. If so then the necessity of arithmetical truths like (4) would derive from their analysis into logical truths. In my exposition I shall take the numerals as individual constants, and assume somehow that O9 is a logical truth, true in every state-description, and that therefore (4) is true.

In this formalisation I am ignoring the claim that the description ‘the number of the planets’ is intended to claim that there is only one such number. So much for the premises. But what about the conclusion? The problem is where to put the N. There are at least three possibilities:

(6)                                N∀x(Px ⊃ Ox)

(7)                                ∃xN(Px ∧ Ox)

(8)                                ∃x(Px ∧ NOx)

It is not difficult to show that (6) and (7) do not follow from (4) and (5). In contrast to (6) and (7), (8) does follow from (4) and (5), but there is no problem here, since (8) says that there is a necessarily odd number which is such that there happen to be that many planets. And this is true, because 9 is necessarily odd, and there are 9 planets. All of this should make clear how the phenomenon which upset Quine can be presented in the formal language of the 1946 article. Quine of course claims not to make sense of quantifying in. (See for instance the comments on Smullyan 1948 in Quine 1969, p. 338.)
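The scope distinctions among (6), (7) and (8) can be checked in a small finite model. The following Python sketch is my own illustrative construction, not Carnap's: his language has infinitely many constants, so this is only a toy approximation in which the constants are the numbers 6 to 9, O (‘odd’) holds of the same numbers in every state-description, and each state-description fixes a unique number of the planets:

```python
# Toy finite model (my own construction). Each state-description fixes
# which constant satisfies P ('is the number of the planets'); O ('odd')
# does not vary across state-descriptions.
CONSTANTS = [6, 7, 8, 9]
STATES = [{'P': {n}, 'O': {7, 9}} for n in CONSTANTS]  # one state per planet-count
actual = STATES[CONSTANTS.index(9)]                    # nine planets

def O(s, x): return x in s['O']
def P(s, x): return x in s['P']
def N(pred):  # necessity: truth in every state-description
    return lambda s, x: all(pred(s2, x) for s2 in STATES)

# Premises (4) NO9 and (5) Ex(Px & x = 9) hold at the actual world:
print(N(O)(actual, 9))                                  # True
print(any(P(actual, x) and x == 9 for x in CONSTANTS))  # True
# The de dicto conclusion (6) fails: a six-planet state falsifies Px -> Ox.
print(all(all(not P(s, x) or O(s, x) for x in CONSTANTS) for s in STATES))  # False
# The de re conclusion (8) holds at the actual world: 9 is necessarily odd.
print(any(P(actual, x) and N(O)(actual, x) for x in CONSTANTS))             # True
```

On the de dicto reading the inference fails, while the de re reading (8) is harmlessly true, because 9 is odd in every state-description and there happen to be 9 planets.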

6. Individual Concepts

Even if something like what has just been said might be thought to enable Carnap to answer Quine’s complaints about de re modality, it seems clear that Carnap had not availed himself of it in the 1947 book, and I shall now look at the modal logic presented in Carnap 1947. On p. 193f Carnap cites the argument (1)(2)(3) from Quine 1943 discussed above. He does not appear to recognise any potential ambiguity in the conclusion, and characterises (3) as false. Carnap doesn’t consider (8), and on p. 194 simply says:

“we obtain the false statement [(3)]”

In Carnap’s view the problem with Quine’s argument is that it assumes an unrestricted version of what is sometimes called ‘Leibniz’ Law’:

I2         ∀x∀y(x = y ⊃ (α ⊃ β)), where α and β differ only in that α has free x in 0 or more places where β has free y.

In the 1946 paper this law holds in full generality, as does a consequence of it which asserts the necessity of true identities.

LI        ∀x∀y(x = y ⊃ Nx = y)

For suppose LI fails. Then there would have to be a state-description s in which for some constants a and b, s ╞ a = b but s ╡ Na = b. So there is a state-description sʹ such that sʹ ╡ a = b. But then a and b are different constants, and so s ╡ a = b, which gives a contradiction.

In the 1947 book Carnap holds that I2 must be restricted so that neither x nor y occur free in the scope of a modal operator. In particular the following would be ruled out as an allowable instance of I2:

(9)                                x = y ⊃ (NOx ⊃ NOy)

In order to explain how this failure comes about, and solve the problems posed by co-referring singular terms, Carnap modifies the semantics of the 1946 paper. The principal difference from the modal logic of the 1946 paper, as Carnap tells us on p. 183, is that the domain of quantification for individual variables now consists of individual concepts, where an individual concept i is a function from state-descriptions to individual constants. Where s is a state-description, let is denote the constant which is the value of the function i for the state-description s. Carnap is clear that the quantifiers range over all individual concepts, not just those expressible in the language.

Using this semantics it is easy to see how (9) can fail. For let x have as its value the individual concept i, which is the function such that is is 9 for every state-description s, while the value of y is the function j such that, in any state-description s, js is the individual which is the number of the planets in s, that is, js is the (unique) constant a such that Pa is in s. (Assume that in each state-description there is a unique number, possibly 0, which satisfies P.) Assume that x = y is true in any state-description s iff, where i is the individual concept which is the value of x, and j is the individual concept which is the value of y, then is is the same individual constant as js. In the present example it happens that when s is the state-description which represents the actual world, is and js are indeed the same, for in s there are nine planets, making x = y true at s. Now NOx will be true if Ox is true in every state-description sʹ, which is to say if isʹ satisfies O in every sʹ. Since isʹ is 9 in every state-description then isʹ does satisfy O in every sʹ, and so NOx is true at s. But suppose sʹ represents a situation in which there are six planets. Then jsʹ will be 6 and so Oy will be false in sʹ, and for that reason NOy will be false in s, thus falsifying (9). (It is also easy to see that LI is not valid, since it is easy to have is = js even though i ≠ j.)
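The individual-concept semantics just described can also be sketched in miniature. In the following Python toy model (my own construction, with state-descriptions identified by their planet-counts), the concept i is the constant function 9, j tracks the number of the planets, and identity is evaluated world by world, which is exactly what lets x = y hold at the actual world while NOx and NOy diverge:

```python
# Individual concepts as functions from state-descriptions to constants.
# Identify each state-description with its planet-count; O ('odd') is fixed.
STATES = [6, 7, 8, 9]
actual = 9

def O(n): return n % 2 == 1     # 'odd' does not vary across state-descriptions

i = lambda s: 9                 # the concept 'nine': a constant function
j = lambda s: s                 # the concept 'the number of the planets'

def eq(s, x, y):                # x = y is true at s iff the concepts agree *at s*
    return x(s) == y(s)

def NO(x):                      # NOx: x's value is odd in every state-description
    return all(O(x(s)) for s in STATES)

# x = y holds at the actual world, NOx holds, but NOy fails,
# so the instance (9) of Leibniz' Law is falsified:
print(eq(actual, i, j))   # True  (nine planets in the actual world)
print(NO(i))              # True  (9 is odd in every state-description)
print(NO(j))              # False (a six-planet state makes j's value 6)
```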

The difference between the modal semantics of Carnap 1946 and Carnap 1947 is that in the former the only individuals are the genuine individuals, represented by the constants of the language ℒ. In the proof of the invalidity of (9) it is essential that the semantics of identity require that when x is assigned an individual concept i and y is assigned an individual concept j that x = y be true at a state-description s iff is and js are the same individual. And now we come to Quine’s complaint (Quine 1953a, p. 152f). It is that Carnap replaces the domain of things as the range of the quantifiers with a domain of individual concepts. Quine then points out that the very same paradoxes arise again at the level of individual concepts. Thus for instance it might be that the individual concept which represents the number of planets in each state-description is identical with the first individual concept introduced on p. 193 of Meaning and Necessity. Carnap is alive to Quine’s criticism that ordinary individuals have been replaced in his ontology by individual concepts. In essence Carnap’s reply to Quine on pp. 198- 200 of Carnap 1947 is that if we restrict ourselves to purely extensional contexts then the entities which enter into the semantics are precisely the same entities as are the extensions of the intensions involved. What this amounts to is that although the domain of quantification consists of individual concepts, the arguments of the predicates are only the genuine individuals. For suppose, as Quine appears to have in mind, we permit predicates which apply to individual concepts. Then suppose that i and j are distinct individual concepts. Let P be a predicate which can apply to individual concepts, and let s be a state-description in which P applies to i but not to j but in which is and js are the same individual. We now have two options depending on how = is to be understood. 
If we take x = y to be true in s when is and js are the same individual then if x is assigned i and y is assigned j we would have that x = y and Px are both true in s, but Py is not. So that even the simplest instance of I2

I2P       x = y ⊃ (Px ⊃ Py)

fails, and here there are no modal operators involved. The second option is to treat = as expressing a genuine identity. That is to say, x = y is true only when the individual concept assigned to x is the same individual concept as the one assigned to y. In the example I have been discussing, since i and j are distinct individual concepts, if i is assigned to x and j to y, then x = y will be false. But on this option the full version of I2 becomes valid even when α and β contain modal operators. This is just another version of Quine’s complaint that if an operator expresses identity then the terms of a true identity formula must be interchangeable in all contexts. Presumably Carnap thought that the use of individual concepts could address these worries. The present article makes no claims on whether or not an acceptable treatment of individual concepts is desirable, and, if it is, whether one can be developed.

7. References and Further Reading

This list contains all items referred to in the text, together with some other articles relevant to Carnap’s modal logic.

  • Barcan, (Marcus) R.C., 1946, A functional calculus of first order based on strict implication. The Journal of Symbolic Logic, 11, 1–16.
  • Burgess, J.P., 1999, Which modal logic is the right one? Notre Dame Journal of Formal Logic, 40, 81–93.
  • Carnap, R., 1937, The Logical Syntax of Language, London, Kegan Paul, Trench Truber.
  • Carnap, R., 1946, Modalities and quantification. The Journal of Symbolic Logic, 11, 33–64.
  • Carnap, R., 1947, Meaning and Necessity, Chicago, University of Chicago Press (Second edition 1956, references are to the second edition.).
  • Carnap, R., 1950, Empiricism, semantics and ontology. Revue Internationale de Philosophie, 4, pp. 20–40 (Reprinted in the second edition of Carnap 1947, pp. 205–221. Page references are to this reprint.).
  • Carnap, R., 1952, Meaning postulates. Philosophical Studies, 3, pp. 65–73. (Reprinted in the second edition of Carnap 1947, pp. 222–229. Page references are to this reprint.)
  • Carnap, R., 1963, The Philosophy of Rudolf Carnap, ed P.A. Schilpp, La Salle, Ill., Open Court, pp. 3–84.
  • Church, A., 1973, A revised formulation of the logic of sense and denotation (part I). Noûs, 7, pp. 24–33.
  • Cocchiarella, N.B., 1975a, On the primary and secondary semantics of logical necessity. Journal of Philosophical Logic, 4, pp. 13–27.
  • Cocchiarella, N.B., 1975b, Logical atomism, nominalism, and modal logic. Synthese, 31, pp. 23−67.
  • Cresswell, M.J., 2013, Carnap and McKinsey: Topics in the pre–history of possible worlds semantics. Proceedings of the 12th Asian Logic Conference, J. Brendle, R. Downey, R. Goldblatt and B. Kim (eds), World Scientific, pp. 53-75.
  • Garson, J.W., 1980, Quantification in modal logic. Handbook of Philosophical Logic, ed. D.M. Gabbay and F. Guenthner, Dordrecht, Reidel, Vol. II, Ch. 5, 249-307
  • Gottlob, G., 1999, Remarks on a Carnapian extension of S5. In J. Wolenski, E. Köhler (eds.), Alfred Tarski and the Vienna Circle, Kluwer, Dordrecht, 243−259.
  • Hilbert, D., and W. Ackermann, 1950, Mathematical Logic, New York, Chelsea Publishing Co., (Translation of Grundzüge der Theoretischen Logik.).
  • Hughes, G.E., and M.J. Cresswell, 1996, A New Introduction to Modal Logic, London, Routledge.
  • Lewis, C.I., and C.H. Langford, 1932, Symbolic Logic, New York, Dover publications.
  • Makinson, D., 1966, How meaningful are modal operators? Australasian Journal of Philosophy, 44, 331−337.
  • Quine, W.V.O., 1934, Ontological remarks on the propositional calculus. Mind, 43, pp. 473–476.
  • Quine, W.V.O., 1943, Notes on existence and necessity, The Journal of Philosophy, Vol 40, pp. 113-127.
  • Quine, W.V.O., 1953a, Reference and modality. From a Logical Point of View, Cambridge, Mass., Harvard University Press, second edition 1961, pp. 139–59.
  • Quine, W.V.O., 1953b, Three grades of modal involvement, The Ways of Paradox, Cambridge Mass., Harvard University Press, 1976, pp. 158–176.
  • Quine, W.V.O., 1960, Word and Object, Cambridge, Mass, MIT Press.
  • Quine, W.V.O., 1969, Reply to Sellars. Words and Objections, (ed D. Davidson and K.J.J. Hintikka), Dordrecht, Reidel, 1969, pp. 337–340.
  • Schurz, G., 2001, Carnap’s modal logic. In W. Stelzner and M. Stockler (eds.), Zwischen traditioneller und moderner Logik. Paderborn, Mentis, pp. 365–380.
  • Smullyan, A.F., 1948, Modality and description. The Journal of Symbolic Logic, 13, 31–7.
  • Thomason, S. K.,1973, New Representation of S5. Notre Dame Journal of Formal Logic, 14, 281−284.
  • Wajsberg, M., 1933, Ein erweiteter Klassenkalkül. Monatshefte für Mathematik und Physik, Vol. 40, 113–26.
  • Wittgenstein, L., 1921, Tractatus Logico-Philosophicus. (Translated by D.F. Pears and B.F. McGuinness), 2nd printing 1963. London, Routledge and Kegan Paul.

 

Author Information

M. J. Cresswell
Email: max.cresswell@msor.vuw.ac.nz
Victoria University of Wellington
New Zealand

Argument


The word “argument” can be used to designate a dispute or a fight, or it can be used more technically. The focus of this article is on understanding an argument as a collection of truth-bearers (that is, the things that bear truth and falsity, or are true and false) some of which are offered as reasons for one of them, the conclusion. This article takes propositions rather than sentences or statements or utterances to be the primary truth bearers. The reasons offered within the argument are called “premises”, and the proposition that the premises are offered for is called the “conclusion”. This sense of “argument” diverges not only from the above sense of a dispute or fight but also from the formal logician’s sense according to which an argument is merely a list of statements, one of which is designated as the conclusion and the rest of which are designated as premises regardless of whether the premises are offered as reasons for believing the conclusion. Arguments, as understood in this article, are the subject of study in critical thinking and informal logic courses in which students usually learn, among other things, how to identify, reconstruct, and evaluate arguments given outside the classroom.

Arguments, in this sense, are typically distinguished from both implications and inferences. In asserting that a proposition P implies proposition Q, one does not thereby offer P as a reason for Q. The proposition frogs are mammals implies that frogs are not reptiles, but it is problematic to offer the former as a reason for believing the latter. If an arguer offers an argument in order to persuade an audience that the conclusion is true, then it is plausible to think that the arguer is inviting the audience to make an inference from the argument’s premises to its conclusion. However, an inference is a form of reasoning, and as such it is distinct from an argument in the sense of a collection of propositions (some of which are offered as reasons for the conclusion). One might plausibly think that a person S infers Q from P just in case S comes to believe Q because S believes that P is true and because S believes that the truth of P justifies belief that Q. But this movement of mind from P to Q is something different from the argument composed of just P and Q.

The characterization of argument in the first paragraph requires development since there are forms of reasoning such as explanations which are not typically regarded as arguments even though (explanatory) reasons are offered for a proposition. Two principal approaches to fine-tuning this first-step characterization of arguments are what may be called the structural and pragmatic approaches. The pragmatic approach is motivated by the view that the nature of an argument cannot be completely captured in terms of its structure. In what follows, each approach is described, and criticism is briefly entertained.  Along the way, distinctive features of arguments are highlighted that seemingly must be accounted for by any plausible characterization. The classification of arguments as deductive, inductive, and conductive is discussed in section 3.

Table of Contents

  1. The Structural Approach to Characterizing Arguments
  2. The Pragmatic Approach to Characterizing Arguments
  3. Deductive, Inductive, and Conductive Arguments
  4. Conclusion
  5. References and Further Reading

1. The Structural Approach to Characterizing Arguments

Not any group of propositions qualifies as an argument. The starting point for structural approaches is the thesis that the premises of an argument are reasons offered in support of its conclusion (for example, Govier 2010, p.1; Bassham, Irwin, Nardone, and Wallace 2005, p.30; Copi and Cohen 2005, p.7; for discussion, see Johnson 2000, p.146ff). Accordingly, a collection of propositions lacks the structure of an argument unless there is a reasoner who puts forward some as reasons in support of one of them. Letting P1, P2, P3, …, and C range over propositions and R over reasoners, a structural characterization of argument takes the following form.

 A collection of propositions, P1, …, Pn, C, is an argument if and only if there is a reasoner R who puts forward the Pi as reasons in support of C.

The structure of an argument is not a function of the syntactic and semantic features of the propositions that compose it. Rather, it is imposed on these propositions by the intentions of a reasoner to use some as support for one of them. Typically in presenting an argument, a reasoner will use expressions to flag the intended structural components of her argument. Typical premise indicators include: “because”, “since”, “for”, and “as”; typical conclusion indicators include “therefore”, “thus”, “hence”, and “so”. Note well: these expressions do not always function in these ways, and so their mere use does not necessitate the presence of an argument.

Different accounts of the nature of the intended support offered by the premises for the conclusion in an argument generate different structural characterizations of arguments (for discussion see Hitchcock 2007). Plausibly, if a reasoner R puts forward premises in support of a conclusion C, then (i)-(iii) obtain. (i) The premises represent R’s reasons for believing that the conclusion is true and R thinks that her belief in the truth of the premises is justified. (ii) R believes that the premises make C more probable than not. (iii) (a) R believes that the premises are independent of C (that is, R thinks that her reasons for the premises do not include belief that C is true), and (b) R believes that the premises are relevant to establishing that C is true. If we judge that a reasoner R presents an argument as defined above, then by the lights of (i)-(iii) we believe that R believes that the premises justify belief in the truth of the conclusion. In what immediately follows, examples are given to explicate (i)-(iii).

A: John is an only child.

B: John is not an only child; he said that Mary is his sister.

If B presents an argument, then the following obtain. (i) B believes that the premise (that is, that John said that Mary is his sister) is true, B thinks this belief is justified, and the premise is B’s reason for maintaining the conclusion. (ii) B believes that John said that Mary is his sister makes it more likely than not that John is not an only child, and (iii) B thinks that John said that Mary is his sister is both independent of the proposition that John is not an only child and relevant to confirming it.

A: The Democrats and Republicans don’t seem willing to compromise.

B: If the Democrats and Republicans are not willing to compromise, then the U.S. will go over the fiscal cliff.

B’s assertion of a conditional does not require that B believe either the antecedent or consequent. Therefore, it is unlikely that B puts forward the Democrats and Republicans are not willing to compromise as a reason in support of the U.S. will go over the fiscal cliff, because it is unlikely that B believes either proposition. Hence, it is unlikely that B’s response to A has the structure of an argument, because (i) is not satisfied.

A: Doctor B, what is the reason for my uncle’s muscular weakness?

B: The results of the test are in. Even though few syphilis patients get paresis, we suspect that the reason for your uncle’s paresis is the syphilis he suffered from 10 years ago.

Dr. B offers reasons that explain why A’s uncle has paresis. It is unreasonable to think that B believes that the uncle’s being a syphilis victim makes it more likely than not that he has paresis, since B admits that having syphilis does not make it more likely than not that someone has (or will have) paresis. So, B’s response does not contain an argument, because (ii) is not satisfied.

A: I don’t think that Bill will be at the party tonight.

B: Bill will be at the party, because Bill will be at the party.

Suppose that B believes that Bill will be at the party. Trivially, the truth of this proposition makes it more likely than not that he will be at the party. Nevertheless, B is not presenting an argument. B’s response does not have the structure of an argument, because (iiia) is not satisfied. Clearly, B does not offer a reason for Bill will be at the party that is independent of that very proposition. Perhaps B’s response is intended to communicate her confidence that Bill will be at the party. By (iiia), a reasoner R puts forward [1] Sasha Obama has a sibling in support of [2] Sasha is not an only child only if R’s reasons for believing [1] do not include R’s belief that [2] is true. If R puts forward [1] in support of [2] and, say, erroneously believes that the former is independent of the latter, then R’s argument would be defective by virtue of being circular. Regarding (iiib), that Obama is U.S. President entails that the earth is the third planet from the sun or it isn’t, but it is plausible to suppose that the former does not support the latter because it is irrelevant to showing that the earth is the third planet from the sun or it isn’t is true.

Premises offered in support of a conclusion are either convergent or divergent. This difference marks a structural distinction between arguments.

[1] Tom is happy only if he is playing guitar.
[2] Tom is not playing guitar.
———————————————————————
∴ [3] Tom is not happy.

Suppose that a reasoner R offers [1] and [2] as reasons in support of [3]. The argument is presented in what is called standard form; the premises are listed first and a solid line separates them from the conclusion, which is prefaced by “∴”. This symbol means “therefore”. Premises [1] and [2] are convergent because they do not support the conclusion independently of one another; that is, they support the conclusion jointly. It is unreasonable to think that R offers [1] and [2] individually, as opposed to collectively, as reasons for [3]. The following representation of the argument depicts the convergence of the premises.

[Diagram: premises [1] and [2] combined with a plus sign and underscored, with a single arrow to [3] (convergent premises)]

Combining [1] and [2] with the plus sign and underscoring them indicates that they are convergent. The arrow indicates that they are offered in support of [3]. To see a display of divergent premises, consider the following.

[1] Tom said that he didn’t go to Samantha’s party.
[2] No one at Samantha’s party saw Tom there.
——————————————————————————
∴ [3] Tom did not attend Samantha’s party.

These premises are divergent, because each is a reason that supports [3] independently of the other. The below diagram represents this.

[Diagram: premises [1] and [2] each with its own arrow to [3] (divergent premises)]

An extended argument is an argument with at least one premise that a reasoner attempts to support explicitly. Extended arguments are more structurally complex than ones that are not extended. Consider the following.

The keys are either in the kitchen or the bedroom. The keys are not in the kitchen; I just searched it and did not find them. So, the keys must be in the bedroom. Let’s look there!

The argument in standard form may be portrayed as follows:

[1] I just searched the kitchen and I did not find the keys.
—————————————————————————————
∴ [2] The keys are not in the kitchen.
[3] The keys are either in the kitchen or the bedroom.
————————————————————————————
∴ [4] The keys are in the bedroom.

[Diagram: an extended argument in which [1] supports [2], and [2] and [3] jointly support [4]]

Note that although the keys being in the bedroom is a reason for the imperative, “Let’s look there!” (given the desirability of finding the keys), this proposition is not “truth apt” and so is not a component of the argument.

An enthymeme is an argument which is presented with at least one component that is suppressed.

A: I don’t know what to believe regarding the morality of abortion.

B: You should believe that abortion is immoral. You’re a Catholic.

That B puts forward [1] A is a Catholic in support of [2] A should believe that abortion is immoral suggests that B implicitly puts forward [3] all Catholics should believe that abortion is immoral in support of [2]. Proposition [3] may plausibly be regarded as a suppressed premise of B’s argument. Note that [1] and [3] are convergent. A premise that is suppressed is never a reason for a conclusion independent of another explicitly offered for that conclusion.

There are two main criticisms of structural characterizations of arguments. One criticism is that they are too weak because they turn non-arguments such as explanations into arguments.

A: Why did this metal expand?

B: It was heated and all metals expand when heated.

B offers explanatory reasons for the explanandum (what is explained): this metal expanded. It is plausible to see B offering these explanatory reasons in support of the explanandum. The reasons B offers jointly support the truth of the explanandum, and thereby show that the expansion of the metal was to be expected. It is in this way that B’s reasons enable A to understand why the metal expanded.

The second criticism is that structural characterizations are too strong. They rule out as arguments what intuitively seem to be arguments.

A: Kelly maintains that no explanation is an argument. I don’t know what to believe.

B: Neither do I. One reason for her view may be that the primary function of arguments, unlike explanations, is persuasion. But I am not sure that this is the primary function of arguments. We should investigate this further.

B offers a reason, [1] the primary function of arguments, unlike explanations, is persuasion, for the thesis [2] no explanation is an argument. Since B asserts neither [1] nor [2], B does not put forward [1] in support of [2]. Hence, by the above account, B’s reasoning does not qualify as an argument. A contrary view is that arguments can be used in ways other than showing that their conclusions are true. For example, arguments can be constructed for purposes of inquiry and as such can be used to investigate a hypothesis by seeing what reasons might be given to support a given proposition (see Meiland 1989 and Johnson and Blair 2006, p.10). Such arguments are sometimes referred to as exploratory arguments.  On this approach, it is plausible to think that B constructs an exploratory argument [exercise for the reader: identify B’s suppressed premise].

Briefly, in defense of the structuralist account of arguments, one response to the first criticism is to bite the bullet and follow those who think that at least some explanations qualify as arguments (see Thomas 1986, who argues that all explanations are arguments). Given that there are exploratory arguments, the second criticism motivates either liberalizing the concept of support that premises may provide for a conclusion (so that, for example, B may be understood as offering [1] in support of [2]) or dropping the notion of support altogether in the structural characterization of arguments (for example, a collection of propositions is an argument if and only if a reasoner offers some as reasons for one of them; see Sinnott-Armstrong and Fogelin 2010, p.3).

2. The Pragmatic Approach to Characterizing Arguments

The pragmatic approach is motivated by the view that the nature of an argument cannot be completely captured in terms of its structure. In contrast to structural definitions of arguments, pragmatic definitions appeal to the function of arguments. Different accounts of the purposes arguments serve generate different pragmatic definitions of arguments. The following pragmatic definition appeals to the use of arguments as tools of rational persuasion (for definitions of argument that make such an appeal, see Johnson 2000, p. 168; Walton 1996, p. 18ff; Hitchcock 2007, p.105ff)

A collection of propositions is an argument if and only if there is a reasoner R who puts forward some of them (the premises) as reasons in support of one of them (the conclusion) in order to rationally persuade an audience of the truth of the conclusion.

One advantage of this definition over the previously given structural one is that it offers an explanation of why arguments have the structure they do. In order to rationally persuade an audience of the truth of a proposition, one must offer reasons in support of that proposition. The appeal to rational persuasion is necessary to distinguish arguments from other forms of persuasion such as threats. One question that arises is: What obligations does a reasoner incur by virtue of offering supporting reasons for a conclusion in order to rationally persuade an audience of the conclusion? One might think that such a reasoner should be open to criticisms and obligated to respond to them persuasively (see Johnson 2000, p.144ff, for development of this idea). By appealing to the aims that arguments serve, pragmatic definitions highlight the acts of presenting an argument in addition to the arguments themselves. The field of argumentation, an interdisciplinary field that includes rhetoric, informal logic, psychology, and cognitive science, highlights acts of presenting arguments and their contexts as topics for investigation that inform our understanding of arguments (see Houtlosser 2001 for discussion of the different perspectives of argument offered by different fields).

For example, the acts of explaining and arguing—in the sense highlighted here—have different aims. Whereas the act of explaining is designed to increase the audience’s comprehension, the act of arguing is aimed at enhancing the acceptability of a standpoint. This difference in aim makes sense of the fact that in presenting an argument the reasoner believes that her standpoint is not yet acceptable to her audience, but in presenting an explanation the reasoner knows or believes that the explanandum is already accepted by her audience (see van Eemeren and Grootendorst 1992, p.29, and Snoeck Henkemans 2001, p.232). These observations about the acts of explaining and arguing motivate the above pragmatic definition of an argument and suggest that arguments and explanations are distinct things. It is generally accepted that the same line of reasoning can function as an explanation in one dialogical context and as an argument in another (see Groarke and Tindale 2004, p.23ff for an example and discussion). Van Eemeren, Grootendorst, and Snoeck Henkemans 2002 delivers a substantive account of how the evaluation of various types of arguments turns on considerations pertaining to the dialogical contexts within which they are presented and discussed.

Note that, since the pragmatic definition appeals to the structure of propositions in characterizing arguments, it inherits the criticisms of structural definitions. In addition, the question arises whether it captures the variety of purposes arguments may serve. It has been urged that arguments can aim at engendering any one of a full range of attitudes towards their conclusions (for example, Pinto 1991). For example, a reasoner can offer premises for a conclusion C in order to get her audience to withhold assent from C, suspect that C is true, believe that it is merely possible that C is true, or to be afraid that C is true.

The thought here is that these are alternatives to convincing an audience of the truth of C. A proponent of a pragmatic definition of argument may grant that there are uses of arguments not accounted for by her definition, and propose that the definition is stipulative. But then a case needs to be made for why theorizing about arguments from a pragmatic approach should be anchored to such a definition when it does not reflect all legitimate uses of arguments. Another line of criticism of the pragmatic approach rejects the claim that arguments themselves have a function (Goodwin 2007) and argues that the function of persuasion should be assigned instead to the dialogical contexts in which arguments take place (Doury 2011).

3. Deductive, Inductive, and Conductive Arguments

Arguments are commonly classified as deductive or inductive (for example, Copi and Cohen 2005; Sinnott-Armstrong and Fogelin 2010). A deductive argument is an argument that an arguer puts forward as valid. For a valid argument, it is not possible for the premises to be true with the conclusion false. That is, necessarily, if the premises are true, then the conclusion is true. Thus we may say that the truth of the premises in a valid argument guarantees that the conclusion is also true. The following is an example of a valid argument: Tom is happy only if the Tigers win; the Tigers lost; therefore, Tom is definitely not happy.

A step-by-step derivation of the conclusion of a valid argument from its premises is called a proof. In the context of a proof, the given premises of an argument may be viewed as initial premises. The propositions produced at the steps leading to the conclusion are called derived premises. Each step in the derivation is justified by a principle of inference. Whether the derived premises are components of a valid argument is a difficult question that is beyond the scope of this article.   

An inductive argument is an argument that an arguer puts forward as inductively strong. In an inductive argument, the premises are intended only to be so strong that, if they were true, then it would be unlikely, although possible, that the conclusion is false. If the truth of the premises makes it unlikely (but not impossible) that the conclusion is false, then we may say that the argument is inductively strong. The following is an example of an inductively strong argument: 97% of the Republicans in town Z voted for McX, Jones is a Republican in town Z; therefore, Jones voted for McX.

In an argument like this, an arguer often will conclude "Jones probably voted for McX" instead of "Jones voted for McX," because they are signaling with the word "probably" that they intend to present an argument that is inductively strong but not valid.
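To make the sense of “unlikely” concrete, here is a toy calculation (an illustration of mine, not part of the article’s account) that treats the premise’s statistic as the probability of the conclusion:

```python
# Toy illustration (not part of the article's account): treat the reported 97%
# as the probability that a randomly chosen Republican in town Z voted for McX.
# The premises then make the conclusion's falsity unlikely but not impossible,
# which is what makes the argument inductively strong rather than valid.

p_voted_for_mcx = 0.97                    # from the first premise
p_conclusion_false = 1 - p_voted_for_mcx  # chance that Jones did NOT vote for McX

print(round(p_conclusion_false, 2))       # 0.03: possible, but unlikely

# "Inductively strong" is glossed here as: the premises make the conclusion
# more likely than not. The 0.5 threshold is an assumption for illustration.
inductively_strong = p_conclusion_false < 0.5
print(inductively_strong)                 # True
```

Nothing in this sketch is a substitute for the informal notion of strength; it merely shows why a true-premise inductively strong argument can still have a false conclusion.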

In order to evaluate an argument it is important to determine whether it is deductive or inductive. It is inappropriate to criticize an inductively strong argument for being invalid. Based on the above characterizations, whether an argument is deductive or inductive turns on whether the arguer intends the argument to be valid or merely inductively strong, respectively. Sometimes the presence of certain expressions such as ‘definitely’ and ‘probably’ in the above two arguments indicates the relevant intentions of the arguer. Charity dictates that an invalid argument which is inductively strong be evaluated as an inductive argument unless there is clear evidence to the contrary.

Conductive arguments have been put forward as a third category of arguments (for example, Govier 2010). A conductive argument is an argument whose premises are divergent; the premises count separately in support of the conclusion. If one or more premises were removed from the argument, the degree of support offered by the remaining premises would stay the same. The previously given example of an argument with divergent premises is a conductive argument. The following is another example of a conductive argument: It most likely won’t rain tomorrow. The sky is red tonight. Also, the weather channel reported a 30% chance of rain for tomorrow.

The primary rationale for distinguishing conductive arguments from deductive and inductive ones is as follows. First, the premises of conductive arguments are always divergent, but the premises of deductive and inductive arguments are never divergent. Second, the evaluation of arguments with divergent premises requires not only that each premise be evaluated individually as support for the conclusion, but also the degree to which the premises support the conclusion collectively must be determined. This second consideration militates against treating conductive arguments merely as a collection of subarguments, each of which is deductive or inductive. The basic idea is that the support that the divergent premises taken together provide the conclusion must be considered in the evaluation of a conductive argument. With respect to the above conductive argument, the sky is red tonight and the weather channel reported a 30% chance of rain for tomorrow are offered together as (divergent) reasons for It most likely won’t rain tomorrow. Perhaps, collectively, but not individually, these reasons would persuade an addressee that it most likely won’t rain tomorrow.

4. Conclusion

A group of propositions constitutes an argument only if some are offered as reasons for one of them. Two approaches to identifying the definitive characteristics of arguments are the structural and pragmatic approaches. On both approaches, whether an act of offering reasons for a proposition P yields an argument depends on what the reasoner believes regarding both the truth of the reasons and the relationship between the reasons and P. A typical use of an argument is to rationally persuade its audience of the truth of the conclusion. To be effective in realizing this aim, the reasoner must think that there is real potential in the relevant context for her audience to be rationally persuaded of the conclusion by means of the offered premises. What, exactly, this presupposes about the audience depends on what the argument is and the context in which it is given. An argument may be classified as deductive, inductive, or conductive. Its classification into one of these categories is a prerequisite for its proper evaluation.

5. References and Further Reading

  • Bassham, G., W. Irwin, H. Nardone, and J. Wallace. 2005. Critical Thinking: A Student’s Introduction, 2nd ed. New York: McGraw-Hill.
  • Copi, I. and C. Cohen 2005. Introduction to Logic 12th ed. Upper Saddle River, NJ: Prentice Hall.
  • Doury, M. 2011. “Preaching to the Converted: Why Argue When Everyone Agrees?” Argumentation 26(1): 99-114.
  • Eemeren F.H. van, R. Grootendorst, and F. Snoeck Henkemans. 2002. Argumentation: Analysis, Evaluation, Presentation. Mahwah, NJ: Lawrence Erlbaum Associates.
  • Eemeren F.H. van and R. Grootendorst. 1992. Argumentation, Communication, and Fallacies: A Pragma-Dialectical Perspective. Hillsdale, NJ: Lawrence Erlbaum Associates.
  • Goodwin, J. 2007. “Argument has no function.” Informal Logic 27 (1): 69–90.
  • Govier, T. 2010. A Practical Study of Argument, 7th ed. Belmont, CA: Wadsworth.
  • Govier, T. 1987. “Reasons Why Arguments and Explanations are Different.” In Problems in Argument Analysis and Evaluation, Govier 1987, 159-176. Dordrecht, Holland: Foris.
  • Groarke, L. and C. Tindale 2004. Good Reasoning Matters!: A Constructive Approach to Critical Thinking, 3rd ed. Oxford: Oxford University Press.
  • Hitchcock, D. 2007. “Informal Logic and The Concept of Argument.” In Philosophy of Logic. D. Jacquette 2007, 101-129. Amsterdam: Elsevier.
  • Houtlosser, P. 2001. “Points of View.” In Critical Concepts in Argumentation Theory, F.H. van Eemeren 2001, 27-50. Amsterdam: Amsterdam University Press.
  • Johnson, R. and J. A. Blair 2006. Logical Self-Defense. New York: International Debate Education Association.
  • Johnson, R. 2000. Manifest Rationality. Mahwah, NJ: Lawrence Erlbaum Associates.
  • Kasachkoff, T. 1988. “Explaining and Justifying.” Informal Logic X, 21-30.
  • Meiland, J. 1989. “Argument as Inquiry and Argument as Persuasion.” Argumentation 3, 185-196.
  • Pinto, R. 1991. “Generalizing the Notion of Argument.” In Argument, Inference and Dialectic, R. Pinto 2010, 10-20. Dordrecht, Holland: Kluwer Academic Publishers. Originally published in van Eemeren, Grootendorst, Blair, and Willard, eds. Proceedings of the Second International Conference on Argumentation, vol. 1A, 116-124. Amsterdam: SICSAT.
  • Pinto, R. 1995. “The Relation of Argument to Inference.” In Pinto 2010, 32-45.
  • Sinnott-Armstrong, W. and R. Fogelin. 2010. Understanding Arguments: An Introduction to Informal Logic, 8th ed. Belmont, CA: Wadsworth.
  • Skyrms, B. 2000. Choice and Chance, 4th ed. Belmont, CA: Wadsworth.
  • Snoeck Henkemans, A.F. 2001. "Argumentation, explanation, and causality." In Text Representation: Linguistic and Psycholinguistic Aspects, T. Sanders, J. Schilperoord, and W. Spooren, eds. 2001, 231-246. Amsterdam: John Benjamins Publishing.
  • Thomas, S.N. 1986. Practical Reasoning in Natural Language. Englewood Cliffs, NJ: Prentice Hall.
  • Walton, D. 1996. Argument Structure: A Pragmatic Theory. Toronto: University of Toronto Press.

Author Information

Matthew McKeon
Email: mckeonm@msu.edu
Michigan State University
U. S. A.

Deductive and Inductive Arguments

Deductive and Inductive Arguments

A deductive argument is an argument that is intended by the arguer to be (deductively) valid, that is, to provide a guarantee of the truth of the conclusion provided that the argument's premises (assumptions) are true. This point can be expressed also by saying that, in a deductive argument, the premises are intended to provide such strong support for the conclusion that, if the premises are true, then it would be impossible for the conclusion to be false. An argument in which the premises do succeed in guaranteeing the conclusion is called a (deductively) valid argument. If a valid argument has true premises, then the argument is said to be sound.

Here is a valid deductive argument: It's sunny in Singapore. If it's sunny in Singapore, he won't be carrying an umbrella. So, he won't be carrying an umbrella.

Here is a mildly strong inductive argument: Every time I've walked by that dog, he hasn't tried to bite me. So, the next time I walk by that dog he won't try to bite me.

An inductive argument is an argument that is intended by the arguer merely to establish or increase the probability of its conclusion. In an inductive argument, the premises are intended only to be so strong that, if they were true, then it would be unlikely that the conclusion is false. There is no standard term for a successful inductive argument. But its success or strength is a matter of degree, unlike with deductive arguments. A deductive argument is valid or else invalid.

The difference between the two kinds of arguments does not lie solely in the words used; it comes from the relationship the author or expositor of the argument takes there to be between the premises and the conclusion. If the author of the argument believes that the truth of the premises definitely establishes the truth of the conclusion (due to definition, logical entailment, logical structure, or mathematical necessity), then the argument is deductive. If the author of the argument does not think that the truth of the premises definitely establishes the truth of the conclusion, but nonetheless believes that their truth provides good reason to believe the conclusion true, then the argument is inductive.

Some analysts prefer to distinguish inductive arguments from conductive arguments; the latter are arguments giving explicit reasons for and against a conclusion, and requiring the evaluator of the argument to weigh these considerations, i.e., to consider the pros and cons. This article considers conductive arguments to be a kind of inductive argument.

The noun "deduction" refers to the process of advancing or establishing a deductive argument, or going through a process of reasoning that can be reconstructed as a deductive argument. "Induction" refers to the process of advancing an inductive argument, or making use of reasoning that can be reconstructed as an inductive argument.

Because deductive arguments are those in which the truth of the conclusion is thought to be completely guaranteed and not just made probable by the truth of the premises, if the argument is a sound one, then the truth of the conclusion is said to be "contained within" the truth of the premises; that is, the conclusion does not go beyond what the truth of the premises implicitly requires. For this reason, deductive arguments are usually limited to inferences that follow from definitions, mathematics and rules of formal logic. Here is a deductive argument:

John is ill. If John is ill, then he won't be able to attend our meeting today. Therefore, John won't be able to attend our meeting today.

That argument is valid due to its logical structure. If 'ill' were replaced with 'happy', the argument would still be valid because it would retain its special logical structure (called modus ponens). Here is the form of any argument having the structure of modus ponens:

P

If P then Q

So, Q

The capital letters stand for declarative sentences, or statements, or propositions. The investigation of these logical forms is called Propositional Logic.
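The point that validity is a matter of form can be illustrated computationally. The following sketch (my illustration, not part of the article) enumerates every assignment of truth values to P and Q and confirms that no assignment makes the premises of modus ponens true while its conclusion is false:

```python
from itertools import product

# Sketch: a brute-force validity check for propositional argument forms.
# A form is valid when no assignment of truth values to its sentence
# letters makes every premise true and the conclusion false.

def implies(p, q):
    # Material conditional: "if P then Q" is false only when P is true and Q is false.
    return (not p) or q

def is_valid(premises, conclusion, letters=2):
    for row in product([True, False], repeat=letters):
        if all(prem(*row) for prem in premises) and not conclusion(*row):
            return False  # this row is a counterexample
    return True

# Modus ponens: P; if P then Q; therefore Q.
print(is_valid([lambda p, q: p, lambda p, q: implies(p, q)],
               lambda p, q: q))  # True: no counterexample row exists

# By contrast, affirming the consequent (Q; if P then Q; therefore P) is invalid:
print(is_valid([lambda p, q: q, lambda p, q: implies(p, q)],
               lambda p, q: p))  # False: P = False, Q = True is a counterexample
```

Because the check ranges over truth values alone, swapping ‘ill’ for ‘happy’ in the sentences changes nothing, which is the sense in which validity here depends only on logical structure.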

The question of whether all, or merely most, valid deductive arguments are valid because of their structure is still controversial in the field of the philosophy of logic, but that question will not be explored further in this article.

Inductive arguments can take a wide range of forms. Inductive arguments might conclude with some claim about a group based only on information from a sample of that group. Other inductive arguments draw conclusions by appeal to evidence, authority, or causal relationships. Here is a somewhat strong inductive argument based on authority:

The police said John committed the murder. So, John committed the murder.

Here is an inductive argument based on evidence:

The witness said John committed the murder. So, John committed the murder.

Here is a stronger inductive argument based on better evidence:

Two independent witnesses claimed John committed the murder. John's fingerprints are the only ones on the murder weapon. John confessed to the crime. So, John committed the murder.

This last argument is no doubt good enough for a jury to convict John, but none of these three arguments about John committing the murder is strong enough to be called valid. At least, none is valid in the technical sense of 'deductively valid'. However, some lawyers will tell their juries that these are valid arguments, so we critical thinkers need to be on the alert as to how people around us are using the term.

It is worth noting that some dictionaries and texts improperly define "deduction" as reasoning from the general to the specific and define "induction" as reasoning from the specific to the general. These definitions are outdated and inaccurate. For example, according to the more modern definitions given above, the following argument from the specific to the general is deductive, not inductive, because the truth of the premises guarantees the truth of the conclusion:

The members of the Williams family are Susan, Nathan and Alexander.
Susan wears glasses.
Nathan wears glasses.
Alexander wears glasses.
Therefore, all members of the Williams family wear glasses.

Moreover, the following argument, even though it reasons from the general to the specific, is inductive:

It has snowed in Massachusetts every December in recorded history.
Therefore, it will snow in Massachusetts this coming December.

It is worth noting that the proof technique used in mathematics called "mathematical induction" is deductive and not inductive. Proofs that make use of mathematical induction typically take the following form:

Property P is true of the number 0.
For all natural numbers n, if P holds of n then P also holds of n + 1.
Therefore, P is true of all natural numbers.

When such a proof is given by a mathematician, it is thought that if the premises are true, then the conclusion follows necessarily. Therefore, such an argument is deductive by contemporary standards.
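
The deductive structure can be sketched in code. The finite checks below (my own illustration, using the sample property that 0 + 1 + ... + n equals n(n + 1)/2) only spot-check the two premises; the conclusion for all natural numbers follows deductively from the premises themselves, not from any finite run of checks.

```python
def P(n: int) -> bool:
    """Sample property: the closed form for the sum 0 + 1 + ... + n."""
    return sum(range(n + 1)) == n * (n + 1) // 2

# Premise 1 (base case): P holds of 0.
assert P(0)

# Premise 2 (inductive step), spot-checked for small n:
# if P holds of n, then P holds of n + 1.
for n in range(100):
    assert (not P(n)) or P(n + 1)  # the material conditional P(n) -> P(n+1)

# The conclusion -- P holds of every natural number -- is guaranteed by the
# two premises; the loop above merely illustrates them on sampled cases.
print("premises check out for the sampled cases")
```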

Because the difference between inductive and deductive arguments involves the strength of evidence which the author believes the premises to provide for the conclusion, inductive and deductive arguments differ with regard to the standards of evaluation that are applicable to them. The difference does not have to do with the content or subject matter of the argument. Indeed, the same utterance may be used to present either a deductive or an inductive argument, depending on the intentions of the person advancing it. Consider the following as an example.

Dom Perignon is a champagne, so it must be made in France.

It might be clear from context that the speaker believes that having been made in the Champagne area of France is part of the defining feature of "champagne" and so the conclusion follows from the premise by definition. If it is the intention of the speaker that the evidence is of this sort, then the argument is deductive. However, it may be that no such thought is in the speaker's mind. He or she may merely believe that nearly all champagne is made in France, and may be reasoning probabilistically. If this is his or her intention, then the argument is inductive.

It is also worth noting that, at its core, the distinction between deductive and inductive arguments has to do with the strength of the justification that the author or expositor of the argument intends that the premises provide for the conclusion. If the argument is logically fallacious, it may be that the premises actually do not provide justification of that strength, or even any justification at all. Consider the following argument:

All odd numbers are integers.
All even numbers are integers.
Therefore, all odd numbers are even numbers.

This argument is logically fallacious because it is invalid. In actuality, the premises provide no support whatever for the conclusion. However, if this argument were ever seriously advanced, we would have to assume that the author believed that the truth of the premises guarantees the truth of the conclusion. Therefore, this argument is still deductive. A bad deductive argument is not an inductive argument.

See also the articles on "Argument" and "Validity and Soundness" in this encyclopedia.

Author Information

IEP Staff

Aristotle: Logic

Aristotle: Logic

Aristotelian logic, after a great and early triumph, consolidated its position of influence to rule over the philosophical world throughout the Middle Ages up until the 19th Century.  All that changed in a hurry when modern logicians embraced a new kind of mathematical logic and pushed out what they regarded as the antiquated and clunky method of syllogisms.  Although Aristotle’s very rich and expansive account of logic differs in key ways from modern approaches, it is more than a historical curiosity.  It provides an alternative way of approaching logic and continues to provide critical insights into contemporary issues and concerns.  The main thrust of this article is to explain Aristotle’s logical system as a whole while correcting some prominent misconceptions that persist in the popular understanding and even in some of the specialized literature.  Before getting down to business, it is important to point out that Aristotle is a synoptic thinker with an over-arching theory that ties together all aspects and fields of philosophy.  He does not view logic as a separate, self-sufficient subject-matter, to be considered in isolation from other aspects of disciplined inquiry.  Although we cannot consider all the details of his encyclopedic approach, we can sketch out the larger picture in a way that illuminates the general thrust of his system.  For the purposes of this entry, let us define logic as that field of inquiry which investigates how we reason correctly (and, by extension, how we reason incorrectly).  Aristotle does not believe that the purpose of logic is to prove that human beings can have knowledge.  (He dismisses excessive scepticism.)  The aim of logic is the elaboration of a coherent system that allows us to investigate, classify, and evaluate good and bad forms of reasoning.

Table of Contents

  1. The Organon
  2. Categories
  3. From Words into Propositions
  4. Kinds of Propositions
  5. Square of Opposition
  6. Laws of Thought
  7. Existential Assumptions
  8. Form versus Content
  9. The Syllogism
  10. Inductive Syllogism
  11. Deduction versus Induction
  12. Science
  13. Non-Discursive Reasoning
  14. Rhetoric
  15. Fallacies
  16. Moral Reasoning
  17. References and Further Reading
    1. Primary Sources
    2. Secondary Sources

1. The Organon

To those used to the silver tones of an accomplished writer like Plato, Aristotle’s prose will seem, at first glance, a difficult read.  What we have are largely notes, written at various points in his career, for different purposes, edited and cobbled together by later followers.  The style of the resulting collection is often rambling, repetitious, obscure, and disjointed.  There are many arcane, puzzling, and perhaps contradictory passages.  This problem is compounded by the abstract, technical vocabulary logic sometimes requires and by the wide-ranging scope and the scattered nature of Aristotle’s observations.  Some familiarity with Greek terminology is required if one hopes to capture the nuances in his thought.  Classicists and scholars do argue, of course, about the precise Greek meaning of key words or phrases but many of these debates involve minor points of interpretation that cannot concern us here.  Aristotle’s logical vocabulary needs to be understood within the larger context of his system as a whole.  Many good translations of Aristotle are available.  (Parenthetical citations below include the approximate Bekker number (the scholarly notation for referring to Aristotelian passages according to page, column, and line number of a standard edition), the English title of the work, and the name of the translator.)

Ancient commentators regarded logic as a widely-applicable instrument or method for careful thinking.  They grouped Aristotle’s six logical treatises into a sort of manual they called the Organon (Greek for “tool”).  The Organon included the Categories, On Interpretation, the Prior Analytics, the Posterior Analytics, the Topics, and On Sophistical Refutations.  These books touch on many issues: the logical structure of propositions, the proper structure of arguments (syllogisms), the difference between induction and deduction, the nature of scientific knowledge, basic fallacies (forms of specious reasoning), debating techniques, and so on.  But we cannot confine our present investigations to the Organon.  Aristotle comments on the principle of non-contradiction in the Metaphysics, on less rigorous forms of argument in the Rhetoric, on the intellectual virtues in the Nicomachean Ethics, on the difference between truth and falsity in On the Soul, and so on.  We cannot overlook such important passages if we wish to gain an adequate understanding of Aristotelian logic.

2. Categories

The world, as Aristotle describes it in his Categories, is composed of substances—separate, individual things—to which various characterizations or properties can be ascribed.  Each substance is a unified whole composed of interlocking parts.  There are two kinds of substances.  A primary substance is (in the simplest instance) an independent (or detachable) object, composed of matter, characterized by form.  Individual living organisms—a man, a rainbow trout, an oak tree—provide the most unambiguous examples of primary substances.  Secondary substances are the larger groups, the species or genera, to which these individual organisms belong.  So man, horse, mammals, animals (and so on) would be examples of secondary substances.  As we shall see, Aristotle’s logic is about correctly attributing specific properties to secondary substances (and therefore, indirectly, about attributing these properties to primary substances or individual things).

Aristotle elaborates a logic that is designed to describe what exists in the world.  We may well wonder then, how many different ways can we describe something?  In his Categories (4.1b25-2a4), Aristotle enumerates ten different ways of describing something.  These categories (Greek=kategoria, deriving from the verb to indicate, signify) include (1) substance, (2) quantity, (3) quality, (4) relation, (5) where, (6) when, (7) being-in-a-position, (8) possessing, (9) doing or (10) undergoing something or being affected by something.  In the Topics (I.9, 103b20-25), he includes the same list, replacing “substance” (ousia) with “essence” (ti esti).  We can, along with Aristotle, give an example of each kind of description: (1) to designate something as a “horse” or a “man” is to identify it as a substance or to attribute an essence to it; (2) to say that the wall is four feet long is to describe it in terms of quantity; (3) to say that the roof  is “white” is to ascribe a quality to it; (4) to say that your weight is “double” mine is to describe a relation between the two; (5) to say something happened in the market-place is to explain where; (6) to say it happened last year is to explain when; (7) to say an old man is sitting is to describe his position; (8) to say the girl has shoes on is to describe what she possesses; (9) to say the head chef is cutting a carrot with a knife is to describe what he is doing; and finally, (10) to say wood is being burned in the fireplace is to describe what it means for the wood to undergo burning or to be affected by fire.  Commentators claim that these ten categories represent either different descriptions of being or different kinds of being.  (To be a substance is to be in a certain way; to possess quantity is to be in a certain way; to possess a quality is to be in a certain way, and so on.)  There is nothing magical about the number ten.  Aristotle gives shorter lists elsewhere. 
(Compare Posterior Analytics, I.22.83a22-24, where he lists seven predications, for example).  Whether Aristotle intends the longer lists as a complete enumeration of all conceivable types of descriptions is an open question.  Scholars have noticed that the first category, substance or essence, seems to be fundamentally different than the others; it is what something is in the most complete and perfect way.

3. From Words into Propositions

Aristotle does not believe that all reasoning deals with words.  (Moral decision-making is, for Aristotle, a form of reasoning that can occur without words.)  Still, words are a good place to begin our study of his logic.  Logic, as we now understand it, chiefly has to do with how we evaluate arguments.  But arguments are made of statements, which are, in turn, made of words.  In Aristotelian logic, the most basic statement is a proposition, a complete sentence that asserts something.  (There are other kinds of sentences—prayers, questions, commands—that do not assert anything true or false about the world and which, therefore, exist outside the purview of logic.)  A proposition is ideally composed of at least three words: a subject (a word naming a substance), a predicate (a word naming a property), and a connecting verb, what logicians call a copula (Latin, for “bond” or “connection”).  Consider the simple statement: “Socrates is wise.”  Socrates is the subject; the property of being wise is the predicate, and the verb “is” (the copula) links Socrates and wisdom together in a single affirmation.  We can express all this symbolically as “S is P” where “S” stands for the subject “Socrates” and “P” stands for the predicate “being wise.”  The sentence “Socrates is wise” (or symbolically, “S is P”) qualifies as a proposition; it is a statement that claims that something is true about the world.  Paradigmatically, the subject would be a (secondary) substance (a natural division of primary substances) and the predicate would be a necessary or essential property as in:  “birds are feathered,” or “triangles have interior angles equal to two right angles,” or “fire is upward-moving.”  But any overly restrictive metaphysical idea about what terms in a proposition mean seems to unnecessarily restrict intelligent discourse.  Suppose someone were to claim that “anger is unethical.”  But anger is not a substance; it is a property of a substance (an organism).  
Still, it makes perfect sense to predicate properties of anger.  We can say that anger is unethical, hard to control, an excess of passion, familiar enough, and so on.  Aristotle himself exhibits some flexibility here.  Still, there is something to Aristotle’s view that the closer a proposition is to the metaphysical structure of the world, the more it counts as knowledge.  Aristotle has an all-embracing view of logic and yet believes that what we could call “metaphysical correctness” produces a more rigorous, scientific form of logical expression.

Of course, it is not enough to produce propositions; what we are after is true propositions.  Aristotle believes that only propositions are true or false.  Truth or falsity (at least with respect to linguistic expression) is a matter of combining words into complete propositions that purport to assert something about the world.  Individual words or incomplete phrases, considered by themselves, are neither true nor false.  To say, “Socrates,” or “jumping up and down,” or “brilliant red” is not to assert anything true or false about the world.  It is to repeat words without making any claim about the way things are.  In the Metaphysics, Aristotle provides his own definition of true and false: “to say of what is that it is, and of what is not that it is not, is true”; and “to say of what is that it is not, or of what is not that it is, is false.” (IV.7.1011b25, Ross.)  In other words, a true proposition corresponds to the way things are.  But Aristotle is not proposing a correspondence theory of truth as an expert would understand it.  He is operating at a more basic level.  Consider the statement: “Spiders have eight legs.”  (Symbolically, “All S is P,” where S, the subject, is “spiders”; P, the predicate, is “the state of being eight-legged,” and the verb “is” functions as the copula.)  What does it mean to say that this claim is true?  If we observe spiders to discover how many legs they have, we will find that (except in a few odd cases) spiders do have eight legs, so the proposition will be true because what it says matches reality.  As we shall see, Aristotle’s logic is designed to produce just this kind of general statement.

4. Kinds of Propositions

Aristotle suggests that all propositions must either affirm or deny something.  Every proposition must be either an affirmation or a negation; it cannot be both.  He also points out that propositions can make claims about what necessarily is the case, about what possibly is the case, or even about what is impossible.  His modal logic, which deals with these further qualifications about possibility or necessity, presents difficulties of interpretation.  We will focus on his assertoric (or non-modal) logic here.  Still, many of Aristotle’s points about necessity and possibility seem highly intuitive.  In one famous example about a hypothetical sea battle, he observes that the necessary truth of a mere proposition does not trump the uncertainty of future events.  Although it is necessarily true that there will be or will not be a sea battle tomorrow, we cannot conclude that either alternative is necessarily true.  (De Interpretatione, 9.19a30ff.)  So the necessity that attaches to the proposition “there will or will not be a sea battle tomorrow” does not transfer over to the claim “there will be a sea battle tomorrow” or to the claim “there will not be a sea battle tomorrow.”  Aristotle goes out of his way to emphasize the point that our personal beliefs about what will happen in the future do not determine whether the individual propositions are true.  (Note that we must not confuse the necessary truth of a proposition with the necessity that precipitates the conclusion of a deductively-valid argument. The former is sometimes called “material,” “non-logical,” or “metaphysical” necessity; the latter, “formal,” “deductive,” or “logical” necessity.  We discuss these issues further below.)

Aristotle claims that all propositions can be expressed using the “Subject copula Predicate” formula and that complex propositions are, on closer inspection, collections of simpler claims that display, in turn, this fundamental structure.  Having fixed the proper logical form of a proposition, he goes on to classify different kinds of propositions.  He begins by distinguishing between particular terms and universal terms.  (The term he uses for “universal” is the Greek “katholou.”)  Particular terms refer to individual things; universal terms refer to groups of things.  The name “Socrates” is a particular term because it refers to a single human being; the word “spiders” is a universal term for it universally applies to all members of the group “spiders.”  Aristotle realizes, of course, that universal terms may be used to refer to parts of a group as well as to entire groups.  We may claim that all spiders have eight legs or that only some spiders have book-lungs.  In the first case, a property, eight-leggedness, is predicated of the entire group referred to by the universal term; in the second case, the property of having book-lungs is predicated of only part of the group.  So, to use Aristotelian language, one may predicate a property universally or not universally of the group referred to by a universal term.

This brings us to Aristotle’s classification of the four different kinds of categorical propositions (called “categorical propositions” because they assert a relationship between two categories or kinds).  Each different categorical proposition possesses quantity insomuch as it represents a universal or a particular predication (referring to all or only some members of the subject class).  It also possesses a definite quality (positive or negative) insomuch as it affirms or denies the specified predication.  The adjectives “all,” “no,” and “some” (which is understood as meaning “at least one”) determine the quantity of the proposition; the quality is determined by whether the verb is in the affirmative or the negative.  Rather than going into the details of Aristotle’s original texts, suffice it to say that contemporary logicians generally distinguish between four logical possibilities:

1.  Universal Affirmation: All S are P (called A statements from the Latin, “AFFIRMO”: I affirm).

2.  Universal Negation: No S are P (called E statements from “NEGO”: I deny).

3.  Particular Affirmation: Some S are P (called I statements from AFFIRMO).

4.  Particular Negation: Some S are not P (called O statements from NEGO).

Note that these four possibilities are not, in every instance, mutually exclusive.  As mentioned above, particular statements using the modifier “some” refer to at least one member of a group.  To say that “some S are P” is to say that “at least one S is P”; to say that “some S are not P” is to say that “at least one S is not P.”  It must follow then (at least on Aristotle’s system) that universal statements require the corresponding particular statement.  If “All S are P,” at least one S must be P; that is, the particular statement “Some S are P” must be true.  Again, if “No S are P,” at least one S must not be P; that is, the particular statement “Some S are not P” must be true.  (More on this, with qualifications, below.)  Note also that Aristotle treats propositions with an individual subject such as “Socrates is wise” as universal propositions (as if the proposition were saying something like “all instances of Socrates” are wise.)  One caveat:  Although we cannot linger on further complications here, keep in mind that this is not the only way to divide up logical possibility.
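
Read extensionally over finite sets (a modern, set-theoretic sketch of my own, not Aristotle's intensional reading), the four categorical forms can be rendered as:

```python
def A(S, P): return all(x in P for x in S)      # All S are P
def E(S, P): return not any(x in P for x in S)  # No S are P
def I(S, P): return any(x in P for x in S)      # Some S are P (at least one)
def O(S, P): return any(x not in P for x in S)  # Some S are not P

# Hypothetical sample classes, echoing the spiders example above.
spiders = {"tarantula", "black widow", "orb weaver"}
eight_legged = {"tarantula", "black widow", "orb weaver", "scorpion"}

print(A(spiders, eight_legged))  # True
print(O(spiders, eight_legged))  # False: the contradictory of the A form
```

Note that on this reading, the Aristotelian move from a universal statement to its corresponding particular goes through only when S is non-empty, a point taken up in section 7 below.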

5. Square of Opposition

Aristotle examines the way in which these four different categorical propositions are related to one another.  His views have come down to us as “the square of opposition,” a mnemonic diagram that captures, systematizes, and slightly extends what Aristotle says in De Interpretatione. (Cf. 6.17a25ff.)

Figure 1

The Traditional Square of Opposition

As it turns out, we can use a square with crossed interior diagonals (Fig. 1 above) to identify four kinds of relationships that hold between different pairs of categorical propositions.  Consider each relationship in turn.

1)  Contradictory propositions possess opposite truth-values.  In the diagram, they are linked by a diagonal line.  If one of two contradictories is true, the other must be false, and vice versa.  So the A proposition (All S are P) and the O proposition (Some S are not P) are contradictories.  Clearly, if it is true that “all S are P,” then the O statement that “some S are not P” must be false.  And if it is true that “some S are not P,” then the A statement that “all S are P” must be false.  The same relationship holds between E (No S are P) and I (Some S are P) propositions.  To use a simple example: If it is true that “all birds lay eggs,” then it must be false that “some birds do not lay eggs.”  And if it is true that “some birds do not fly,” then it must be false that “all birds fly.”

2)  Contrary propositions cannot both be true.  The top horizontal line in the square joining the A proposition (All S are P) to the E proposition (No S are P) represents this logical relationship.  Clearly, it cannot be true that “all S are P” and that “no S are P.”  The truth of one of these contrary propositions excludes the truth of the other.  It is possible, however, that both statements are false as in the case where some S are P and some (other) S are not P.  So, for example, the statements “all politicians tell lies” and “no politicians tell lies” cannot both be true.  They will, however, both be false if it is indeed the case that some politicians tell lies whereas some do not.

3)  The relationship of subalternation results when the truth of a universal proposition, “the superaltern,” requires the truth of a second particular proposition, “the subaltern.”  The vertical lines moving downward from the top to the bottom of the square in the diagram represent this condition.  Clearly, if all members of an existent group possess (or do not possess) a specific characteristic, it must follow that any smaller subset of that group must possess (or not possess) that specific characteristic.  If the A proposition (All S are P) is true, it must follow that the I proposition (“Some S are P”) must be true.  Again, if the E proposition (No S are P) is true, it must follow that the O proposition (Some S are not P) must be true.  Consider, for example, the statement, “all cheetahs are fast.”  If every member of the whole group of cheetahs is fast, then it must be the case that at least one member of the group of cheetahs is fast; that is, the statement “some cheetahs are fast” must be true.  And, to reformulate the same example as a negation, if it is true that “no cheetahs are slow,” then it must be the case that at least one member of the group of cheetahs is not slow; that is, the statement “some cheetahs are not slow” must be true.

Note that subalternation does not work in the opposite direction.  If “Some S are P,” it need not follow that “All S are P.”  And if “Some S are not P,” it need not follow that “No S are P.”  We should also point out that if the subaltern is false, it must follow that the superaltern is false.  If it is false to say that “Some S are P,” it must be false to say that “All S are P.”  And if it is false to say that “Some S are not P,” it must be false to say that “No S are P.”

4)  Subcontrary propositions cannot both be false.  The bottom horizontal line in the square joining the I proposition (Some S are P) to the O proposition (Some S are not P) represents this kind of subcontrary relationship.  Keeping to the assumptions implicit in Aristotle’s system, there are only three possibilities: (1) All S have property P; in which case, it must also be true (by subalternation) that “some S are P.”  (2) No S have property P; in which case it must also be true (by subalternation) that “some S are not P.”  (3)  Some S have and some S do not have property P; in which case it will be true that “some S are P” and that “some S are not P.”  It follows that at least one of a pair of subcontrary propositions must be true and that both will be true in cases where P is partially predicated of S.  So, for example, both members of the subcontrary pair “some men have beards” and “some men do not have beards” are true.  They are both true because having a beard is a contingent or variable male attribute.  In contrast, only one member of the subcontrary pair “some snakes are legless” and “some snakes are not legless” is true.  As all snakes are legless, the proposition “some snakes are not legless” must be false.
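
All four relations can be verified by brute force over small finite domains, provided the subject class is non-empty (the existential assumption built into Aristotle's system). This is my own sketch of the modern set-theoretic reading, not anything in Aristotle's text:

```python
from itertools import combinations

def A(S, P): return all(x in P for x in S)      # All S are P
def E(S, P): return not any(x in P for x in S)  # No S are P
def I(S, P): return any(x in P for x in S)      # Some S are P
def O(S, P): return any(x not in P for x in S)  # Some S are not P

def subsets(xs):
    xs = list(xs)
    return [set(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

domain = {1, 2, 3}
for S in subsets(domain):
    if not S:          # the traditional square assumes a non-empty subject class
        continue
    for P in subsets(domain):
        assert A(S, P) != O(S, P)          # contradictories: opposite truth-values
        assert E(S, P) != I(S, P)
        assert not (A(S, P) and E(S, P))   # contraries: never both true
        assert I(S, P) or O(S, P)          # subcontraries: never both false
        assert (not A(S, P)) or I(S, P)    # subalternation: A implies I
        assert (not E(S, P)) or O(S, P)    # subalternation: E implies O
print("all four relations hold for every non-empty S")
```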

Traditional logicians, inspired by Aristotle’s wide-ranging comments, identified a series of “immediate inferences” as a way of deriving new propositions through a routine rearrangement of terms.  Subalternation is an obvious example of immediate inference.  From “All S are P” we can immediately infer—that is, without argument—that “some S are P.”  They also recognized conversion, obversion, and contraposition as immediate inferences.

In conversion, one interchanges the S and P terms.  If, for example, we know that “No S is P,” we can immediately infer that “No P is S.”  (Once we know that “no circles are triangles,” we know right away that “no triangles are circles.”)

In obversion, one changes the quality of the proposition (from affirmative to negative, or vice versa) and replaces the predicate term with its complement.  If, for example, we know that “Some S are P,” we can immediately infer the obverse, “Some S are not non-P.”  (Once we know that “some students are happy,” we know right away that “some students are not unhappy.”)

Finally, in contraposition, one replaces both terms with their complements and reverses their order.  If, for example, we know that “All S are P,” we can infer the contrapositive “All non-P are non-S.”  (Once we know that “all voters are adults,” we know right away that “all non-adults are non-voters”; no child, for instance, can be a voter.)  More specific rules, restrictions, and details are readily available elsewhere.
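
These immediate inferences can likewise be checked mechanically over finite domains, representing "non-P" as the complement of P within a fixed domain. As before, this is my own finite-set sketch of the traditional rules:

```python
from itertools import combinations

def A(S, P): return all(x in P for x in S)      # All S are P
def E(S, P): return not any(x in P for x in S)  # No S are P
def I(S, P): return any(x in P for x in S)      # Some S are P
def O(S, P): return any(x not in P for x in S)  # Some S are not P

def subsets(xs):
    xs = list(xs)
    return [set(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

domain = set(range(4))
for S in subsets(domain):
    for P in subsets(domain):
        non_S, non_P = domain - S, domain - P
        assert E(S, P) == E(P, S)          # conversion: "No S is P" <-> "No P is S"
        assert I(S, P) == I(P, S)          # conversion also holds for I
        assert I(S, P) == O(S, non_P)      # obversion: "Some S are P" <-> "Some S are not non-P"
        assert A(S, P) == E(S, non_P)      # obversion: "All S are P" <-> "No S is non-P"
        assert A(S, P) == A(non_P, non_S)  # contraposition: "All S are P" <-> "All non-P are non-S"
print("conversion, obversion, and contraposition verified")
```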

6. Laws of Thought

During the 18th, 19th, and early 20th Centuries, scholars who saw themselves as carrying on the Aristotelian and Medieval tradition in logic often pointed to the “laws of thought” as the basis of all logic.  One still encounters this approach in textbook accounts of informal logic.  The usual list of logical laws (or logical first principles) includes three axioms: the law of identity, the law of non-contradiction, and the law of excluded middle.  (Some authors include a law of sufficient reason, that every event or claim must have a sufficient reason or explanation, and so forth.)  It would be a gross simplification to argue that these ideas derive exclusively from Aristotle or to suggest (as some authors seem to imply) that he self-consciously presented a theory uniquely derived from these three laws.  The idea is rather that Aristotle’s theory presupposes these principles and/or that he discusses or alludes to them somewhere in his work.  Traditional logicians did not regard them as abstruse or esoteric doctrines but as manifestly obvious principles that require assent for logical discourse to be possible.

The law of identity could be summarized as the patently unremarkable but seemingly inescapable notion that things must be, of course, identical with themselves.  Expressed symbolically: “A is A,” where A is an individual, a species, or a genus.  Although Aristotle never explicitly enunciates this law, he does observe, in the Metaphysics, that “the fact that a thing is itself is [the only] answer to all such questions as why the man is man, or the musician musical.” (VII.17.1041a16-18, Ross.)  This suggests that he does accept, unsurprisingly, the perfectly obvious idea that things are themselves.  If, however, identical things must possess identical attributes, this opens the door to various logical maneuvers.  One can, for example, substitute equivalent terms for one another and, even more portentously, one can arrive at some conception of analogy and induction.  Aristotle writes, “all water is said to be . . .  the same as all water  . . .  because of a certain likeness.” (Topics, I.7.103a19-20, Pickard-Cambridge.)  If water is water, then by the law of identity, anything we discover to be water must possess the same water-properties.

Aristotle provides several formulations of the law of non-contradiction, the idea that logically correct propositions cannot affirm and deny the same thing:

“It is impossible for anyone to believe the same thing to be and not be.” (Metaphysics, IV.3.1005b23-24, Ross.)

“The same attribute cannot at the same time belong and not belong to the same subject in the same respect.” (Ibid., IV.3.1005b19-20.)

“The most indisputable of all beliefs is that contradictory statements are not at the same time true.” (Ibid., IV.6.1011b13-14.)

Symbolically, the law of non-contradiction is sometimes represented as “not (A and not A).”

The law of excluded middle can be summarized as the idea that every proposition must be either true or false, not both and not neither.  In Aristotle’s words, “It is necessary for the affirmation or the negation to be true or false.”  (De Interpretatione, 9.18a28-29, Ackrill.)  Symbolically, we can represent the law of excluded middle as an exclusive disjunction: “A is true or A is false,” where only one alternative holds.  Although every proposition must be true or false, it does not follow, of course, that we can know whether a particular proposition is true or false.
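
On the modern two-valued reading, all three laws can be confirmed by exhausting the truth table for a single proposition. The sketch below is trivial by design, but it makes explicit that excluded middle, taken together with non-contradiction, is an exclusive disjunction:

```python
# Exhaust the two possible truth-values of a proposition A.
for a in (True, False):
    assert a == a                 # law of identity: A is A
    assert not (a and not a)      # law of non-contradiction: not (A and not A)
    assert a or not a             # law of excluded middle: A is true or A is false
    # taken together: exactly one of the two alternatives holds
    assert (a or not a) and not (a and not a)
print("all three laws hold for both truth-values")
```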

Despite perennial challenges to these so-called laws (by intuitionists, dialetheists, and others), Aristotelians inevitably claim that such counterarguments hinge on some unresolved ambiguity (equivocation), on a conflation of what we know with what is actually the case, on a false or static account of identity, or on some other failure to fully grasp the implications of what one is saying.

7. Existential Assumptions

Before we move on to consider Aristotle’s account of the syllogism, we need to clear up some widespread misconceptions and explain a few things about Aristotle’s project as a whole.  Criticisms of Aristotle’s logic often assume that what Aristotle was trying to do coincides with the basic project of modern logic.  Begin with the usual criticism brought against the traditional square of opposition.  For reasons we will not explore, modern logicians assume that universal claims about non-existent objects (or empty sets) are true but that particular claims about them are false.  On this reading, the claim that “all fairy godmothers are beautiful” is true, whereas the claim that “some fairy godmothers are beautiful” is false.  Clearly, this clashes with the traditional square of opposition.  By simple subalternation, the truth of the proposition “all fairy godmothers are beautiful” requires the truth of the proposition “some fairy godmothers are beautiful.”  If the first claim is true, the second claim must also be true.  For this and similar reasons, some modern logicians dismiss the traditional square as inadequate, claiming that Aristotle made a mistake or overlooked relevant issues.  Aristotle, however, is involved in a specialized project.  He elaborates an alternative logic, specifically adapted to the problems he is trying to solve.

Aristotle devises a companion-logic for science.  He relegates fictions like fairy godmothers and mermaids and unicorns to the realms of poetry and literature.  In his mind, they exist outside the ambit of science.  This is why he leaves no room for such non-existent entities in his logic.  This is a thoughtful choice, not an inadvertent omission.  Technically, Aristotelian science is a search for definitions, where a definition is “a phrase signifying a thing’s essence.” (Topics, I.5.102a37, Pickard-Cambridge.)  To possess an essence is literally to possess a “what-it-is-to-be” something (to ti ēn einai).  Because non-existent entities cannot be anything, they do not, in Aristotle’s mind, possess an essence.  They cannot be defined.  Aristotle makes this point explicitly in the Posterior Analytics.  He points out that a definition of a goat-stag, a cross between a goat and a deer (the ancient equivalent of a unicorn), is impossible.  He writes, “no one knows the nature of what does not exist—[we] can know the meaning of the phrase or name ‘goat-stag’ but not what the essential nature of a goat-stag is.” (II.7.92b6-8, Mure.)  Because we cannot know what the essential nature of a goat-stag is—indeed, it has no essential nature—we cannot provide a proper definition of a goat-stag.  So the study of goat-stags (or unicorns) is not open to scientific investigation.  Aristotle sets about designing a logic that is intended to display relations between scientific propositions, where science is understood as a search for essential definitions.  This is why he leaves no place for fictional entities like goat-stags (or unicorns).  Hence, the assumed validity of a logical maneuver like subalternation.

8. Form versus Content

However, this is not the only way Aristotle’s approach parts ways with more modern assumptions.  Some modern logicians might define logic as that philosophical inquiry which considers the form not the content of propositions.  Aristotle’s logic is unapologetically metaphysical.  We cannot properly understand what Aristotle is about by separating form from content.  Suppose, for example, I was to claim that (1) all birds have feathers and (2) that everyone in the Tremblay family wears a red hat.  These two claims possess the very same propositional form, A.  We can represent the first claim as: “All S are P,” where S=birds, and P=being feathered.  And we can also represent the second claim as “All S are P,” where S=members of the Tremblay family, and P=wearing a red hat.  Considered from an Aristotelian point of view, however, these two “All S are P” propositions possess a very different logical status.  Aristotle would view the relationship between birds and feathers expressed in the first proposition as a necessary link, for it is of the essence of birds to be feathered.  Something cannot be a bird and lack feathers.  The link between membership in the Tremblay family and the practice of wearing a red hat described in the second proposition is, in sharp contrast, a contingent fact about the world.  A member of the Tremblay family who wore a green hat would still be a member of the Tremblay family.  The fact that the Tremblays only wear red hats (because it is presently the fashion in Quebec) is an accidental (or surface) feature of what a Tremblay is.  So this second relationship holds in a much weaker sense.  In Aristotle’s mind, this has important consequences not just for metaphysics, but for logic.

It is hard to capture in modern English the underlying metaphysical force in Aristotle’s categorical statements.  In the Prior Analytics Aristotle renders the phrase “S is P” as “P belongs to S.”  The sense of belonging here is crucial.  Aristotle wants a logic that tells us what belongs to what.  But there are different levels of belonging.  My billfold belongs to me but this is a very tenuous sort of belonging.  The way my billfold belongs to me pales in significance to, say, the way a bill belongs to a duck-billed platypus.  It is not simply that the bill is physically attached to the platypus.  Even if you were to cut off the bill of a platypus, this would just create a deformed platypus; it would not change the sense of necessary belonging that connects platypuses and bills.  The deep nature of a platypus requires—it necessitates—a bill.  Insofar as logic is about discovering necessary relationships, it is not the mere arrangement of terms and symbols but their substantive meaning that is at issue.

As only one consequence of this “metaphysical attitude,” consider Aristotle’s attitude towards inductive generalizations. Aristotle would have no patience for the modern penchant for purely statistical interpretations of inductive generalizations.  It is not the number of times something happens that matters.  It is the deep nature of the thing that counts.  If the wicked boy (or girl) next door pulls three legs off a spider, this is just happenstance.  This five-legged spider does not (for Aristotle) present a serious counterexample to the claim that “all spiders are eight-legged.”  The fact that someone can pull legs off a spider does not change the fact that there is a necessary connection between spiders and having eight legs.  Aristotle is too keen a biologist not to recognize that there are accidents and monstrosities in the world, but the existence of these individual imperfections does not change the deep nature of things.  Aristotle recognizes then that some types of belonging are more substantial—that is, more real—than others.  But this has repercussions for the ways in which we evaluate arguments.  In Aristotle’s mind, the strength of the logical connection that ties a conclusion to the premises in an argument depends, decisively, on the metaphysical status of the claims we are making.  Another example may help.  Suppose I were to argue, first:  “Ostriches are birds; all birds have feathers, therefore, ostriches have feathers.”  Then, second, “Hélène is the youngest daughter of the Tremblay family; all members of the Tremblay family wear red hats; therefore, Hélène wears a red hat.”  These arguments possess the same form.  (We will worry about formal details later.)  But, to Aristotle’s way of thinking, the first argument is, logically, more rigorous than the second.  Its conclusion follows from the essential and therefore necessary features of birds.  In the second argument, the conclusion only follows from the contingent state of fashion in Quebec.  
In Aristotelian logic, the strength of an argument depends, in some important way, on metaphysical issues.  We can’t simply say “All S are P; and so forth” and be done with it.  We have to know what “S” and “P” stand for.  This is very different from modern symbolic logic.  Although Aristotle does use letters to take the place of variable terms in a logical relation, we should not be misled into thinking that the substantive content of what is being discussed does not matter.

9. The Syllogism

We are now in a position to consider Aristotle’s theory of the syllogism.  Although one senses that Aristotle took great pride in these accomplishments, we could complain that the persistent focus on the mechanics of the valid syllogism has obscured his larger project.  We will only cover the most basic points here, largely ignoring hypothetical syllogisms, modal syllogisms, extended syllogisms (sorites), inter alia.  The syllogistic now taught in undergraduate philosophy departments represents a later development of Aristotle’s ideas, after they were reworked at the hands of Medieval and modern logicians.  We will begin with a brief account of the way syllogisms are presented in modern logic and then move on to discussion of Aristotle’s somewhat different account.

We can define a syllogism, in relation to its logical form, as an argument made up of three categorical propositions, two premises (which set out the evidence), and a conclusion (that follows logically from the premises).  In the standard account, the propositions are composed of three terms, a subject term, a predicate term, and a middle term: the subject term is the (grammatical) subject of the conclusion; the predicate term modifies the subject in the conclusion, and the middle term links the subject and predicate terms in the premises.  The subject and predicate terms appear in different premises; the middle term appears once in each premise.  The premise with the predicate term and the middle term is called the major premise; the premise with the subject term and the middle term is called the minor premise.  Because syllogisms depend on the precise arrangement of terms, syllogistic logic is sometimes referred to as term logic.  Most readers of this piece are already familiar with some version of a proverbial (non-Aristotelian) example: “All men are mortal; (all) Socrates, Plato, and Aristotle are men; therefore, Socrates, Plato and Aristotle are mortal.”  If we symbolize the three terms in this syllogism such that Middle Term, M=man; Subject Term, S=Socrates, Plato, Aristotle; Predicate Term, P=mortal; we can represent the argument as: Major Premise:  All M is P;  Minor Premise:  All S is M;  Conclusion:  So, All S is P.  In the Middle Ages, scholars came up with Latin names for valid syllogisms, using vowels to represent the position of each categorical proposition.  (Their list is readily available elsewhere.)  The precise arrangement of propositions in this syllogism goes by the Latin moniker “Barbara” because the syllogism is composed of three A propositions: hence, BArbArA: AAA.  
A syllogism in Barbara is clearly valid where validity can be understood (in modern terms) as the requirement that if the premises of the argument are true, then the conclusion must be true.  Modern textbook authors generally prove the validity of syllogisms in two ways.  First, they use a number of different rules.  For example: “when major and minor terms are universal in the conclusion they must be universal in the premises”; “if one premise is negative, the conclusion must be negative”; “the middle term in the premises must be distributed (include every member of a class) at least once,” and so on.  Second, they use Venn diagrams, intersecting circles marked to indicate the extension (or range) of different terms, to determine if the information contained in the conclusion is also included in the premises.
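The set-theoretic reading that underlies the Venn-diagram method can be sketched in a few lines of Python.  This is a modern illustration only, not anything Aristotle proposes; the function name and the toy model are our own invention:

```python
# Read "All S are P" extensionally: the set S is a subset of the set P.
# On this reading, Barbara (All M are P; All S are M; so All S are P)
# is valid because subsethood is transitive.

def all_are(s, p):
    """Truth of the A-proposition 'All S are P' in a set model."""
    return s <= p

# A toy model built from the proverbial example in the text.
man    = {"Socrates", "Plato", "Aristotle", "Callias"}
mortal = man | {"Bucephalus"}            # every man is mortal
trio   = {"Socrates", "Plato", "Aristotle"}

# Major premise: All M are P.  Minor premise: All S are M.
if all_are(man, mortal) and all_are(trio, man):
    # Conclusion: All S are P -- guaranteed whenever the premises hold.
    assert all_are(trio, mortal)
```

The point of the sketch is that nothing about the particular sets matters: whenever both premises come out true in a model, the conclusion does too, which is just the modern definition of validity invoked above.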

Modern logicians, who still hold to traditional conventions, classify syllogisms according to figure and mood.  The four figure classification derives from Aristotle; the mood classification, from Medieval logicians.  One determines the figure of a syllogism by recording the positions the middle term takes in the two premises.  So, for Barbara above, the figure is MP-SM, generally referred to as Figure 1.  One determines the mood of a syllogism by recording the precise arrangement of categorical propositions.  So, for Barbara, the mood is AAA.  By tabulating figures and moods, we can make an inventory of valid syllogisms.  (Medieval philosophers devised a mnemonic poem for such purposes that begins with the line “Barbara, Celarent, Darii, Ferioque prioris.”  Details can be found in many textbooks.)  Although traditional classroom treatments prefer to stick to this time-honoured approach, Fred Sommers and George Englebretsen have devised a more up-to-date term logic that uses equations with “+” and “−” operators and is more attuned to natural language reasoning than the usual predicate logic.  We turn then to a brief discussion of Aristotle’s own account of the syllogism.
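The figure classification just described is purely mechanical, which a short sketch can make vivid.  Again, this is our own modern gloss, with invented labels; the four-figure table follows the MP-SM convention used above:

```python
# Determine a syllogism's figure from where the middle term sits in
# each premise.  Premises are given as (subject, predicate) pairs of
# the schematic letters 'S', 'P', 'M'.

FIGURES = {("MP", "SM"): 1,   # Figure 1, e.g. Barbara, Celarent
           ("PM", "SM"): 2,   # Figure 2, e.g. Cesare
           ("MP", "MS"): 3,
           ("PM", "MS"): 4}

def figure(major, minor):
    """Figure number for a major and minor premise."""
    return FIGURES[(major[0] + major[1], minor[0] + minor[1])]

print(figure(("M", "P"), ("S", "M")))   # Barbara's arrangement -> 1
print(figure(("P", "M"), ("S", "M")))   # Cesare's arrangement  -> 2
```

Recording the mood alongside the figure (AAA for Barbara, EAE for Cesare and Celarent) then yields exactly the inventory the Medieval mnemonic poem encodes.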

As already mentioned, we need to distinguish between two kinds of necessity.  Aristotle believes in metaphysical or natural necessity.  Birds must have feathers because that is their nature.  So the proposition “All birds have feathers” is necessarily true.  But Aristotle identifies the syllogistic form with the logical necessity that obtains when two separate propositions necessitate a third.  He defines a sullogismos as “a discourse [logos] in which, certain things being stated, something other than what is stated follows of necessity from them.” (Prior Analytics, I.1.24b18-20, Jenkinson.)  The emphasis here is on the sense of inevitable consequence that precipitates a conclusion when certain forms of propositions are added together.  Indeed, the original Greek term for syllogism is more rigorously translated as “deduction.”  In the Prior Analytics, Aristotle’s method is exploratory.  He searches for pairs of propositions that combine to produce a necessary conclusion.  He begins by accepting that a few syllogisms are self-evidently (or transparently) true.  Barbara, AAA-Fig.1, discussed above, is the best example of this kind of “perfect syllogism.”  Another example of a perfect syllogism is Celarent: EAE-Fig.1.  On seeing the arrangement of terms in such cases, one immediately understands that the conclusion follows necessarily from the premises.  In the case of imperfect syllogisms Aristotle relies on a method of proof that translates them, step-by-step, into perfect syllogisms through a careful rearrangement of terms.  He does this directly, through conversion, or indirectly, through the relationships of contradiction and contrariety outlined in the square of opposition.
To cite only one very simple example, consider a brief passage in the Prior Analytics (I.5.27a5ff) where Aristotle demonstrates that the propositions “No P are M,” and “All S are M” can be combined to produce a syllogism with the conclusion, “No S are P.”  If “No P are M,” it must follow that “No M are P” (conversion); but “No M are P” combined with the second premise, “All S are M” proves that “No S are P.”  (This is to reduce the imperfect syllogism Cesare to the perfect syllogism Celarent.)  This conversion of an imperfect syllogism into a perfect syllogism demonstrates that the original arrangement of terms is a genuine deduction.  In other cases, Aristotle proves that particular arrangements of terms cannot yield proper syllogisms by showing that, in these instances, true premises lead to obviously false or contradictory conclusions.  Alongside these proofs of logical necessity, Aristotle derives general rules for syllogisms, classifies them according to figure, and so on.
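The conversion step in this reduction turns on a simple structural fact: an E-proposition converts simply, because disjointness is symmetric.  A brief set-theoretic sketch (a modern gloss, with invented set names) makes both the conversion and the resulting Celarent inference checkable:

```python
# Read "No X are Y" extensionally: the sets X and Y are disjoint.
# Disjointness is symmetric, so "No P are M" converts to "No M are P".

def no_are(x, y):
    """Truth of the E-proposition 'No X are Y' in a set model."""
    return x.isdisjoint(y)

# Cesare: No P are M; All S are M  =>  No S are P.
P = {"fish"}
M = {"sparrow", "ostrich"}
S = {"sparrow"}

# Step 1 (conversion): "No P are M" and "No M are P" always agree.
assert no_are(P, M) == no_are(M, P)
# Step 2: with the converted premise we are in Celarent (Figure 1).
if no_are(M, P) and S <= M:
    assert no_are(S, P)      # Cesare's conclusion, via Celarent
```

The reduction works because nothing in it depends on the particular sets chosen; the conversion in step 1 holds in every model.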

It is important to reiterate that Aristotelian syllogisms are not (primarily) about hypothetical sets, imaginary classes, or purely abstract mathematical entities.  Aristotle believes there are natural groups in the world—species and genera—made up of individual members that share a similar nature, and hence similar properties.   It is this sharing of individual things in a similar nature that makes universal statements possible.  Once we have universal terms, we can make over-arching statements that, when combined, lead inescapably to specific results.  In the most rigorous syllogistic, metaphysical necessity is added to logical necessity to produce an unassailable inference.  Seen in this Aristotelian light, syllogisms can be construed as a vehicle for identifying the deep, immutable natures that make things what they are.

Medieval logicians summarized their understanding of the rationale underlying the syllogism in the so-called dictum de omni et nullo (the maxim of all and none), the principle that whatever is affirmed or denied of a whole must be affirmed or denied of a part (which they alleged derived from a reading of Prior Analytics I.1.24b27-30).  Some contemporary authors have claimed that Aristotelian syllogistic is at least compatible with a deflationary theory of truth, the modern idea that truth-claims about propositions amount to little more than an assertion of the statement itself.  (To say that “S is P” is true, is just to assert “S is P.”)  Perhaps it would be better to say that one can trace the modern preoccupation with validity in formal logic to the distinction between issues of logical necessity and propositional truth implicit in Aristotle.  In Aristotle’s logic, arguments do not take the form: “this state of affairs is true/false,” “this state of affairs is true/false,” therefore “this state of affairs is true/false.”  We do not argue “All S is M is true” but merely, “All S is M.”  When it comes to determining validity—that is, when it comes to determining whether we have discovered a true syllogism—the question of the truth or falsity of propositions is pushed aside and attention is focused on an evaluation of the logical connection between premises and conclusion.  Obviously, Aristotle recognizes that ascertaining the material truth of premises is an important part of argument evaluation, but he does not present a “truth-functional” logic.  The concept of a “truth value” does not play any explicit role in his formal analysis the way it does, for example, with modern truth tables.  Mostly, Aristotle wants to know what we can confidently conclude from two presumably true premises; that is, what kind of knowledge can be produced or demonstrated if two given premises are true.

10. Inductive Syllogism

Understanding what Aristotle means by inductive syllogism is a matter of serious scholarly dispute.  Although there is only abbreviated textual evidence to go by, his account of inductive argument can be supplemented by his ampler account of its rhetorical analogues, argument from analogy and argument from example.  What is clear is that Aristotle thinks of induction (epagoge) as a form of reasoning that begins in the sense perception of particulars and ends in an understanding that can be expressed in a universal proposition (or even a concept).  We pick up mental momentum through a familiarity with particular cases that allows us to arrive at a general understanding of an entire species or genus.  As we discuss below, there are indications that Aristotle views induction, in the first instance, as a manifestation of immediate understanding and not as an argument form.  Nonetheless, in the Prior Analytics II.23 (and 24), he casts inductive reasoning in syllogistic form, illustrating the “syllogism that springs out of induction” (ho ex epagoges sullogismos) by an argument about the longevity of bileless animals.

Relying on old biological ideas, Aristotle argues that we can move from observations about the longevity of individual species of bileless animals (that is, animals with clean blood) to the universal conclusion that bilelessness is a cause of longevity.  His argument can be paraphrased in modern English: All men, horses, mules, and so forth, are long-lived; all men, horses, mules, and so forth, are bileless animals; therefore, all bileless animals are long-lived.  Although this argument seems, by modern standards, invalid, Aristotle apparently claims that it is a valid deduction.  (Remember that the word “syllogism” means “deduction,” so an “inductive syllogism” is, literally, an “inductive deduction.”)  He uses a technical notion of “convertibility” to formally secure the validity of the argument.  According to this logical rule, terms that cover the same range of cases (because they refer to the same nature) are interchangeable (antistrepho).  They can be substituted for one another.  Aristotle believes that because the logical terms “men, horses, mules, etc” and “bileless animals” refer to the same genus, they are convertible.  If, however, we invert the terms in the proposition “all men, horses, mules, and so forth, are bileless animals” to “all bileless animals are men, horses, mules, and so forth,” we can then rephrase the original argument: All men, horses, mules, and so forth, are long-lived; all bileless animals are men, horses, mules, and so forth; therefore, all bileless animals are long-lived.  This revised induction possesses an obviously valid form (Barbara, discussed above).  Note that Aristotle does not view this inversion of terms as a formal gimmick or trick; he believes that it reflects something metaphysically true about shared natures in the world.
(One could argue that inductive syllogism operates by means of the quantification of the predicate term as well as the subject term of a categorical proposition, but we will not investigate that issue here.)

These passages pose multiple problems of interpretation.  We can only advance a general overview of the most important disagreements here.  We might identify four different interpretations of Aristotle’s account of the inductive syllogism.  (1)  The fact that Aristotle seems to view this as a valid syllogism has led many commentators (such as Ross, McKirahan, Peters) to assume that he is referring to what is known as “perfect induction,” a generalization that is built up from a complete enumeration of particular cases.  The main problem here is that it seems to involve a physical impossibility.  No one could empirically inspect every bileless animal (and/or species) to ascertain that the connection between bilelessness and longevity obtains in every case.  (2) Some commentators combine this first explanation with the further suggestion that the bileless example is a rare case and that Aristotle believes, in line with modern accounts, that most inductions only produce probable belief.  (Cf. Govier’s claim that there is a “tradition going back to Aristotle, which maintains that there are  . . .  only two broad types of argument: deductive arguments which are conclusive, and inductive arguments, which are not.”  (Problems in Argument Analysis, 52.))  One problem with such claims is that they overlook the clear distinction that Aristotle makes between rigorous inductions and rhetorical inductions (which we discuss below).  (3)  Some commentators claim that Aristotle (and the ancients generally) overlooked the inherent tenuousness of the inductive reasoning.  On this account, Empiricists such as Locke and Hume discovered something seriously wrong about induction that escaped the notice of an ancient author like Aristotle.  Philosophers in the modern Anglo-American tradition largely favor this interpretation.  (Cf. Garrett’s and Barbanell’s insistence that “Hume was the first to raise skeptical doubts about inductive reasoning, leaving a puzzle as to why the concerns he highlighted had earlier been so completely overlooked.”  (“Induction,” 172.)  Such allegations do not depend, however, on any close reading of a wealth of relevant passages in the Aristotelian corpus and in ancient philosophy generally.  (4) Finally, a minority contemporary view, growing in prominence, has argued that Aristotle did not conceive of induction as an enumerative process but as a matter of intelligent insight into natures.  (Cf. McCaskey, Biondi, Rijk, Groarke.)  On this account, Aristotle does not mean to suggest that inductive syllogism depends on an empirical inspection of every member of a group but on a universal act of understanding that operates through sense perception.  Aristotelian induction can best be compared to modern notions of abduction or inference to the best explanation.  This non-mathematical account has historical precedents in neo-Platonism, Thomism, Idealism, and in the textbook literature of traditionalist modern logicians that opposed the new formal logic.  This view has been criticized, however, as a form of mere intuitionism dependent on an antiquated metaphysics.

The basic idea that induction is valid will raise eyebrows, no doubt.  It is important to stave off some inevitable criticism before continuing.  Modern accounts of induction, deriving, in large part, from Hume and Locke, display a mania for prediction.  (Hence Hume’s question: how can we know that the future bread we eat will nourish us based on past experience of eating bread?)  But this is not primarily how Aristotle views the problem.  For Aristotle, induction is about understanding natural kinds.  Once we comprehend the nature of something, we will, of course, be able to make predictions about its future properties, but understanding its nature is the key.  In Aristotle’s mind, rigorous induction is valid because it picks out those necessary and essential traits that make something what it is.  To use a very simple example, understanding that all spiders have eight legs—that is, that all undamaged spiders have eight legs—is a matter of knowing something deep about the biological nature that constitutes a spider.  Something that does not have eight legs is not a spider.  (Fruitful analogies might be drawn here to the notion of “a posteriori necessity” countenanced by contemporary logicians such as Hilary Putnam and Saul Kripke or to the “revised” concept of a “natural kind” advanced by authors such as Hilary Kornblith or Brian Ellis.)

It is commonly said that Aristotle sees syllogisms as a device for explaining relationships between groups.  This is, in the main, true.  Still, there has to be some room for a consideration of individuals in logic if we hope to include induction as an essential aspect of reasoning.  As Aristotle explains, induction begins in sense perception and sense perception only has individuals as its object.  Some commentators would limit inductive syllogism to a movement from smaller groups (what Aristotle calls “primitive universals”) to larger groups, but one can only induce a generalization about a smaller group on the basis of a prior observation of individuals that compose that group.  A close reading reveals that Aristotle himself mentions syllogisms dealing with individuals (about the moon, Topics, 78b4ff; about the wall, 78b13ff; about the eclipse, Posterior Analytics, 93a29ff, and so on.)  If we treat individuals as universal terms or as representative of universal classes, this poses no problem for formal analysis.  Collecting observations about one individual or about individuals who belong to a larger group can lead to an accurate generalization.

11. Deduction versus Induction

We cannot fully understand the nature or role of inductive syllogism in Aristotle without situating it with respect to ordinary, “deductive” syllogism.  Aristotle’s distinction between deductive and inductive argument is not precisely equivalent to the modern distinction.  Contemporary authors differentiate between deduction and induction in terms of validity.  (A small group of informal logicians called “Deductivists” dispute this account.)  According to a well-worn formula, deductive arguments are valid; inductive arguments are invalid.  The premises in a deductive argument guarantee the truth of the conclusion: if the premises are true, the conclusion must be true.  The premises in an inductive argument provide some degree of support for the conclusion, but it is possible to have true premises and a false conclusion.  Although some commentators attribute such views to Aristotle, this distinction between strict logical necessity and merely probable or plausible reasoning more easily maps onto the distinction Aristotle makes between scientific and rhetorical reasoning (both of which we discuss below).  Aristotle views inductive syllogism as scientific (as opposed to rhetorical) induction and therefore as a more rigorous form of inductive argument.

We can best understand what this amounts to by a careful comparison of a deductive and an inductive syllogism on the same topic.  If we reconstruct, along Aristotelian lines, a deduction on the longevity of bileless animals, the argument would presumably run: All bileless animals are long-lived; all men, horses, mules, and so forth, are bileless animals; therefore, all men, horses, mules, and so forth, are long-lived.  Defining the terms in this syllogism as: Subject Term, S=men, horses, mules, and so forth; Predicate Term, P=long-lived animals; Middle Term, M=bileless animals, we can represent this metaphysically correct inference as:  Major Premise: All M are P.  Minor Premise: All S are M.  Conclusion: Therefore all S are P.  (Barbara.)  As we already have seen, the corresponding induction runs: All men, horses, mules, and so forth, are long-lived; all men, horses, mules, and so forth, are bileless animals; therefore, all bileless animals are long-lived.  Using the same definition of terms, we are left with:  Major Premise: All S are P.  Minor Premise: All S are M (convertible to All M are S).  Conclusion: Therefore, all M are P.  (Converted to Barbara.)  The difference between these two inferences is the difference between deductive and inductive argument in Aristotle.
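The rearrangement of terms that separates the two inferences can be made explicit in a short sketch.  This is a modern set-theoretic gloss of the article’s own example; in particular, the “convertibility” of S and M is stated as an assumed premise (Aristotle’s metaphysical claim about shared natures), not something the code derives:

```python
# S: the individual species; M: the shared nature; P: the property.
species   = {"man", "horse", "mule"}            # S
bileless  = {"man", "horse", "mule"}            # M, assumed coextensive with S
longlived = {"man", "horse", "mule", "raven"}   # P

# Inductive premises (read "All X are Y" as subsethood):
assert species <= longlived      # All S are P
assert species <= bileless       # All S are M

# Convertibility -- the assumption Aristotle adds on metaphysical grounds:
assert bileless <= species       # All M are S

# With the converted minor premise, the induction is Barbara with S
# playing the middle-term role, and the conclusion follows:
assert bileless <= longlived     # All M are P
```

Without the convertibility assumption the final step would not go through, which is exactly why Aristotle must ground the inversion of terms in the claim that “men, horses, mules, and so forth” and “bileless animals” pick out the same genus.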

Clearly, Aristotelian and modern treatments of these issues diverge.  As we have already indicated, in the modern formalism, one automatically defines subject, predicate, and middle terms of a syllogism according to their placement in the argument.  For Aristotle, the terms in a rigorous syllogism have a metaphysical significance as well.  In our correctly formulated deductive-inductive pair, S represents individual species and/or the individuals that make up those species (men, horses, mules, and so forth); M represents the deep nature of these things (bilelessness), and P represents the property that necessarily attaches to that nature (longevity).  Here then is the fundamental difference between Aristotelian deduction and induction in a nutshell.  In deduction, we prove that a property (P) belongs to individual species (S) because it possesses a certain nature (M); in induction, we prove that a property (P) belongs to a nature (M) because it belongs to individual species (S).  Expressed formally, deduction proves that the subject term (S) is associated with a predicate term (P) by means of the middle term (M); induction proves that the middle term (M) is associated with the predicate term (P) by means of the subject term (S).  (Cf. Prior Analytics, II.23.68b31-35.)  Aristotle does not claim that inductive syllogism is invalid but that the terms in an induction have been rearranged.  In deduction, the middle term joins the two extremes (the subject and predicate terms); in induction, one extreme, the subject term, acts as the middle term, joining the true middle term with the other extreme.  This is what Aristotle means when he maintains that in induction one uses a subject term to argue to a middle term.  Formally, with respect to the arrangement of terms, the subject term becomes the “middle term” in the argument.

Aristotle distinguishes then between induction and deduction in three different ways.  First, induction moves from particulars to a universal, whereas deduction moves from a universal to particulars.  The bileless induction moves from particular species to a universal nature; the bileless deduction moves from a universal nature to particular species.  Second, induction moves from observation to language (that is, from sense perception to propositions), whereas deduction moves from language to language (from propositions to a new proposition).  The bileless induction is really a way of demonstrating how observations of bileless animals lead to (propositional) knowledge about longevity; the bileless deduction demonstrates how (propositional) knowledge of a universal nature leads to (propositional) knowledge about particular species. Third, induction identifies or explains a nature, whereas deduction applies or demonstrates a nature.  The bileless induction provides an explanation of the nature of particular species: it is of the nature of bileless organisms to possess a long life.  The bileless deduction applies that finding to particular species; once we know that it is of the nature of bileless organisms to possess a long life, we can demonstrate or put on display the property of longevity as it pertains to particular species.

One final point needs clarification.  The logical form of the inductive syllogism, after the convertibility maneuver, is the same as the deductive syllogism.  In this sense, induction and deduction possess the same (final) logical form.  But, of course, in order to successfully perform an induction, one has to know that convertibility is possible, and this requires an act of intelligence which is able to discern the metaphysical realities between things out in the world.  We discuss this issue under non-discursive reasoning below.

12. Science

Aristotle wants to construct a logic that provides a working language for rigorous science as he understands it.  Whereas we have been talking of syllogisms as arguments, Aristotelian science is about explanation.  Admittedly, informal logicians generally distinguish between explanation and argument.  An argument is intended to persuade about a debatable point; an explanation is not intended to persuade so much as to promote understanding.  Aristotle views science as involving logical inferences that move beyond what is disputable to a consideration of what is the case.  Still, the “explanatory” syllogisms used in science possess precisely the same formal structures as “argumentative” syllogisms.  So we might consider them arguments in a wider sense.  For his part, Aristotle relegates eristic reason to the broad field of rhetoric.  He views science, perhaps naively, as a domain of established fact.  The syllogisms used in science are about establishing an explanation from specific cases (induction) and then applying or illustrating this explanation to specific cases (deduction).

The ancient Greek term for science, “episteme,” is not precisely equivalent to its modern counterpart.  In Aristotle’s worldview, science, as the most rigorous sort of discursive knowledge, is opposed to mere opinion (doxa); it is about what is universal and necessary as opposed to what is particular and contingent, and it is theoretical as opposed to practical.  Aristotle believes that knowledge, understood as justified true belief, is most perfectly expressed in a scientific demonstration (apodeixis), also known as an apodeictic or scientific syllogism.  He posits a number of specific requirements for this most rigorous of all deductions.  In order to qualify as a scientific demonstration, a syllogism must possess premises that are “true, primary, immediate, better known than, prior to, and causative of the conclusion.” (Posterior Analytics, I.2.71b20ff, Tredennick.)  It must yield information about a natural kind or a group of individual things.  And it must produce universal knowledge (episteme).  Specialists have disputed the meaning of these individual requirements, but the main message is clear.  Aristotle accepts, as a general rule, that a conclusion in an argument cannot be more authoritative than the premises that led to that conclusion.  We cannot derive better (or more reliable) knowledge from worse (or less reliable) knowledge.  Given that a scientific demonstration is the most rigorous form of knowledge possible, we must start with premises that are utterly basic and as certain as possible, which are “immediately” induced from observation, and which conform to the necessary structure of the world in a way that is authoritative and absolutely incontrovertible.  This requires a reliance on first principles which we discuss below.

In the best-case scenario, Aristotelian science is about finding definitions of species that, according to a somewhat bald formula, identify the genus (the larger natural group) and the differentia (that unique feature that sets the species apart from the larger group).  As Aristotle’s focus on definitions is a bit cramped and less than consistent (he himself spends a great deal of time talking about necessary rather than essential properties), let us broaden his approach to science to focus on ostensible definitions, where an ostensible definition is either a rigorous definition or, more broadly, any properly-formulated phrase that identifies the unique properties of something.  On this looser approach, which is more consistent with Aristotle’s actual practice, to define an entity is to identify the nature, the essential and necessary properties, that make it uniquely what it is.  Suffice it to say that Aristotle’s idealized account of what science entails needs to be expanded to cover a wide range of activities that fall under what is now known as scientific practice.  What follows is a general sketch of his overall orientation.  (We should point out that Aristotle himself resorts to whatever informal methods seem appropriate when reporting on his own biological investigations without too much concern for any fixed ideal of formal correctness.  He makes no attempt to cast his own scientific conclusions in metaphysically-correct syllogisms.  One could perhaps insist that he uses enthymemes (syllogisms with unstated premises), but mostly, he just seems to record what seems appropriate without any deliberate attempt at correct formalization.  Note that most of Aristotle’s scientific work is “historia,” an earlier stage of observing, fact-collecting, and opinion-reporting that precedes the principled theorizing of advanced science.)

For Aristotle, even theology is a science insomuch as it deals with universal and necessary principles.  Still, in line with modern attitudes (and in opposition to Plato), Aristotle views sense-perception as the proper route to scientific knowledge.  Our empirical experience of the world yields knowledge through induction.  Aristotle elaborates then an inductive-deductive model of science.  Through careful observation of  particular species, the scientist induces an ostensible definition to explain a nature and then demonstrates the consequences of that nature for particular species.  Consider a specific case.  In the Posterior Analytics (II.16-17.98b32ff, 99a24ff), Aristotle mentions an explanation about why deciduous plants lose their leaves in the winter.  The ancients apparently believed this happens because sap coagulates at the base of the leaf (which is not entirely off the mark).  We can use this ancient example of a botanical explanation to illustrate how the business of Aristotelian science is supposed to operate.  Suppose we are a group of ancient botanists who discover, through empirical inspection, why deciduous plants such as vines and figs lose their leaves.  Following Aristotle's lead, we can cast our discovery in the form of the following inductive syllogism:  “Vine, fig, and so forth, are deciduous.  Vine, fig, and so forth, coagulate sap.  
Therefore, all sap-coagulators are deciduous.”  This induction produces the definition of “deciduous.”  (“Deciduous” is the definiendum; sap-coagulation, the definiens; the point being that everything that is a sap-coagulator is deciduous, which might not be the case if we turned it around and said “All deciduous plants are sap-coagulators.”)  But once we have a definition of “deciduous,” we can use it as the first premise in a deduction to demonstrate something about, say, the genus “broad-leaved trees.”  We can apply, in other words, what we have learned about deciduous plants in general to the more specific genus of broad-leaved trees.  Our deduction will read:  “All sap-coagulators are deciduous.  All broad-leaved trees are sap-coagulators.  Therefore, all broad-leaved trees are deciduous.”  We can express all this symbolically.  For the induction, where S=vine, fig, and so forth, P=deciduous, M=being a sap-coagulator, the argument is: “All S is P; all S is M (convertible to all M is S); therefore, all M is P” (converted to Barbara).  For the deduction, where S=broad-leaved trees, M=being a sap-coagulator, P=deciduous, the argument can be represented: “All M is P; all S is M; therefore, all S is P” (Barbara).  This is then the basic logic of Aristotelian science.
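The inductive-deductive pattern just described can be sketched in a few lines of Python.  This is purely illustrative, not anything in Aristotle: the sets, the sample species, and the extensional reading of “all X is Y” as set inclusion are modeling choices of our own.  The sketch shows how the convertibility of S and M licenses the induction, and how the induced generalization then serves as the major premise of a deduction in Barbara.

```python
# Illustrative model (not Aristotle's own formalism): each term is a set of
# individuals, and "all X is Y" is read extensionally as set inclusion.
S = {"vine", "fig"}          # the observed species
P = {"vine", "fig", "oak"}   # deciduous things
M = {"vine", "fig"}          # sap-coagulators; convertible with S (same members)

def all_is(x, y):
    """'All X is Y' as subset inclusion."""
    return x <= y

# Inductive syllogism: all S is P; all S is M (convertible, so all M is S);
# therefore all M is P.
assert all_is(S, P) and all_is(S, M) and all_is(M, S)
assert all_is(M, P)   # the induced generalization holds

# Deduction in Barbara: all M is P; all S2 is M; therefore all S2 is P.
S2 = {"vine", "fig"}  # a hypothetical narrower group, e.g. broad-leaved trees
assert all_is(M, P) and all_is(S2, M)
assert all_is(S2, P)  # the conclusion follows
```

Note that the first assertion block encodes exactly the convertibility requirement discussed above: without M being a subset of S as well as S of M, the step from the observed cases to “all M is P” would not go through.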

A simple diagram of how science operates follows (Figure 2).

Figure 2

The Inductive-Deductive Method of Aristotelian Science

Aristotle views science as a search for causes (aitia).  In a well-known example about planets not twinkling because they are close to the earth (Posterior Analytics, I.13), he makes an important distinction between knowledge of the fact and knowledge of the reasoned fact.  The rigorous scientist aims at knowledge of the reasoned fact, which explains why something is the way it is.  In our example, sap-coagulation is the cause of deciduousness; deciduousness is not the cause of sap-coagulation.  That is why “sap-coagulation” features here as the middle term: it is the cause of the phenomenon being investigated.  The deduction “All sap-coagulators are deciduous; all broad-leaved trees are sap-coagulators; therefore, all broad-leaved trees are deciduous” counts as knowledge of the reasoned fact because it reveals the cause of broad-leaved deciduousness.

Aristotle makes a further distinction between what is more knowable relative to us and what is more knowable by nature (or in itself).  He remarks in the Physics, “The natural way of [inquiry] is to start from the things which are more knowable and obvious to us and proceed towards those which are clearer and more knowable by nature; for the same things are not ‘knowable relatively to us’ and ‘knowable’ without qualification.”  (I.184a15, Hardie, Gaye.)  In science we generally move from the effect to the cause, from what we see and observe around us to the hidden origins of things.  The outward manifestation of the phenomenon of “deciduousness” is more accessible to us because we can see the trees shedding their leaves, but sap-coagulation as an explanatory principle is more knowable in itself because it embodies the cause.  To know about sap-coagulation counts as an advance in knowledge; someone who knows this knows more than someone who only knows that trees shed their leaves in the fall.  Aristotle believes that the job of science is to put on display what best counts as knowledge, even if the resulting theory strays from our immediate perceptions and first concerns.

Jan Lukasiewicz, a modern-day pioneer in term logic, comments that “some queer philosophical prejudices which cannot be explained rationally” made early commentators claim that the major premise in a syllogism (the one with the middle and predicate terms) must be first.  (Aristotle’s Syllogistic, 32.)  But once we view the syllogism within the larger context of Aristotelian logic, it becomes perfectly obvious why these early commentators put the major premise first: because it constitutes the (ostensible) definition; because it contains an explanation of the nature of the thing upon which everything else depends.  The major premise in a scientific deduction is the most important part of the syllogism; it is scientifically prior in that it reveals the cause that motivates the phenomenon.  So it makes sense to place it first.  This was not an irrational prejudice.

13. Non-Discursive Reasoning

The distinction Aristotle draws between discursive knowledge (that is, knowledge through argument) and non-discursive knowledge (that is, knowledge through nous) is akin to the medieval distinction between ratio (argument) and intellectus (direct intellection).  In Aristotelian logic, non-discursive knowledge comes first and provides the starting points upon which discursive or argumentative knowledge depends.  It is hard to know what to call the mental power that gives rise to this type of knowledge in English.  The traditional term “intuition” invites misunderstanding.  When Aristotle claims that there is an immediate sort of knowledge that comes directly from the mind (nous) without discursive argument, he is not suggesting that knowledge can be accessed through vague feelings or hunches.  He is referring to a capacity for intelligent appraisal that might be better described as discernment, comprehension, or insight.  Like his later medieval followers, he views “intuition” as a species of reason; it is not prior to reason or outside of reason, it is—in the highest degree—the activity of reason itself.  (Cf. Posterior Analytics, II.19; Nicomachean Ethics, VI.6.)

For Aristotle, science is only one manifestation of human intelligence.  He includes, for example, intuition, craft, philosophical wisdom, and moral decision-making along with science in his account of the five intellectual virtues.  (Nicomachean Ethics, VI.3-8.)  When it comes to knowledge-acquisition, however, intuition is primary.  It includes the most basic operations of intelligence, providing the ultimate ground of understanding and inference upon which everything else depends.  Aristotle is a firm empiricist.  He believes that knowledge begins in perception, but he also believes that we need intuition to make sense of perception.  In the Posterior Analytics (II.19.100a3-10), Aristotle posits a sequence of steps in mental development: sense perception produces memory which (in combination with intuition) produces human experience (empeiria), which produces art and science.  Through a widening movement of understanding (really, a non-discursive form of induction), intuition transforms observation and memory so as to produce knowledge (without argument).  This intuitive knowledge is even more reliable than science.  As Aristotle writes in key passages at the end of the Posterior Analytics, “no other kind of thought except intuition is more accurate than scientific knowledge,” and “nothing except intuition can be truer than scientific knowledge.” (100b8ff, Mure, slightly emended.)

Aristotelian intuition supplies the first principles (archai) of human knowledge: concepts, universal propositions, definitions, the laws of logic, the primary principles of the specialized science, and even moral concepts such as the various virtues.  This is why, according to Aristotle, intuition must be viewed as infallible.  We cannot claim that the first principles of human intelligence are dubious and then turn around and use those principles to make authoritative claims about the possibility (or impossibility) of knowledge.  If we begin to doubt intuition, that is, human intelligence at its most fundamental level of operation, we will have to doubt everything else that is built upon this universal foundation: science, philosophy, knowledge, logic, inference, and so forth.  Aristotle never tries to prove first principles.  He acknowledges that when it comes to the origins of human thought, there is a point when one must simply stop asking questions.  As he points out, any attempt at absolute proof would lead to an infinite regress.  In his own words: “It is impossible that there should be demonstration of absolutely everything; there would be an infinite regress, so that there would still be no demonstration.” (Metaphysics, 1006a6ff, Ross.)  Aristotle does make arguments, for example, that meaningful speech presupposes a logical axiom like the principle of non-contradiction, but that is not, strictly speaking, a proof of the principle.

Needless to say, Aristotle’s reliance on intuition has provoked a good deal of scholarly disagreement.  Contemporary commentators such as Joseph Owens, G. E. L. Owen, and Terence Irwin have argued that Aristotelian first principles begin in dialectic.  On their influential account, we arrive at first principles through a weaker form of argument that revolves around a consideration of “endoxa,” the proverbial opinions of the many and/or the wise.  Robin Smith (and others) severely criticize their account.  The idea that mere opinion could somehow give rise to rigorous scientific knowledge conflicts with Aristotle’s settled view that less reliable knowledge cannot provide sufficient logical support for more reliable knowledge.  As we discuss below, endoxa do provide a starting point for dialectical (and ethical) arguments in Aristotle’s system.  They are, in his mind, a potent intellectual resource, a library of stored wisdom and right opinion.  They may include potent expressions of first principles already discovered by other thinkers and previous generations.  But as Aristotle makes clear at the end of the Posterior Analytics and elsewhere, the recognition that something is a first principle depends directly on intuition.  As he reaffirms in the Nicomachean Ethics, “it is intuitive reason that grasps the first principles.”  (VI.6.1141a7, Ross.)

If Irwin and his colleagues seek to limit the role of intuition in Aristotle, authors such as Lambertus Marie de Rijk and D. W. Hamlyn go to an opposite extreme, denying the importance of the inductive syllogism and identifying induction (epagoge) exclusively with intuition.  De Rijk claims that Aristotelian induction is “a pre-argumentation procedure consisting in . . . [a] disclosure [that] does not take place by a formal, discursive inference, but is, as it were, jumped upon by an intuitive act of knowledge.” (Semantics and Ontology, I.2.53, 141-2.)  Although this position seems extreme, it is hard to deny that the inductive syllogism depends on intuition, for without intuition (understood as intelligent discernment), one could not recognize the convertibility of subject and middle terms (discussed above).  Aristotle also points out that one needs intuition to recognize the (ostensible) definitions so crucial to the practice of Aristotelian science.  We must be able to discern the difference between accidental and necessary or essential properties before coming up with a definition.  This can only come about through some kind of direct (non-discursive) discernment.  Aristotle proposes a method for discovering definitions called division—we are to divide things into smaller and smaller sub-groups—but this method depends wholly on nous.  (Cf. Posterior Analytics, II.13.)  Some modern Empiricist commentators, embarrassed by such mystical-sounding doctrines, warn that this emphasis on non-discursive reasoning collapses into pure rationalism (or Platonism), but this is a caricature.  What Aristotle means by rational “intuition” is not a matter of pure, disembodied thought.  One does not arrive at first principles by closing one’s eyes and retreating from the world (as with Cartesian introspection).
For Aristotle, first principles arise through a vigorous interaction of the empirical with the rational; a combination of rationality and sense experience produces the first seeds of human understanding.

Note that Aristotle believes that there are first principles (koinai archai) that are common to all fields of inquiry, such as the principle of non-contradiction or the law of excluded middle, and that each specialized science has its own first principles.  We may recover these first principles second-hand by a (dialectical) review of authorities.  Or, we can derive them first hand by analysis, by dividing the subject matter we are concerned with into its constituent parts.  At the beginning of the Physics, Aristotle explains, “What is to us plain and obvious at first is rather confused masses, the elements and principles of which become known to us later by analysis. Thus we must advance from generalities to particulars; for it is a whole that is best known to sense-perception, and a generality is a kind of whole, comprehending many things within it, like parts.  . . .  Similarly a child begins by calling all men ‘father,’ and all women ‘mother,’ but later on distinguishes each of them.”  (I.1.184a22-184b14, Hardie, Gaye.)  Just as children learn to distinguish their parents from other human beings, those who successfully study a science learn to distinguish the different natural kinds that make up the whole of a scientific phenomenon.  This precedes the work of induction and deduction already discussed. Once we have the parts (or the aspects), we can reason about them scientifically.

14. Rhetoric

Argumentation theorists (less aptly characterized as informal logicians) have critiqued the ascendancy of formal logic, complaining that the contemporary penchant for symbolic logic leaves one with an abstract mathematics of empty signs that cannot be applied in any useful way to larger issues.  Proponents of formal logic counter that their specialized formalism allows for a degree of precision otherwise not available and that any focus on the substantive meaning or truth of propositions is a distraction from logical issues per se.  We cannot readily fit Aristotle into one camp or the other.  Although he does provide a formal analysis of the syllogism, he intends logic primarily as a means of acquiring true statements about the world.  He also engages in an enthusiastic investigation of less rigorous forms of reasoning included in the study of dialectic and rhetoric.

Understanding precisely what Aristotle means by the term “dialectics” (dialektike) is no easy task.  He seems to view it as the technical study of argument in general or perhaps as a more specialized investigation into argumentative dialogue.  He intends his rhetoric (rhetorike), which he describes as the counterpart to dialectic, as an expansive study of the art of persuasion, particularly as it is directed towards a non-academic public.  Suffice it to say, for our purposes, that Aristotle reserves a place in his logic for a general examination of all arguments, for scientific reasoning, for rhetoric, for debating techniques of various sorts, for jurisprudential pleading, for cross-examination, for moral reasoning, for analysis, and for non-discursive intuition.

Aristotle distinguishes between what I will call, for convenience, rigorous logic and persuasive logic.  Rigorous logic aims at episteme, true belief about what is eternal, necessary, universal, and unchanging.  (Aristotle sometimes qualifies this to include “for the most part” scientific knowledge.)  Persuasive logic aims at acceptable, probable, or convincing belief (what we might call “opinion” instead of knowledge).  It deals with approximate truth, with endoxa (popular or proverbial opinions), with reasoning that is acceptable to a particular audience, or with claims about accidental properties and contingent events.  Persuasive syllogisms have the same form as rigorous syllogisms but are understood as establishing their conclusions in a weaker manner.  As we have already seen, rigorous logic produces deductive and inductive syllogisms; Aristotle indicates that persuasive logic produces, in a parallel manner, enthymemes, analogies, and examples.  He defines an enthymeme as a deduction “concerned with things which may, generally speaking, be other than they are,” with matters that are “for the most part only generally true,” or with “probabilities and signs” (Rhetoric, I.2.1357a, Roberts).  He also mentions that the term “enthymeme” may refer to arguments with missing premises.  (Rhetoric, I.2.1357a16-22.)  When it comes to induction, Aristotle’s presentation is more complicated, but we can reconstruct what he means in a more straightforward manner.

The persuasive counterpart to the inductive syllogism is the analogy and the example, but the example is really a composite argument formed from first, an analogy and second, an enthymeme.  Some initial confusion is to be expected as Aristotle’s understanding of analogies differs somewhat from contemporary accounts.  In contemporary treatments, analogies depend on a direct object(s)-to-object(s) comparison.  Aristotelian analogy, on the other hand, involves reasoning up to a general principle.  We are to conclude (1) that because individual things of a certain nature X have property z, everything that possesses nature X has property z.  But once we know that every X possesses property z, we can make a deduction (2) that some new example of nature X will also have property z.  Aristotle calls (1), the inductive movement up to the generalization, an analogy (literally, an argument from likeness=ton homoion); he calls (2), the deductive movement down to a new case, an enthymeme; and he considers (1) + (2), the combination of the analogy and the enthymeme together, an example (paradeigma).  He presents the following argument from example in the Rhetoric (I.2.1357b31-1358a1).  Suppose we wish to argue that Dionysius, the ruler, is asking for a bodyguard in order to set himself up as despot.  We can establish this by a two-step process.  First, we can draw a damning analogy between previous cases where rulers asked for a bodyguard and induce a general rule about such practices.  We can insist that Peisistratus, Theagenes, and other known tyrants, were scheming to make themselves despots, that Peisistratus, Theagenes, and other known tyrants also asked for a bodyguard, and that therefore, everyone who asks for a bodyguard is scheming to make themselves dictators.  But once we have established this general rule, we can move on to the second step in our argument, using this conclusion as a premise in an enthymeme.
We can argue that all people asking for a bodyguard are scheming to make themselves despots, that Dionysius is someone asking for a bodyguard, and that therefore, Dionysius must be scheming to make himself despot.  This is not, in Aristotle’s mind, rigorous reasoning.  Nonetheless, we can, in this way, induce probable conclusions and then use them to deduce probable consequences.  Although these arguments are intended to be persuasive or plausible rather than scientific, the reasoning strategy mimics the inductive-deductive movement of science (without compelling, of course, the same degree of belief).
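The two-step structure of the bodyguard argument can be sketched as follows.  The data records and variable names here are our own illustrative inventions, not anything in Aristotle; the point is only to show the analogy (step 1) supplying the rule that the enthymeme (step 2) applies to the new case.

```python
# Illustrative sketch of Aristotle's "example" (paradeigma): an analogy up to
# a general rule, then an enthymeme down to a new case.  Data is hypothetical.
past_cases = [
    {"name": "Peisistratus", "asked_bodyguard": True, "schemed_despotism": True},
    {"name": "Theagenes", "asked_bodyguard": True, "schemed_despotism": True},
]

# Step 1 (analogy): the rule "all bodyguard-askers scheme for despotism" is
# consistent with every past case we have observed, so we induce it as
# (merely) probable.
rule_supported = all(
    c["schemed_despotism"] for c in past_cases if c["asked_bodyguard"]
)
assert rule_supported

# Step 2 (enthymeme): apply the induced rule to the new case.
dionysius_asks = True
dionysius_probably_scheming = rule_supported and dionysius_asks
assert dionysius_probably_scheming
```

The conclusion inherits the weakness of step 1: the induced rule is only as good as the handful of cases that support it, which is exactly why Aristotle classes this as persuasive rather than rigorous reasoning.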

We should point out that Aristotle does not restrict himself to a consideration of purely formal issues in his discussion of rhetoric.  He famously distinguishes, for example, between three means of persuasion: ethos, pathos, and logos.  As we read, at the beginning of his Rhetoric: “Of the modes of persuasion furnished by the spoken word there are three kinds. . . . [Firstly,] persuasion is achieved by the speaker's personal character when the speech is so spoken as to make us think him credible. . . . Secondly, persuasion may come through the hearers, when the speech stirs their emotions. . . . Thirdly, persuasion is effected through the speech itself when we have proved [the point] by means of the persuasive arguments suitable to the case in question.”  (Rhetoric, I.2.1356a2-21, Roberts.)  Aristotle concludes that effective arguers must (1) understand morality and be able to convince an audience that they themselves are good, trustworthy people worth listening to (ethos); (2) know the general causes of emotion and be able to elicit them from a specific audience (pathos); and (3) be able to use logical techniques to make convincing (not necessarily sound) arguments (logos).  Aristotle broaches many other issues we cannot enter into here.  He acknowledges that the goal of rhetoric is persuasion, not truth.  Such techniques may be bent to immoral or dishonest ends.  Nonetheless, he insists that it is in the public interest to provide a comprehensive and systematic survey of the field.

We might mention two other logical devices that have a place in Aristotle’s work: the topos and the aporia.  Unfortunately, Aristotle never explicitly explains what a topos is.  The English word “topic” does not do justice to the original notion, for although Aristotelian topoi may be organized around subject matter, they focus more precisely on recommended strategies for successful arguing.  (The technical term derives from a Greek word referring to a physical location.  Some scholars suggest a link to ancient mnemonic techniques that superimposed lists on familiar physical locations as a memory aid.)  In relevant discussions (in the Topics and the Rhetoric) Aristotle offers helpful advice about finding (or remembering) suitable premises, about verbally out-manoeuvring an opponent, about finding forceful analogies, and so on.  Examples of specific topoi would include discussions about how to argue which is the better of two alternatives, how to substitute terms effectively, how to address issues about genus and property, how to argue about cause and effect, how to conceive of sameness and difference, and so on.  Some commentators suggest that different topoi may have been used in a classroom situation in conjunction with student exercises and standardized texts, or with written lists of endoxa, or even with ready-made arguments that students were expected to memorize.

An aporia is a common device in Greek philosophy.  The Greek word aporia (plural, aporiai) refers to a physical location blocked off by obstacles where there is no way out; by extension, it means, in philosophy, a mental perplexity, an impasse, a paradox or puzzle that stoutly resists solution.  Aristotle famously suggests that philosophers begin with aporiai and complete their task by resolving the apparent paradoxes.  An attentive reader will uncover many aporiai in Aristotle who begins many of his treatises with a diaporia, a survey of the puzzles that occupied previous thinkers.  Note that aporiai cannot be solved through some mechanical rearrangement of symbolic terms.  Solving puzzles requires intelligence and discernment; it requires some creative insight into what is at stake.

15. Fallacies

In a short work entitled Sophistical Refutations, Aristotle introduces a theory of logical fallacies that has been remarkably influential.  His treatment is abbreviated and somewhat obscure, and there is inevitably scholarly disagreement about precise exegesis.  Aristotle thinks of fallacies as instances of specious reasoning; they are not merely errors but hidden errors.  A fallacy is an incorrect reasoning strategy that gives the illusion of being sound or somehow conceals the underlying problem.  Aristotle divides fallacies into two broad categories: those which depend on language (sometimes called verbal fallacies) and those that are independent of language (sometimes called material fallacies).  There is some scholarly disagreement about particular fallacies, but traditional English names and familiar descriptions follow.  Linguistic fallacies include: homonymy (verbal equivocation), ambiguity (amphiboly or grammatical equivocation), composition (confusing parts with a whole), division (confusing a whole with parts), accent (equivocation that arises out of mispronunciation or misplaced emphasis) and figure of speech (ambiguity resulting from the form of an expression).  Independent fallacies include accident (overlooking exceptions), converse accident (hasty generalization or improper qualification), irrelevant conclusion, affirming the consequent (assuming an effect guarantees the presence of one possible cause), begging the question (assuming the point), false cause, and complex question (disguising two or more questions as one).  Logicians, influenced by scholastic logic, often gave these characteristic mistakes Latin names: compositio for composition, divisio for division, secundum quid et simpliciter for converse accident, ignoratio elenchi for irrelevant conclusion, and petitio principii for begging the question.

Consider three brief examples of fallacies from Aristotle’s original text.  Aristotle formulates the following amphiboly (which admittedly sounds awkward in English): “I wish that you the enemy may capture.”  (Sophistical Refutations, 4.166a7-8, Pickard-Cambridge.)  Clearly, the grammatical structure of the statement leaves it ambiguous as to whether the speaker is hoping that the enemy or “you” be captured.  In discussing complex question, he supplies the following perplexing example: “Ought one to obey the wise or one’s father?”  (Ibid., 12.173a21.)  Obviously, from a Greek perspective, one ought to obey both.  The problem is that the question has been worded in such a way that anyone who answers will be forced to reject one moral duty in order to embrace the other.  In fact, there are two separate questions here—Should one obey the wise?  Should one obey one’s father?—that have been illegitimately combined to produce a single question with a single answer.  Finally, Aristotle provides the following time-honoured example of affirming the consequent: “Since after the rain the ground is wet, we suppose that if the ground is wet, it has been raining; whereas that does not necessarily follow”  (Ibid., 5.167b5-8.)  Aristotle’s point is that assuming that the same effect never has more than one cause misconstrues the true nature of the world.  The same effect may have several causes.  Many of Aristotle’s examples have to do with verbal tricks which are entirely unconvincing—for example, the person who commits the fallacy of division by arguing that the number “5” is both even and odd because it can be divided into an even and an odd number: “2” and “3.”  (Ibid., 4.166a32-33.)  But the interest here is theoretical: figuring out where an obviously-incorrect argument or proposition went wrong.  We should note that much of this text, which deals with natural language argumentation, does not presuppose the syllogistic form.  
Aristotle does spend a good bit of time considering how fallacies are related to one another.  Fallacy theory, it is worth adding, is a thriving area of research in contemporary argumentation theory.  Some of these issues are hotly debated.
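The invalidity of affirming the consequent, as in the rain example above, can be checked mechanically.  The following sketch (our own, not from Aristotle's text) enumerates all truth assignments and finds exactly the countermodel the example gestures at: the ground can be wet without it having rained.

```python
# Illustrative check: "if it rained, the ground is wet" does not yield
# "if the ground is wet, it rained".  We search for a countermodel.
from itertools import product

def implies(p, q):
    """Material conditional: p -> q."""
    return (not p) or q

# Collect all truth assignments where the premise holds but the converse fails.
countermodels = [
    (rain, wet)
    for rain, wet in product([True, False], repeat=2)
    if implies(rain, wet) and not implies(wet, rain)
]

# The single countermodel is rain=False, wet=True (some other cause wet the
# ground), which is exactly Aristotle's point: one effect, several causes.
assert countermodels == [(False, True)]
```

The same brute-force style can be used to verify any of the propositional fallacy patterns in this section, since each involves only a couple of atomic statements.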

16. Moral Reasoning

In the modern world, many philosophers have argued that morality is a matter of feelings, not reason.  Although Aristotle recognizes the conative (or emotional) side of morality, he takes a decidedly different tack.  As a virtue ethicist, he does not focus on moral law but views morality through the lens of character.  An ethical person develops a capacity for habitual decision-making that aims at good, reliable traits such as honesty, generosity, high-mindedness, and courage.  To modern ears, this may not sound like reason-at-work, but Aristotle argues that only human beings—that is, rational animals—are able to tell the difference between right and wrong.  He widens his account of rationality to include a notion of practical wisdom (phronesis), which he defines as “a true and reasoned state of capacity to act with regard to the things that are good or bad for man.”  (Nicomachean Ethics, VI.5.1140b4-5, Ross, Urmson).  The operation of practical wisdom, which is more about doing than thinking, displays an inductive-deductive pattern similar to science as represented in Figure 3.  It depends crucially on intuition or nous.  One induces the idea of specific virtues (largely, through an exercise of non-discursive reason) and then deduces how to apply these ideas to particular circumstances.  (Some scholars make a strict distinction between “virtue” (areté) understood as the mental capacity which induces moral ideas and “phronesis” understood as the mental capacity which applies these ideas, but the basic structure of moral thinking remains the same however strictly or loosely we define these two terms.)

Figure 3

The Inductive-Deductive Method of Aristotelian Ethics

We can distinguish then between moral induction and moral deduction.  In moral induction, we induce an idea of courage, honesty, loyalty, and so on.  We do this over time, beginning in our childhood, through habit and upbringing.  Aristotle writes that the successful moral agent “must be born with an eye, as it were, by which to judge rightly and choose what is truly good.”  (Ibid., VI.7.1114b6ff.)  Once this intuitive capacity for moral discernment has been sufficiently developed—once the moral eye is able to see the difference between right and wrong—we can apply moral norms to the concrete circumstances of our own lives.  In moral deduction, we go on to apply the idea of a specific virtue to a particular situation.  We do not do this by formulating moral arguments inside our heads, but by making reasonable decisions, by doing what is morally required given the circumstances.  Aristotle refers, in this connection, to the practical syllogism which results “in a conclusion which is an action.” (Movement of Animals, 701a10ff, Farquharson.)  Consider a (somewhat simplified) example.  Suppose I induce the idea of promise-keeping as a virtue and then apply it to the question of whether I should pay back the money I borrowed from my brother.  The corresponding theoretical syllogism would be:  “Promise-keeping is good; giving back the money I owe my brother is an instance of promise-keeping; so giving back the money I owe my brother is good.”  In the corresponding practical syllogism, I do not conclude with a statement:  “this act is good.”  I go out and pay back the money I owe my brother.  The physical exchange of money counts as the conclusion.  In Aristotle’s moral system, general moral principles play the role of an ostensible definition in science.  One induces a general principle and deduces a corresponding action.
Aristotle does believe that moral reasoning is a less rigorous form of reasoning than science, but chiefly because scientific demonstrations deal with universals whereas the practical syllogism ends in a single act that must be fitted to contingent circumstances.  There is never any suggestion that morality is somehow arbitrary or subjective.  One could set out the moral reasoning process using the moral equivalent of an inductive syllogism and a scientific demonstration.

Although Aristotle provides a logical blueprint for the kind of reasoning that is going on in ethical decision-making, he obviously does not view moral decision-making as any kind of mechanical or algorithmic procedure.  Moral induction and deduction represent, in simplified form, what is going on.  Throughout his ethics, Aristotle emphasizes the importance of context.  The practice of morality depends then on a faculty of keen discernment that notices, distinguishes, analyzes, appreciates, generalizes, evaluates, and ultimately decides.  In the Nicomachean Ethics, he includes practical wisdom in his list of five intellectual virtues.  (Scholarly commentators variously explicate the relationship between the moral and the intellectual virtues.)  Aristotle also discusses minor moral virtues such as good deliberation (eubulia), theoretical moral understanding (sunesis), and experienced moral judgement (gnome).  And he equates moral failure with chronic ignorance or, in the case of weakness of will (akrasia), with intermittent ignorance.

17. References and Further Reading

a. Primary Sources

  • Complete Works of Aristotle.  Edited by Jonathan Barnes.  Princeton, N.J.: Princeton University Press, 1984.
    • The standard scholarly collection of translations.
  • Aristotle in 23 Volumes.  Cambridge, MA: Harvard University Press; London: William Heinemann Ltd., 1944 and 1960.
    • A scholarly, bilingual edition.

b. Secondary Sources

This list is intended as a window on a diversity of approaches and problems.

  • Barnes, Jonathan.  (Aristotle) Posterior Analytics. Oxford: Clarendon Press; New York: Oxford University Press, 1994.
  • Biondi, Paolo.  Aristotle: Posterior Analytics II.19.  Quebec, Q.C.: Les Presses de l’Universite Laval, 2004.
  • Ebbesen, Sten, Commentators and Commentaries on Aristotle’s Sophistici Elenchi, Vol. 1: The Greek Tradition. Leiden: Brill, 1981.
  • Engberg-Pedersen, Troels.  “More on Aristotelian Epagoge.” Phronesis, 24 (1979): 301-319.
  • Englebretsen, George.  Three Logicians: Aristotle, Leibniz, and Sommers and the Syllogistic.  Assen, Netherlands: Van Gorcum, 1981.
    • See also Sommers, below.
  • Garrett, Dan, and Edward Barbanell.  Encyclopedia of Empiricism. Westport, Conn.: Greenwood Press, 1997.
  • Govier, Trudy.  Problems in Argument Analysis and Evaluation.  Providence, R.I.: Floris, 1987.
  • Groarke, Louis. “A Deductive Account of Induction,” Science et Esprit, 52 (December 2000), 353-369.
  • Groarke, Louis. An Aristotelian Account of Induction: Creating Something From Nothing.  Montreal & Kingston: McGill-Queen’s University Press, 2009.
  • Hamlyn, D. W.  Aristotle’s De Anima Books II and III.  Oxford: Clarendon Press, 1974.
  • Hamlyn, D. W. “Aristotelian Epagoge.”  Phronesis 21 (1976): 167-184.
  • Irwin, Terence.  Aristotle’s First Principles.  Oxford: Clarendon Press, 1988.
  • Keyt, David.  “Deductive Logic,” in A Companion to Aristotle, George Anaganostopoulos, London: Blackwell, 2009, pp. 31-50.
  • Łukasiewicz, Jan.  Aristotle's Syllogistic from the Standpoint of Modern Formal Logic. Oxford University Press, 1957.
  • McCaskey, John, “Freeing Aristotelian Epagôgê from Prior Analytics II 23,” Apeiron, 40:4 (December, 2007), pp. 345–74.
  • McKirahan, Richard Jr.  Principles and Proofs: Aristotle’s Theory of Demonstrative Species.  Princeton, N.J.: Princeton University Press, 1992.
  • Parry, William, and Edward Hacker. Aristotelian Logic. Albany, NY: State University of New York Press, 1991.
  • Peters, F. E., Greek Philosophical Terms: A Historical Lexicon.  New York: NYU Press, 1967.
  • Rijk, Lambertus Marie de.  Aristotle: Semantics and Ontology.  Boston, MA: Brill, 2002.
  • Smith, Robin.  “Aristotle on the Uses of Dialectic,” Synthese, Vol. 96, No. 3, 1993, 335-358.
  • Smith, Robin. Aristotle, Prior Analytics.  Indianapolis, IN: Hackett, 1989.
  • Smith, Robin. “Aristotle’s Logic,” Stanford Encyclopedia of Philosophy. E, Zalta. ed. Stanford, CA., 2000, 2007.
    • An excellent introduction to Aristotle’s logic (with a different focus).
  • Smith, Robin. “Aristotle’s Theory of Demonstration,” in A Companion to Aristotle, 52-65.
  • Sommers, Fred, and George Englebretsen, An Invitation to Formal Reasoning: The Logic of Terms. Aldershot UK: Ashgate, 2000.

 

Author Information

Louis F. Groarke
Email: lgroarke@stfx.ca
St. Francis Xavier University
Canada

Lambda Calculi

Lambda calculi (λ-calculi) are formal systems describing functions and function application. One of them, the untyped version, is often referred to as the λ-calculus. This exposition will adopt this convention. At its core, the λ-calculus is a formal language with certain reduction rules intended to capture the notion of function application [Church, 1932, p. 352]. Through the primitive notion of function application, the λ-calculus also serves as a prototypical programming language. The philosophical significance of the calculus comes from the expressive power of a seemingly simple formal system. The λ-calculus is in fact a family of formal systems, all stemming from the same common root. The variety and expressiveness of these calculi yield results in formal logic, recursive function theory, the foundations of mathematics, and programming language theory. After a discussion of the history of the λ-calculus, we will explore the expressiveness of the untyped λ-calculus. Then we will investigate the role of the untyped lambda calculus in providing a negative answer to Hilbert’s Entscheidungsproblem and in forming the Church-Turing thesis. After this, we will take a brief foray into typed λ-calculi and discuss the Curry-Howard isomorphism, which provides a correspondence between these calculi (which again are prototypical programming languages) and systems of natural deduction.

Table of Contents

  1. History
    1. Main Developments
    2. Notes
  2. The Untyped Lambda Calculus
    1. Syntax
    2. Substitution
    3. α-equivalence
    4. β-equivalence
    5. η-equivalence
    6. Notes
  3. Expressiveness
    1. Booleans
    2. Church Numerals
    3. Notes
  4. Philosophical Importance
    1. The Church-Turing Thesis
    2. An Answer to Hilbert’s Entscheidungsproblem
    3. Notes
  5. Types and Programming Languages
    1. Overview
    2. Notes
  6. References and Further Reading

1. History

a. Main Developments

Alonzo Church first introduced the λ-calculus as “A set of postulates for the foundation of logic” in two papers of that title published in 1932 and 1933. Church believed that “the entities of formal logic are abstractions, invented because of their use in describing and systematizing facts of experience or observation, and their properties, determined in rough outline by this intended use, depend for their exact character on the arbitrary choice of the inventor” [Church, 1932, p. 348]. The intended use of the formal system Church developed was, as mentioned in the introduction, function application. Intuitively, the expression (later called a term) λx.x² corresponds to an anonymous definition of the mathematical function f(x) = x². An “anonymous definition” of a function refers to a function defined without a name; in the current case, instead of defining a function “f”, an anonymous definition corresponds to the mathematician’s style of defining a function as a mapping, such as “x ↦ x²”. Do note that the operation of squaring is not yet explicitly defined in the λ-calculus. Later, it will be shown how it can be. For our present purposes, the use of squaring is pedagogical. By limiting the use of free variables and the law of excluded middle in his system in certain ways, Church hoped to escape paradoxes of transfinite set theory [Church, 1932, p. 346–8]. The original formal system of 1932–1933 turned out, however, not to be consistent. In it, Church defined many symbols besides function definition and application: a two-place predicate for extensional equality, an existential quantifier, negation, conjunction, and the unique solution of a function. In 1935, Church’s students Kleene and Rosser, using this full range of symbols and Gödel’s method of representing the syntax of a language numerically so that a system can express statements about itself, proved that any formula is provable in Church’s original system.
In 1935, Church isolated the portion of his formal system dealing solely with functions and proved the consistency of this system. One idiosyncratic feature of the system of 1935 was eliminated in Church’s 1936b paper, which introduced what is now known as the untyped λ-calculus. This 1936 paper also provides the first exposition of the Church-Turing thesis and a negative answer to Hilbert’s famous problem of determining whether a given formula in a formal system is provable. We will discuss these results later.

b. Notes

For a thorough history of the λ-calculus, including many modern developments, with a plethora of pointers to primary and secondary literature, see Cardone and Hindley [2006]. Church [1932] is very accessible and provides some insight into Church’s thinking about formal logic in general. Though the inconsistency proof of Kleene and Rosser [1935] is a formidable argument, the first page of their paper provides an overview of their proof strategy which should be comprehensible to anyone with some mathematical/logical background and a rough understanding of the arithmetization of syntax à la Gödel. According to Barendregt [1997], Church originally intended to use the notation x̂.x², with the hat over the x. His original typesetter could not typeset this, and so moved the hat in front of the x, which another typesetter then read as λx.x². In an unpublished letter, Church writes that he placed the hat in front, not a typesetter. According to Cardone and Hindley [2006, p. 7], Church later contradicted his earlier account, telling enquirers that he was in need of a symbol and just happened to pick λ.

2. The Untyped Lambda Calculus

At its most basic level, the λ-calculus is a formal system with a concrete syntax and distinct reduction rules. In order to proceed properly, we must define the alphabet and syntax of our language and then the rules for forming and manipulating well-formed formulas in this language. In the process of this exposition, formal definitions will be given along with informal description of the intuitive meaning behind the expressions.

a. Syntax

The language of the λ-calculus consists of the symbols (, ), λ and a countably infinite set of variables x, y, z, … We use upper-case variables M, N, … to refer to formulas of the λ-calculus. These are not symbols in the calculus itself, but rather convenient metalinguistic abbreviations. The terms (formulas, expressions; these three will be used interchangeably) of the calculus are defined inductively as follows:
  • x is a term for any variable.
  • If M, N are terms, then (MN) is a term.
  • If x is a variable and M a term, then (λx.M) is a term.
The latter two rules of term formation carry most of the weight. The second bullet corresponds to function application. The last bullet is called “abstraction” and corresponds to function definition. Intuitively, the notation λx.M corresponds to the mathematical notation of an anonymous function x ↦ M. As you can see from this language definition, everything is a function. The functions in this language can “return” other functions and accept other functions as their input. For this reason, they are often referred to as higher-order functions or first-class functions in programming language theory. We provide a few examples of λ terms before examining evaluation rules and the expressive power of this simple language.
  • λx.x: this represents the identity function, which just returns its argument.
  • λf.(λx.(fx)): this function takes in two arguments and applies the first to the second.
  • λf.(λx.(f(xx)))(λx.(f(xx))): this term is called a “fixed-point combinator” and usually denoted by Y. We will discuss its importance later, but introduce it here to show how more complex terms can be built.
Though I referred to the second term above as accepting two arguments, this is actually not the case. Every function in the λ-calculus takes only one argument as its input (as can be seen by a quick examination of the language definition). We will see later, however, that this does not in any way restrict us and that we can often think of nested abstractions as accepting multiple arguments. Before we proceed, a few notational conventions should be mentioned:
  • We always drop the outermost parentheses. This was already adopted in the three examples above.
  • Application associates to the left: (MNP) is shorthand for ((MN)P).
  • Conversely, abstraction associates to the right: (λx.λy.M) is short for (λx.(λy.M))
  • We can drop λs from repeated abstractions: λx.λy.M can be written λxy.M and similarly for any length sequence of abstractions.

b. Substitution

Because it will play a pivotal role in determining the equivalence between λ-terms, we must define a notion of variable substitution. This depends on the notion of free and bound variables, which we will define only informally. In a term λx.P, x is a bound variable, as are similarly bound variables in the subterm P; variables which are not bound are free. In other words, free variables are those that occur outside the scope of a λ abstraction on that variable. A term containing no free variables is called a combinator. We can then define the substitution of N for x in term M, denoted M[x := N], as follows:
  • x[x := N] = N
  • y[x := N] = y (provided that x is a variable different than y)
  • (PQ)[x := N] = P[x := N]Q[x := N]
  • (λx.P)[x := N] = λx.P
  • (λy.P)[x := N] = λy.P[x := N] (provided that x is different than y and y is not free in N or x is not free in P)
Intuitively, these substitution rules allow us to replace all the free occurrences of a variable (x) with any term (N). We cannot replace the bound occurrences of the variable (or allow for a variable to become bound by a substitution, as accounted for in the last rule) because that would change the “meaning” of the term. The scare quotes are included because the formal semantics of the λ-calculus falls beyond the scope of the present article.
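The substitution rules transcribe almost literally into code. The sketch below (terms as nested tuples; the helper names free_vars, fresh, and subst are our own) additionally renames a bound variable when the side condition of the last rule would otherwise fail, which is the usual way of making substitution total.

```python
# A sketch of M[x := N] on terms written as nested tuples
# ('var', x), ('app', M, N), ('lam', x, M).
import itertools

def free_vars(t):
    tag = t[0]
    if tag == 'var':
        return {t[1]}
    if tag == 'app':
        return free_vars(t[1]) | free_vars(t[2])
    return free_vars(t[2]) - {t[1]}   # 'lam': the bound variable is not free

def fresh(avoid):
    # Pick a variable name not occurring in `avoid`.
    for name in ('v%d' % i for i in itertools.count()):
        if name not in avoid:
            return name

def subst(m, x, n):
    """Compute m[x := n], renaming bound variables to avoid capture."""
    tag = m[0]
    if tag == 'var':
        return n if m[1] == x else m                  # rules 1 and 2
    if tag == 'app':                                  # rule 3
        return ('app', subst(m[1], x, n), subst(m[2], x, n))
    y, body = m[1], m[2]
    if y == x:                                        # rule 4: x is bound here
        return m
    if y in free_vars(n):                             # rule 5's proviso fails:
        z = fresh(free_vars(n) | free_vars(body) | {x})
        body = subst(body, y, ('var', z))             # rename y to a fresh z
        y = z
    return ('lam', y, subst(body, x, n))              # rule 5

# (λy.x)[x := y]: the bound y is renamed first, so the free y is not captured:
print(subst(('lam', 'y', ('var', 'x')), 'x', ('var', 'y')))
```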

c. α-equivalence

The α-equivalence relation, denoted ≡α, captures the idea that two terms are equivalent up to a change of bound variables. For instance, though I referred to λx.x as “the” identity function earlier, we also want to think of λy.y, λz.z, et cetera as identity functions. We can define α-equivalence formally as follows:
  • x ≡α x
  • MN ≡α PQ if M ≡α P and N ≡α Q
  • λx.M ≡α λx.N (provided that M ≡α N)
  • λx.M ≡α λy.M[x := y] (provided that y is not free in M)
It will then follow as a theorem that ≡α is an equivalence relation. It should therefore be noted that what was defined earlier as a λ-term is in fact properly called a pre-term. Equipped with the notion of α-equivalence, we can then define a λ-term as the α-equivalence class of a pre-term. This allows us to say that λx.x and λy.y are in fact the same term since they are α-equivalent.
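One standard way to make this precise in code is to compare terms after replacing each bound variable by its distance to its binder (a de Bruijn-style encoding), so that the choice of bound names disappears. The sketch below is our own illustration, not a construction from the text; terms are the nested tuples ('var', x), ('app', M, N), ('lam', x, M).

```python
# Two terms are α-equivalent iff they have the same de Bruijn skeleton.
def db(t, env=()):
    tag = t[0]
    if tag == 'var':
        # A bound variable becomes its binding depth (innermost binder = 0);
        # a free variable keeps its name.
        return env.index(t[1]) if t[1] in env else t[1]
    if tag == 'app':
        return ('app', db(t[1], env), db(t[2], env))
    return ('lam', db(t[2], (t[1],) + env))   # prepend: handles shadowing

def alpha_eq(m, n):
    return db(m) == db(n)

# λx.x and λy.y are α-equivalent:
print(alpha_eq(('lam', 'x', ('var', 'x')), ('lam', 'y', ('var', 'y'))))  # True
```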

d. β-equivalence

β-reduction, denoted →β, is the principal rewriting rule in the λ-calculus. It captures the intuitive notion of function application. So, for instance, (λx.x)M →β M, which is what we would expect from an identity function. Formally, →β is the least relation on λ-terms satisfying:
  • If M →β N then λx.M →β λx.N
  • If M →β N then MZ →β NZ for any terms M, N, Z
  • If M →β N then ZM →β ZN for any terms M, N, Z
  • (λx.M)N →β M[x := N]
The first three conditions define what’s referred to as a compatible relation. →β is then the least compatible relation on terms satisfying the fourth condition, which encodes the notion of function application as variable substitution. We will use the symbol ↠β to denote the iterated (zero or more times) application of →β and =β to denote the smallest equivalence relation containing →β. Two terms M and N are said to be convertible when M =αβ N (=αβ refers to the union of =α and =β). A term of the form (λx.M)N is called a β-redex (or just a redex) because it can be reduced to M[x := N]. A term M is said to be in β-normal form (henceforth referred to simply as normal form) iff there is no term N such that M →β N. That is to say that a term M is in normal form iff M contains no β-redexes. A term M is called normalizing (or weakly normalizing) iff there is an N in normal form such that M ↠β N. A term M is strongly normalizing if every β-reduction sequence starting with M is finite. A term which is not normalizing is called divergent. Regarding β-reduction as the execution of a program, this definition says that divergent programs never terminate since normal form terms correspond to terminated programs. Although we do not prove the result here, it is worth noting that the untyped λ-calculus is neither normalizing nor strongly normalizing. For instance, Y as defined earlier does not normalize. The interested reader may try an exercise: show that Y →β Y1 →β Y2 →β Y3 →β …, where Yi ≠ Yj for all i ≠ j. Iterated β-reduction allows us to express functions of multiple variables in a language that only has one-argument functions. Consider the example before of the term λf.λx.(fx). Letting F, X denote arbitrary terms, we can evaluate:
(λf.(λx.(fx)))FX ↠β ((λx.(fx))[f := F])X = (λx.(Fx))X
β (Fx)[x := X]
= FX
This result is exactly what we expected. The process of treating a function of multiple arguments as iterated single-argument functions is generally referred to as Currying. Moreover, ↠β satisfies an important property known either as confluence or as the Church-Rosser property. For any term M, if M ↠β M1 and M ↠β M2, then there exists a term M3 such that M1 ↠β M3 and M2 ↠β M3. Although a proof of this statement requires machinery that lies beyond the scope of the present exposition, it is both an important property and one that is weaker than either normalization or strong normalization. In other words, although the untyped λ-calculus is neither normalizing nor strongly normalizing, it does satisfy the Church-Rosser property.
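A minimal sketch of β-reduction, reducing the leftmost-outermost redex at each step, is given below. It is our own illustration; the substitution here is deliberately naive and assumes bound and free variable names never clash, which holds for the Currying example above.

```python
# Leftmost-outermost (normal-order) β-reduction on terms written as
# nested tuples ('var', x), ('app', M, N), ('lam', x, M).
def subst(m, x, n):
    # Naive M[x := N]: assumes no variable capture can occur.
    tag = m[0]
    if tag == 'var':
        return n if m[1] == x else m
    if tag == 'app':
        return ('app', subst(m[1], x, n), subst(m[2], x, n))
    return m if m[1] == x else ('lam', m[1], subst(m[2], x, n))

def step(t):
    """One β-step at the leftmost-outermost redex, or None if t is in normal form."""
    if t[0] == 'app':
        if t[1][0] == 'lam':                      # (λx.M)N →β M[x := N]
            return subst(t[1][2], t[1][1], t[2])
        s = step(t[1])
        if s is not None:
            return ('app', s, t[2])
        s = step(t[2])
        return None if s is None else ('app', t[1], s)
    if t[0] == 'lam':
        s = step(t[2])
        return None if s is None else ('lam', t[1], s)
    return None                                   # a variable: no redex

def normalize(t, limit=1000):
    """Iterate →β; divergent terms have no normal form, hence the step limit."""
    for _ in range(limit):
        s = step(t)
        if s is None:
            return t
        t = s
    raise RuntimeError('no normal form found within limit')

# (λf.λx.fx)FX ↠β FX, the Currying example from the text:
term = ('app',
        ('app', ('lam', 'f', ('lam', 'x', ('app', ('var', 'f'), ('var', 'x')))),
         ('var', 'F')),
        ('var', 'X'))
print(normalize(term))   # ('app', ('var', 'F'), ('var', 'X'))
```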

e. η-equivalence

Another common notion of reduction is →η which is the least compatible relation (i.e. a relation satisfying the first three items in the definition of →β) such that
  • λx.Mx →η M (provided that x is not free in M)
As before, =η is the least equivalence relation containing →η. The notion of η-equivalence captures the idea of extensionality, which can be broadly stated as the fact that two objects (of whatever kind) are equal if they share all their external properties, even if they may differ on internal properties. (The notion of external and internal properties used here is meant to be intuitive only.) η-equivalence is a kind of extensionality statement for the untyped λ-calculus because it states that any function is equal to the function which takes an argument and applies the original function to this argument. In other words, two functions are equivalent when they yield the same output on the same input. This assumption is an extensionality one because it ignores any differences in how the two functions compute that output.

f. Notes

In its original formulation, Church allowed abstraction to occur only over free variables in the body of the function. While this makes intuitive sense (i.e. the body of the function should depend on its input), the requirement does not change any properties of the calculus and has generally been dropped. Barendregt [1984] is the most thorough work on the untyped λ-calculus. It is also very formal and potentially impenetrable to a new reader. Introductions with learning curves that are not quite as steep include Hindley and Seldin [2008] and Hankin [2004]. The first chapter of Sørensen and Urzyczyn [2006] contains a quick, concise tour through the material, with some intuitive motivation. Other less formal expositions include the entry on Wikipedia and assorted unpublished lecture notes to be found on the web. Not every aspect of the untyped λ-calculus was covered in this section. Moreover, notions such as η-equivalence and the Church-Rosser property were introduced only briefly. These are important philosophically. For more discussion and elaboration on the important technical results, the interested reader can learn more from the above cited sources. We also omit any discussion of the formal semantics of the untyped λ-calculus, originally developed in the 1960s by Dana Scott and Christopher Strachey. Again, Barendregt’s book contains a formal exposition of this material and extensive pointers to further reading on it.

3. Expressiveness

Though the language and reduction rules of the untyped λ-calculus may at first glance appear somewhat limiting, we will see in this section how expressive the calculus can be. Because everything is a function, our general strategy will be to define some particular terms to represent common objects like booleans and numbers and then show that certain other terms properly codify functions of booleans and numbers in the sense that they β-reduce as one would expect. Rather than continue with these general remarks, we will apply this methodology to the particular cases of booleans and natural numbers. Throughout this exposition, bold face will be used to name particular terms.

a. Booleans

We first introduce the basic definitions:
true = λxy.x
false = λxy.y
In other words, true is a function of two arguments which returns its first argument and false is one which returns the second argument. To see how these two basic definitions can be used to encode the behavior of the booleans, we first introduce the following definition:
ifthenelse = λxyz.xyz
The statement ifthenelse PQR should be read as “if P then Q else R”.  It should also be noted that ifthenelse PQR ↠β PQR; oftentimes, only this reduct is presented as standing for the linguistic expression quoted above. Note that ifthenelse actually captures the desired notion. When P = true, we have that
ifthenelse true QR ↠β true QR
= (λxy.x)QR
β (λy.Q)R
β Q
An identical argument will show that ifthenelse false QR ↠β R. The above considerations will also hold for any P such that P ↠β true or P ↠β false. We will now show how a particular boolean connective (and) can be encoded in the λ-calculus and then list some further examples, leaving it to the reader to verify their correctness. We define
and = λpq.pqp
To see this definition’s adequacy, we can manually check the four possible combinations of truth-values of the “propositional variables” p and q. Although any λ-term can serve as an argument for a boolean function, these functions encode the truth tables of the connectives in the sense that they behave as one would expect when acting on boolean expressions.
and true true = (λpq.pqp)(λxy.x)(λxy.x)
β (λq.pqp)[p := (λxy.x)](λxy.x)
= (λq.((λxy.x)q(λxy.x)))(λxy.x)
β ((λxy.x)q(λxy.x))[q := λxy.x]
= (λxy.x)(λxy.x)(λxy.x)
β λxy.x
= true
Similarly, we see that
and true false = (λpq.pqp)(λxy.x)(λxy.y)
β (λq.pqp)[p := (λxy.x)](λxy.y)
= (λq.((λxy.x)q(λxy.x)))(λxy.y)
β (λxy.x) (λxy.y)(λxy.x)
β λxy.y
= false
We leave it to the reader to verify that and false true ↠β false and that and false false ↠β false. Together, these results imply that our definition of and encodes the truth table for “and” just as we would expect it to do. The following list of connectives similarly encode their respective truth tables. Because we think it fruitful for the reader to work out the details to gain a fuller understanding, we simply list these without examples:
or = λpq.ppq
not = λpxy.pyx
xor = λpq.p(q false true)q
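Because Python functions are themselves first-class, these boolean encodings transcribe directly into Python lambdas. In the sketch below, the decoding helper as_python_bool is our own addition: it lets a Church boolean choose between the ordinary values True and False so we can inspect it.

```python
# Church booleans and connectives as Python lambdas.
true  = lambda x: lambda y: x        # λxy.x
false = lambda x: lambda y: y        # λxy.y

ifthenelse = lambda p: lambda q: lambda r: p(q)(r)   # λxyz.xyz
and_ = lambda p: lambda q: p(q)(p)                   # λpq.pqp
or_  = lambda p: lambda q: p(p)(q)                   # λpq.ppq
not_ = lambda p: lambda x: lambda y: p(y)(x)         # λpxy.pyx

def as_python_bool(b):
    # Decode a Church boolean by letting it select True or False.
    return b(True)(False)

print(as_python_bool(and_(true)(false)))   # False
print(as_python_bool(or_(true)(false)))    # True
print(as_python_bool(not_(false)))         # True
```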

b. Church Numerals

At the very end of his 1933 paper, Church introduced an encoding for numbers in the untyped λ-calculus that allows arithmetic to be carried out in this purely functional setting. Because the ramifications were not fully explored until his 1936 paper, we hold off on discussion of the import of this encoding and focus here on an exposition of the ability of the λ-calculus to compute functions of natural numbers. The numerals are defined as follows:
1 = λab.ab
2 = λab.a(ab)
3 = λab.a(a(ab))
Successor, addition, multiplication, predecessor, and subtraction can also be defined as follows:
succ = λnab.a(nab)
plus = λmnab.ma(nab)
mult = λmna.n(ma)
pred = λnab.n(λgh.h(ga))(λu.b)(λu.u)
minus = λmn.(n pred)m
As in the case of the booleans, an example of using the successor and addition functions will help show how these definitions functionally capture arithmetic.
succ 1 = (λnab.a(nab))(λab.ab)
β (λab.a(nab))[n := (λab.ab)]
= λab.a((λab.ab)ab)
(λab.ab)ab β ((λb.ab)[a := a])b
= (λb.ab)b
β ab
λab.a((λab.ab)ab) β λab.a(ab)
= 2
Similarly, an example of addition:
plus 1 2 = (λmnab.ma(nab))(λab.ab)(λab.a(ab))
(λmnab.ma(nab)) 1 2 β ((λnab.ma(nab))[m := λab.ab])2
= (λnab.(λab.ab)a(nab))2
β (λab.(λab.ab)a(nab))[n := λab.a(ab)]
= λab.(λab.ab)a((λab.a(ab))ab)
(λab.ab)a β λb.ab
(λab.a(ab))ab β (λb.a(ab))b
β a(ab)
λab.1a(2ab) β λab.a(a(ab))
= 3
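The numeral and arithmetic definitions likewise transcribe directly into Python lambdas. In the sketch below, the decoding helper to_int is our own addition: it applies a numeral to the ordinary successor function on integers to read off its value.

```python
# Church numerals and arithmetic as Python lambdas.
one   = lambda a: lambda b: a(b)           # λab.ab
two   = lambda a: lambda b: a(a(b))        # λab.a(ab)
three = lambda a: lambda b: a(a(a(b)))     # λab.a(a(ab))

succ = lambda n: lambda a: lambda b: a(n(a)(b))               # λnab.a(nab)
plus = lambda m: lambda n: lambda a: lambda b: m(a)(n(a)(b))  # λmnab.ma(nab)
mult = lambda m: lambda n: lambda a: n(m(a))                  # λmna.n(ma)

def to_int(n):
    # Decode a numeral: n iterations of +1 starting from 0.
    return n(lambda k: k + 1)(0)

print(to_int(succ(one)))          # 2
print(to_int(plus(one)(two)))     # 3
print(to_int(mult(two)(three)))   # 6
```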
Seeing as how these examples capture the appropriate function on natural numbers in terms of Church numerals, we can introduce the notion of λ-definability in the way that Church did in 1936. A function f of one natural number is said to be λ-definable if it is possible to find a term f such that if f(m) = r, then fm ↠β r, where m and r are the Church numerals of m and r. The arithmetical functions defined above clearly fit this description. The astute reader may have anticipated the result, proved in Kleene [1936], that the λ-definable functions coincide exactly with the partial recursive functions. This quite remarkable result encapsulates the arithmetical expressiveness of the untyped λ-calculus completely. While the import of the result will be discussed in section 4, I will briefly discuss how fixed-point combinators allow recursive functions to be encoded in the λ-calculus. Recall that a combinator is a term with no free variables. A combinator M is called a fixed-point combinator if for every term N, we have that MN =β N(MN). In particular, Y, as defined before, is a fixed-point combinator, since
YF ≡ (λf. (λx.f(xx))(λx.f(xx)))F
β (λx.F(xx))(λx.F(xx))
β F(xx)[x := λx.F(xx)]
= F((λx.F(xx))(λx.F(xx)))
=β F(YF)
Although the technical development of the result lies outside the scope of the present exposition, one must note that fixed-point combinators allow the minimization operator to be defined in the λ-calculus. Therefore, all (partial) recursive functions on natural numbers—not just the primitive recursive ones (which are a proper subset of the partial recursive functions)—can be encoded in the λ-calculus.
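Under Python's eager evaluation the term Y itself loops forever, but its η-expanded variant (often called Z) exhibits the same fixed-point property. The sketch below is our own illustration of the idea: it uses Z to define factorial without any explicit recursion.

```python
# Z = λf.(λx.f(λv.xxv))(λx.f(λv.xxv)), the eager-evaluation-friendly
# variant of Y.  The inner λv delays the self-application xx.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# A "recursive" definition with no self-reference: the function receives
# its own fixed point as the argument rec.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))

print(fact(5))   # 120
```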

c. Notes

In the exposition of booleans, connectives such as and, or, et cetera, were given in a very general formulation. They can also be defined more concretely in terms of true and false in a manner that may be closer to capturing their truth tables. For instance, one may define and = λpq.p(q true false) false and verify that it also captures the truth table properly even though it does not directly β-reduce to the definition presented above. In the 1933 paper, Church defined plus, mult, and minus somewhat differently than presented here. This original exposition used some of the idiosyncrasies of the early version of the λ-calculus that were dropped by the time of the 1936 paper. The predecessor function was originally defined by Kleene, in a story which will be discussed in section 4. In addition to the original Kleene paper, more modern expositions of the result that the partial recursive functions are co-extensive with the λ-definable functions can be found in Sørensen and Urzyczyn [2006, pp. 20–22] and Barendregt [1984, pp. 135–139], among others.

4. Philosophical Importance

The expressiveness of the λ-calculus has allowed it to play a critical role in debates at the foundations of mathematics, logic and programming languages. At the end of the introduction to his 1936 paper, Church writes that “The purpose of the present paper is to propose a definition of effective calculability which is thought to correspond satisfactorily to the somewhat vague intuitive notion in terms of which problems of this class are often stated, and to show, by means of an example, that not every problem of this class is solvable.” The first subsection below will explore the definition of effective calculability in terms of λ-definability. We will then explore the second purpose of the 1936 paper, namely that of proving a negative answer to Hilbert’s Entscheidungsproblem. In the course of both of these expositions, connections will be made with the work of other influential logicians (Gödel and Turing) of the time. Finally, we will conclude with a brief foray into more modern developments in typed λ-calculi and the correspondence between them and logical proofs, a correspondence often referred to as “proofs-as-programs” for reasons that will become clear.

a. The Church-Turing Thesis

In his 1936 paper, in a section entitled “The notion of effective calculability”, Church writes that “We now define the notion, already discussed, of an effectively calculable function of positive integers by identifying it with the notion of a recursive function of positive integers (or a λ-definable function of positive integers)” [Church, 1936b, p. 356, emphasis in original]. The most striking feature of this definition, which will be discussed in more depth in what follows, is that it defines an intuitive notion (effective calculability) with a formal notion. In this sense, some regard the thesis as more of an unverifiable hypothesis than a definition. In what follows, we will consider, in addition to the development of alternative formulations of the thesis, what qualifies as evidence in support of this thesis. In the preceding parts of Church’s paper, he claims to have shown that for any λ-definable function, there exists an algorithm for calculating its values. The idea of an algorithm existing to calculate the values of a function (F) plays a pivotal role in Church’s thesis. Church argues that an effectively calculable function of one natural number must have an algorithm, consisting in a series of expressions (“steps” in the algorithm) that lead to the calculated result. Then, because each of these expressions can be represented as a natural number using Gödel’s methodology, the functions that act on them to calculate F will be recursive and therefore F will be as well. In short, a function which has an algorithm that calculates its values will be effectively calculable and any effectively calculable function of one natural number has such an algorithm. Furthermore, if such an algorithm exists, it will be λ-definable (via the formal result that all recursive functions are λ-definable). Church offers a second notion of effective calculability in terms of provability within an arbitrary logical system. 
Because he can show that this notion also coincides with recursiveness, he concludes that “no more general definition of effective calculability than that proposed above can be obtained by either of the two methods which naturally suggest themselves” [Church, 1936b, p. 358]. The history of this particular version of the thesis is not entirely clear. Kleene allegedly realized how to define the predecessor function (as given in the previous section: Kleene had earlier defined the predecessor function using an encoding of numerals different from those of Church) while at the dentist for the removal of two wisdom teeth. For a first-hand account, see Crossley [1975, pp. 4–5]. According to Barendregt [1997, p. 186]:
After Kleene showed the solution to his teacher, Church remarked something like: “But then all intuitively computable functions must be lambda definable. In fact, lambda definability must coincide with intuitive computability.” Many years later—it was at the occasion of Robin Gandy’s 70th birthday, I believe—I heard Kleene say: “I would like to be able to say that, at the moment of discovering how to lambda define the predecessor function, I got the idea of Church’s Thesis. But I did not, Church did.”
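Kleene's pairing trick can be sketched in a modern functional style. The Python sketch below is illustrative only (the encoding and all names are mine, not Kleene's exact construction): a number n is represented as a Church numeral, and the predecessor is computed by stepping n times through pairs of the form (i, i − 1).

```python
# Church numerals: n is the function that applies f to x exactly n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

def to_int(n):
    """Decode a Church numeral to an ordinary Python int."""
    return n(lambda k: k + 1)(0)

# Church-encoded pairs and projections.
pair = lambda a: lambda b: lambda f: f(a)(b)
fst = lambda p: p(lambda a: lambda b: a)
snd = lambda p: p(lambda a: lambda b: b)

# One step maps (i, anything) to (i + 1, i); after n steps starting
# from (0, 0), the second component holds n - 1 (with pred 0 = 0).
step = lambda p: pair(succ(fst(p)))(fst(p))
pred = lambda n: snd(n(step)(pair(zero)(zero)))

three = succ(succ(succ(zero)))
print(to_int(pred(three)))  # 2
print(to_int(pred(zero)))   # 0
```

Every piece here is a pure one-argument function, so the definition stays entirely within the (untyped) λ-calculus; Python merely supplies the notation.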
Alan Turing, in his groundbreaking paper also published in 1936, developed the notion of computable numbers in terms of his computing machines. While Turing believes that any definition of an intuitive notion will be “rather unsatisfactory mathematically,” he frames the question in terms of computation as opposed to effective calculability: “The real question at issue is ‘what are the possible processes which can be carried out in computing a number?’ ” [Turing, 1937a, p. 249]. By analyzing the way that a computer (in fact, an idealized human computer) would use a piece of paper to carry out this process of computing a number, which he reduces to observing a finite number of squares on a one-dimensional paper and changing either one of the observed squares or another square within a small distance of them, Turing shows how this intuitive notion of computing can be captured by one of his machines. Thus, he claims that computability by Turing machines captures the intuitive notion of computability. In an appendix added to the paper, Turing also offers a sketch of a proof that λ-definability and his notion of computability capture the same class of functions. In a subsequent paper, he also suggests that “The identification of ‘effectively calculable’ functions with computable functions is possibly more convincing than an identification with the λ-definable or general recursive functions” [Turing, 1937b, p. 153]. While Turing may believe that his machines are closer to the intuitive notion of calculating a number, the fact that the three classes of functions are identical means they can be used interchangeably. This fact has often been considered to count as evidence that these formalisms do in fact capture an intuitive notion of calculability.
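Turing's machine model can be made concrete with a small simulator. The sketch below is a modernized illustration, not Turing's own formalism: the transition-table encoding, the sparse-tape representation, and all names are my own choices.

```python
# A minimal Turing-machine simulator. The transition table maps
# (state, symbol) -> (symbol_to_write, move, next_state),
# where move is "R" or "L" and "_" is the blank symbol.
def run(transitions, tape, state="q0", halt_state="halt", max_steps=10_000):
    cells = dict(enumerate(tape))  # sparse tape; unwritten cells are blank
    head = 0
    for _ in range(max_steps):
        if state == halt_state:
            break
        symbol = cells.get(head, "_")
        new_symbol, move, state = transitions[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Example machine: flip every bit, halting at the first blank square.
flip = {
    ("q0", "0"): ("1", "R", "q0"),
    ("q0", "1"): ("0", "R", "q0"),
    ("q0", "_"): ("_", "R", "halt"),
}
print(run(flip, "1011"))  # 0100
```

The essential features of Turing's analysis are visible even in this toy: at each step the machine observes a single square, may change it, and moves a bounded distance, with behavior determined entirely by a finite table.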
Because Turing’s analysis of the notion of computability makes essential reference to a human computer and that computer’s “states of mind”, the Church-Turing thesis, along with Turing’s result that there is a universal machine which can compute any function that any Turing machine can, has often been interpreted, in conjunction with the computational theory of mind, as having profound implications for the limits of human thought. For a detailed analysis of many misinterpretations of the thesis and what it does imply in the context of the nature of mind, see Copeland [2010].

b. An Answer to Hilbert’s Entscheidungsproblem

In a famous lecture delivered at the International Congress of Mathematicians in 1900, David Hilbert proposed 23 problems in need of solution. The second of these was to give a proof of the consistency of analysis without reducing it to a consistency proof of another system. Though much of Hilbert’s thought and the particular phrasing of this problem changed over the ensuing 30 years, the challenge to prove the consistency of certain formal systems remained. At a 1928 conference, Hilbert posed three problems that continued his general program. The third of these problems was the Entscheidungsproblem in the sense that Church used the term: “By the Entscheidungsproblem of a system of symbolic logic is here understood the problem to find an effective method by which, given any expression Q in the notation of the system, it can be determined whether or not Q is provable in the system” [Church, 1936a, p. 41]. In [Church, 1936b], Church used the λ-calculus to prove a negative solution to this problem for a wide and natural class of formal systems. The technical result proved in this paper is that it is undecidable, given two λ-terms A and B, whether A is convertible into B. Church argues that any system of logic that is strong enough to express a certain amount of arithmetic will be able to express the formula ψ(a, b), which encodes that a and b are the Gödel numbers of A and B such that A is convertible into B. If the Entscheidungsproblem could be solved for such a system, then it would be decidable for every a, b whether ψ(a, b) is provable; but this would provide a way to decide whether A is convertible into B for any terms A and B, contradicting Church’s already established result that this problem is not decidable. Given that we have just seen the equivalence between Turing machine computability and λ-definability, it should come as no surprise that Turing also provided a negative solution to the Entscheidungsproblem.
Turing proceeds by defining a formula of one variable which takes as input the complete description of a Turing machine and is provable if and only if the described machine ever halts. Therefore, if the Entscheidungsproblem can be solved, then it provides a method for proving whether or not a given machine halts. But Turing had shown earlier that this halting problem cannot be solved. These two results, in light of the Church-Turing Thesis, imply that there are functions of natural numbers that are not effectively calculable. It remains open, however, whether there are computing machines or physical processes that elude the limitations of Turing computability (equivalently, λ-definability) and whether or not the human brain may embody one such process. For a thorough discussion of computation in physical systems and the resulting implications for philosophy of mind, see Piccinini [2007].

c. Notes

Kleene [1952] offers (in chapters 12 and 13) a detailed account of the development of the Church-Turing thesis from a figure who was central to the logical developments involved; it also discusses Turing computability. Turing’s strategy of reducing the Entscheidungsproblem to the halting problem has been adopted by many researchers tackling other problems. The halting problem, as outlined above, takes an arbitrary machine and an arbitrary input. There are, of course, particular machines and particular inputs for which it can be decided whether the machine halts on that input. For instance, every finite-state automaton halts on every input.
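The closing observation can be made concrete: a deterministic finite automaton consumes exactly one transition per input symbol, so it halts after exactly as many steps as the input has symbols. A minimal Python sketch (the encoding and names are illustrative, not from any of the cited sources):

```python
# A DFA given as a transition table delta: (state, symbol) -> state.
# The loop runs once per input symbol, so termination is guaranteed.
def dfa_accepts(delta, start, accepting, string):
    state = start
    for symbol in string:      # one step per symbol: always halts
        state = delta[(state, symbol)]
    return state in accepting

# Example: accept binary strings containing an even number of 1s.
delta = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}
print(dfa_accepts(delta, "even", {"even"}, "1101"))  # False
print(dfa_accepts(delta, "even", {"even"}, "1001"))  # True
```

The contrast with the general halting problem is that here the bound on the number of steps is read off the input itself, whereas no computable bound exists for an arbitrary Turing machine.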

5. Types and Programming Languages

a. Overview

In 1940, Church gave a “formulation of the simple theory of types which incorporates certain features of λ-conversion” [Church, 1940, p. 56]. Though the history of simple types extends beyond the scope of this paper and Church’s original formulation extended beyond the syntax of the untyped λ-calculus (in a way somewhat similar to the original, inconsistent formulation), one can intuitively think of types as syntactic decorations that restrict the formation of terms so that functions may only be applied to appropriate arguments. In informal mathematics, for example, a function f : ℕ → ℕ cannot be applied to a fraction. We now define a set of types and a new notion of term that incorporates this idea. We have a set of base types, say σ1, σ2. (In Church’s formulation, ι and o were the only base types. In many programming languages, base types include things such as nat, bool, and so on. This set can be countably infinite for the purposes of proof, although no programming language could implement such a type system.) For any types α and β, α → β is also a type. (Church used the notation (βα), but the arrow is used in most modern expositions because it better captures the intuition of a function type.) In what follows, σ, τ, and any other lower-case Greek letters will range over types. We will write either e : τ or eτ (with τ as a superscript) to indicate that term e is of type τ. The simply typed terms can then be defined as follows:
  • If x is a variable and β a type, then x : β is a term.
  • If x : β is a variable and M : α is a term, then (λx:β. M) : β → α is a term.
  • If M : α → β and N : α are terms, then (M N) : β is a term.
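These formation rules can be sketched as a small type checker. The Python sketch below is illustrative only: the tuple-based term representation and all names are mine, not part of Church's system. Base types are strings, and ("->", a, b) stands for the function type a → b.

```python
# A sketch of a type checker for the three formation rules.
def typecheck(term, env=()):
    env = dict(env)
    kind = term[0]
    if kind == "var":                      # rule 1: typed variable
        _, name = term
        return env[name]
    if kind == "lam":                      # rule 2: abstraction
        _, name, arg_type, body = term
        body_type = typecheck(body, {**env, name: arg_type}.items())
        return ("->", arg_type, body_type)
    if kind == "app":                      # rule 3: application
        _, fun, arg = term
        fun_type = typecheck(fun, env.items())
        arg_type = typecheck(arg, env.items())
        if fun_type[0] != "->" or fun_type[1] != arg_type:
            raise TypeError("function applied to ill-typed argument")
        return fun_type[2]

# λx:σ. x has type σ -> σ.
identity = ("lam", "x", "sigma", ("var", "x"))
print(typecheck(identity))  # ('->', 'sigma', 'sigma')

# Self-application λx:σ. x x is rejected: σ is not a function type.
try:
    typecheck(("lam", "x", "sigma", ("app", ("var", "x"), ("var", "x"))))
except TypeError as e:
    print(e)
```

Rejecting self-application is exactly the restriction that rules out fixed-point combinators in the simply-typed setting, a point taken up below.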
While the first two rules can be seen as simply adding “syntactic decoration” to the original untyped terms, the third rule captures the idea of only applying functions to proper input. It can be read as saying: if M is a function from α to β and N is an α, then MN is a β. This formal system does not permit other application terms to be formed.

Note that the simply-typed λ-calculus is strongly normalizing, meaning that any well-typed term has a finite reduction sequence. It has been mentioned that λ-calculi are prototypical programming languages, so we can think of typed λ-calculi as typed programming languages. Seen in this light, the strong normalization property states that one cannot write a non-terminating program. For instance, the fixed-point combinator Y of the untyped λ-calculus has no simply-typed equivalent. Clearly, then, this is a somewhat restrictive language. Because of this property and the results in the preceding section, it follows that the simply-typed λ-calculus is not Turing complete; that is, there are λ-definable functions which are not definable in the simply-typed version.

Taking the analogy with programming languages literally, we can interpret a family of results collectively known as the Curry-Howard correspondence as providing a “proofs-as-programs” metaphor. (The correspondence is sometimes referred to as Curry-Howard-Lambek due to Lambek’s work expressing both the λ-calculus and natural deduction in a category-theoretic setting.) In the case of the simply-typed λ-calculus, the correspondence is between this calculus and the implicational fragment of intuitionistic natural deduction. One can readily see that simple types correspond directly to propositions in this fragment of propositional logic (modulo a translation of base types to propositional variables). The correspondence then states that a proposition is provable if and only if there is a λ-term with the type of the proposition.
The term then corresponds to a proof of the proposition. In the simply-typed case, the correspondence can be fleshed out as:
  λ-calculus        propositional logic
  type              proposition
  term              proof
  abstraction       arrow introduction
  application       modus ponens
  variables         assumptions
Table 1. Curry-Howard in the simply-typed λ-calculus.
For example, consider the intuitionistic tautology p → ¬¬p, where ¬p =df p → ⊥. While a full exposition of natural deduction for intuitionistic propositional logic lies beyond the present scope, observe that we have the following proof of p → ¬¬p = p → ((p → ⊥) → ⊥):

  p, p → ⊥ ⊢ p → ⊥              (assumption)
  p, p → ⊥ ⊢ p                  (assumption)
  p, p → ⊥ ⊢ ⊥                  (arrow elimination, from the two lines above)
  p ⊢ (p → ⊥) → ⊥               (arrow introduction)
  ⊢ p → ((p → ⊥) → ⊥)           (arrow introduction)

We can then follow this proof in order to construct a corresponding term in the simply-typed λ-calculus as follows:

  x : p, y : p → ⊥ ⊢ y : p → ⊥
  x : p, y : p → ⊥ ⊢ x : p
  x : p, y : p → ⊥ ⊢ yx : ⊥
  x : p ⊢ λy:p→⊥. yx : (p → ⊥) → ⊥
  ⊢ λx:p. λy:p→⊥. yx : p → ((p → ⊥) → ⊥)
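Read as a program, the proof term λx.λy.yx is an ordinary higher-order function. In the Python sketch below (all names are mine, and Python's type hints annotate rather than enforce the simple types), the type variable R stands in for ⊥; instantiating R at an ordinary type shows the term at work as the familiar "apply the continuation to x".

```python
from typing import Callable, TypeVar

P = TypeVar("P")
R = TypeVar("R")  # reading R as the empty type gives ⊥

# The proof term λx. λy. y x, with type P -> ((P -> R) -> R)
# mirroring the proposition p -> ((p -> ⊥) -> ⊥).
def proof(x: P) -> Callable[[Callable[[P], R]], R]:
    return lambda y: y(x)

print(proof(3)(lambda n: n + 1))  # 4
```

The program does nothing exotic; the point of the correspondence is precisely that this unremarkable function and the derivation above are the same object viewed two ways.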

As was mentioned, the Curry-Howard correspondence is not one correspondence, but rather a family. The main idea is that terms correspond to proofs of their type. In the way that we can extend logics beyond the implicational fragment of intuitionistic propositional logic to include things like connectives, negation, quantifiers, and higher-order logic, so too can we extend the syntax and type systems of λ-calculus. There are thus many variations of the proofs-as-programs correspondence. See the following notes section for many sources exploring the development of more complex correspondences. That programs represent proofs is more than an intellectual delight: it is the correspondence on which software verifiers and automated mathematical theorem provers (such as Automath, HOL, Isabelle, and Coq) are based. These tools both verify and help users generate proofs and have in fact been used to develop proofs of unproven conjectures.

b. Notes

For a thorough exposition of typed λ-calculi, starting with simple types and progressing through all faces of the “λ-cube”, see Barendregt [1992]. Sørensen and Urzyczyn [2006] explores the Curry-Howard correspondence between many different forms of logic and the corresponding type systems. Pierce [2002] focuses on the λ-calculus and its relation to programming languages, thoroughly developing calculi from the simple to the very complex, along with type checking and operational semantics for each.

6. References and Further Reading

  • Barendregt, Henk. 1984. The Lambda Calculus: Its Syntax and Semantics. Elsevier Science, Amsterdam.
  • Barendregt, Henk. 1992. Lambda Calculi with Types. Oxford University Press, Oxford.
  • Barendregt, Henk. 1997. “The Impact of the Lambda Calculus in Logic and Computer Science.” The Bulletin of Symbolic Logic 3(2):181–215.
  • Cardone, Felice and J. R. Hindley. 2006. “History of Lambda-Calculus and Combinatory Logic.” In Handbook of the History of Logic, vol. 5, eds. D. M. Gabbay and J. Woods. Elsevier.
  • Church, Alonzo. 1932. “A Set of Postulates for the Foundation of Logic (Part I).” Annals of Mathematics, 33(2):346–366.
  • Church, Alonzo. 1933. “A Set of Postulates for the Foundation of Logic (Part II).” Annals of Mathematics, 34(4):839–864.
  • Church, Alonzo. 1935. “A Proof of Freedom from Contradiction.” Proceedings of the National Academy of Sciences of the United States of America, 21(5):275–281.
  • Church, Alonzo. 1936a. “A Note on the Entscheidungsproblem.” Journal of Symbolic Logic, 1(1):40–41.
  • Church, Alonzo. 1936b. “An Unsolvable Problem of Elementary Number Theory.” American Journal of Mathematics, 58(2):345–363.
  • Church, Alonzo. 1940. “A Formulation of the Simple Theory of Types.” Journal of Symbolic Logic, 5(2):56–68.
  • Church, Alonzo. 1941. The Calculi of Lambda Conversion. Princeton University Press, Princeton, NJ.
  • Copeland, B. Jack. 2010. “The Church-Turing Thesis.” Stanford Encyclopedia of Philosophy.
  • Crossley, J. N. 1975. “Reminiscences of Logicians.” Algebra and Logic: Papers from the 1974 Summer Research Institute of the Australian Mathematical Society, Monash University, Australia, Lecture Notes in Mathematics, vol. 450, Springer, Berlin.
  • Hankin, Chris. 2004. An Introduction to Lambda Calculi for Computer Scientists. College Publications.
  • Hindley, J. Roger and Jonathan P. Seldin. 2008. Lambda-Calculus and Combinators: An Introduction. Cambridge University Press, Cambridge.
  • Kleene, S. C. 1936. “λ-definability and Recursiveness.” Duke Mathematical Journal, 2(2):340–353.
  • Kleene, S. C. 1952. Introduction to Metamathematics. D. Van Nostrand, New York.
  • Kleene, S. C. and J. B. Rosser. 1935. “The Inconsistency of Certain Formal Logics.” Annals of Mathematics, 36(3):630–636.
  • Piccinini, Gualtiero. 2007. “Computational Modelling vs. Computational Explanation: Is Everything a Turing Machine, and Does It Matter to the Philosophy of Mind?” Australasian Journal of Philosophy, 85(1):93–115.
  • Pierce, Benjamin C. 2002. Types and Programming Languages. MIT Press, Cambridge.
  • Sørensen, M. H. and P. Urzyczyn. 2006. Lectures on the Curry-Howard Isomorphism. Elsevier Science, Oxford.
  • Turing, Alan M. 1937a. “On Computable Numbers, with an Application to the Entscheidungsproblem.” Proceedings of the London Mathematical Society, 42(2):230–265.
  • Turing, Alan M. 1937b. “Computability and λ-Definability.” Journal of Symbolic Logic, 2(4): 153–163.

Author Information

Shane Steinert-Threlkeld
Email: shanest@stanford.edu
Stanford University
U. S. A.

Fallacies

Fallacies

A fallacy is a kind of error in reasoning. The alphabetical list below contains 209 names of the most common fallacies, and it provides brief explanations and examples of each of them. Fallacies should not be persuasive, but they often are. Fallacies may be created unintentionally, or they may be created intentionally in order to deceive other people. The vast majority of the commonly identified fallacies involve arguments, although some involve explanations, or definitions, or other products of reasoning. Sometimes the term "fallacy" is used even more broadly to indicate any false belief or cause of a false belief. The list below includes some fallacies of these sorts, but most are fallacies that involve kinds of errors made while arguing informally in natural language.

An informal fallacy is fallacious because of both its form and its content; the formal fallacies are fallacious only because of their logical form. For example, the informal fallacy of slippery slope has this form: Step 1 "leads to" step 2. Step 2 leads to step 3. Step 3 leads to ... until we reach an obviously unacceptable step, so step 1 is not acceptable. That form occurs in both good arguments and fallacious arguments. The quality of an argument of this form depends crucially on the probabilities that each step does lead to the next, and those probabilities involve the argument's content, not merely its form.

The discussion that precedes the long alphabetical list of fallacies begins with an account of the ways in which the term "fallacy" is vague. Attention then turns to the number of competing and overlapping ways to classify fallacies of argumentation. For pedagogical purposes, researchers in the field of fallacies disagree about the following topics: which name of a fallacy is more helpful to students' understanding; whether some fallacies should be de-emphasized in favor of others; and which is the best taxonomy of the fallacies. Researchers in the field are also deeply divided about how to define the term "fallacy" itself, how to define certain fallacies, and whether any theory of fallacies at all should be pursued if that theory's goal is to provide necessary and sufficient conditions for distinguishing between fallacious and non-fallacious reasoning generally. Analogously, there is doubt in the field of ethics regarding whether researchers should pursue the goal of providing necessary and sufficient conditions for distinguishing moral actions from immoral ones.

Table of Contents

  1. Introduction
  2. Taxonomy of Fallacies
  3. Pedagogy
  4. What is a fallacy?
  5. Other Controversies
  6. Partial List of Fallacies
  7. References and Further Reading

1. Introduction

The first known systematic study of fallacies was due to Aristotle in his De Sophisticis Elenchis (Sophistical Refutations), an appendix to the Topics. He listed thirteen types. After the Dark Ages, fallacies were again studied systematically in Medieval Europe. This is why so many fallacies have Latin names. The third major period of study of the fallacies began in the later twentieth century due to renewed interest from the disciplines of philosophy, logic, communication studies, rhetoric, psychology, and artificial intelligence.

The more frequent the error within public discussion and debate the more likely it is to have a name. That is one reason why there is no specific name for the fallacy of subtracting five from thirteen and concluding that the answer is seven, though the error is common.

The term "fallacy" is not a precise term. One reason is that it is ambiguous. It can refer either to (a) a kind of error in an argument, (b) a kind of error in reasoning (including arguments, definitions, explanations, and so forth), (c) a false belief, or (d) the cause of any of the previous errors including what are normally referred to as "rhetorical techniques." Philosophers who are researchers in fallacy theory prefer to emphasize (a), but their lead is often not followed in textbooks and public discussion.

Regarding (d), ill health, being a bigot, being hungry, being stupid, and being hypercritical of our enemies are all sources of error in reasoning, so they could qualify as fallacies of kind (d), but they are not included in the list below. On the other hand, wishful thinking, stereotyping, being superstitious, rationalizing, and having a poor sense of proportion are sources of error and are included in the list below, though they wouldn't be included in a list devoted only to faulty arguments. Thus there is a certain arbitrariness to what appears in lists such as this. What have been left off the list below are the following persuasive techniques commonly used to influence others and to cause errors in reasoning: apple polishing, assigning the burden of proof inappropriately, using propaganda techniques, ridiculing, being sarcastic, selecting terms with strong negative or positive associations, using innuendo, and weaseling. All of these techniques are worth knowing about if one wants to reason well.

In describing the fallacies below, the custom is followed of not distinguishing between a reasoner using a fallacy and the reasoning itself containing the fallacy.

Real arguments are often embedded within a very long discussion. Richard Whately, one of the greatest of the 19th century researchers into informal logic, wisely said, "A very long discussion is one of the most effective veils of Fallacy; ...a Fallacy, which when stated barely...would not deceive a child, may deceive half the world if diluted in a quarto volume."

2. Taxonomy of Fallacies

There are a number of competing and overlapping ways to classify fallacies of argumentation. For example, they can be classified as either formal or informal. A formal fallacy can be detected by examining the logical form of the reasoning, whereas an informal fallacy depends upon the content of the reasoning and possibly the purpose of the reasoning. That is, informal fallacies are errors of reasoning that cannot easily be expressed in our system of formal logic (such as symbolic, deductive, predicate logic). The list below contains very few formal fallacies. Fallacious arguments also can be classified as deductive or inductive, depending upon whether the fallacious argument is most properly assessed by deductive standards or instead by inductive standards. Deductive standards demand deductive validity, but inductive standards require inductive strength such as making the conclusion more likely. Fallacies can be divided into categories according to the psychological factors that lead people to use them, and they can also be divided into categories according to the epistemological or logical factors that cause the error. In the latter division there are three categories: (1) the reasoning is invalid but is presented as if it were a valid argument, or else it is inductively much weaker than it is presented as being, (2) the argument has an unjustified premise, or (3) some relevant evidence has been ignored or suppressed. Regarding (2), a premise can be justified or warranted at a time even if we later learn that the premise was false, and it can be justified if we are reasoning about what would have happened even when we know it didn't happen.

Similar fallacies are often grouped together under a common name intended to bring out how the fallacies are similar. Here are three examples. Fallacies of relevance include fallacies that occur due to reliance on an irrelevant reason; ad hominem, appeal to pity, and appeal to the people are some fallacies of relevance. Accent, amphiboly, and equivocation are examples of fallacies of ambiguity. The fallacies of illegitimate presumption include begging the question, false dilemma, no true Scotsman, complex question, and suppressed evidence. Notice how these groupings don't fall neatly into just one of the categories (1), (2), and (3) above.

3. Pedagogy

It is commonly claimed that giving a fallacy a name and studying it will help the student identify the fallacy in the future and will steer them away from using the fallacy in their own reasoning. As Steven Pinker says in The Stuff of Thought (p. 129),

If a language provides a label for a complex concept, that could make it easier to think about the concept, because the mind can handle it as a single package when juggling a set of ideas, rather than having to keep each of its components in the air separately. It can also give a concept an additional label in long-term memory, making it more easily retrievable than ineffable concepts or those with more roundabout verbal descriptions.

For pedagogical purposes, researchers in the field of fallacies disagree about the following topics: which name of a fallacy is more helpful to students' understanding; whether some fallacies should be de-emphasized in favor of others; and which is the best taxonomy of the fallacies. Fallacy theory is criticized by some teachers of informal reasoning for its over-emphasis on poor reasoning rather than good reasoning. Do colleges teach the Calculus by emphasizing all the ways one can make mathematical mistakes? The critics want more emphasis on the forms of good arguments and on the implicit rules that govern proper discussion designed to resolve a difference of opinion. But there has been little systematic study of which emphasis is more successful.

4. What is a fallacy?

Researchers disagree about how to define the very term "fallacy." Focusing just on fallacies in sense (a) above, namely fallacies of argumentation, some researchers define a fallacy as an argument that is deductively invalid or that has very little inductive strength. Because examples of false dilemma, inconsistent premises, and begging the question are valid arguments in this sense, this definition misses some standard fallacies. Other researchers say a fallacy is a mistake in an argument that arises from something other than merely false premises. But the false dilemma fallacy is due to false premises. Still other researchers define a fallacy as an argument that is not good. Good arguments are then defined as those that are deductively valid or inductively strong, and that contain only true, well-established premises, but are not question-begging. A complaint with this definition is that its requirement of truth would improperly lead to calling too much scientific reasoning fallacious; every time a new scientific discovery caused scientists to label a previously well-established claim as false, all the scientists who used that claim as a premise would become fallacious reasoners. This consequence of the definition is acceptable to some researchers but not to others. Because informal reasoning regularly deals with hypothetical reasoning and with premises for which there is great disagreement about whether they are true or false, many researchers would relax the requirement that every premise must be true. One widely accepted definition defines a fallacious argument as one that either is deductively invalid or is inductively very weak or contains an unjustified premise or that ignores relevant evidence that is available and that should be known by the arguer. Finally, yet another theory of fallacy says a fallacy is a failure to provide adequate proof for a belief, the failure being disguised to make the proof look adequate.

Other researchers recommend characterizing a fallacy as a violation of the norms of good reasoning, the rules of critical discussion, dispute resolution, and adequate communication. The difficulty with this approach is that there is so much disagreement about how to characterize these norms.

In addition, all the above definitions are often augmented with some remark to the effect that the fallacies are likely to persuade many reasoners. It is notoriously difficult to be very precise about this vague and subjective notion of being likely to persuade, and some researchers in fallacy theory have therefore recommended dropping the notion in favor of "can be used to persuade."

Some researchers complain that all the above definitions of fallacy are too broad and do not distinguish between mere blunders and actual fallacies, the more serious errors.

Researchers in the field are deeply divided, not only about how to define the term "fallacy" and how to define some of the individual fallacies, but also about whether any general theory of fallacies at all should be pursued if that theory's goal is to provide necessary and sufficient conditions for distinguishing between fallacious and non-fallacious reasoning generally. Analogously, there is doubt in the field of ethics whether researchers should pursue the goal of providing necessary and sufficient conditions for distinguishing moral actions from immoral ones.

5. Other Controversies

How do we defend the claim that an item of reasoning should be labeled as a particular fallacy? A major goal in the field of informal logic is to provide some criteria for each fallacy. Schwartz presents the challenge this way:

Fallacy labels have their use. But fallacy-label texts tend not to provide useful criteria for applying the labels. Take the so-called ad verecundiam fallacy, the fallacious appeal to authority. Just when is it committed? Some appeals to authority are fallacious; most are not. A fallacious one meets the following condition: The expertise of the putative authority, or the relevance of that expertise to the point at issue, are in question. But the hard work comes in judging and showing that this condition holds, and that is where the fallacy-label texts leave off. Or rather, when a text goes further, stating clear, precise, broadly applicable criteria for applying fallacy labels, it provides a critical instrument more fundamental than a taxonomy of fallacies and hence to that extent goes beyond the fallacy-label approach. The further it goes in this direction, the less it needs to emphasize or even to use fallacy labels. (Schwartz, 232)

The controversy here is the extent to which it is better to teach students what Schwartz calls "the critical instrument" than to teach the fallacy-label approach. Is the fallacy-label approach better for some kinds of fallacies than others? If so, which others?

Another controversy involves the relationship between the fields of logic and rhetoric. In the field of rhetoric, the primary goal is to persuade the audience. The audience is not going to be persuaded by an otherwise good argument with true premises unless they believe those premises are true. Philosophers tend to de-emphasize this difference between rhetoric and informal logic, and they concentrate on arguments that should fail to convince the ideally rational reasoner rather than on arguments that are likely not to convince audiences who hold certain background beliefs. Given specific pedagogical goals, how pedagogically effective is this de-emphasis?

Advertising in magazines and on television is designed to achieve visual persuasion. And a hug or the fanning of fumes from freshly baked donuts out onto the sidewalk are occasionally used for visceral persuasion. There is some controversy among researchers in informal logic as to whether the reasoning involved in this nonverbal persuasion can always be assessed properly by the same standards that are used for verbal reasoning.

6. Partial List of Fallacies

Consulting the list below will give a general idea of the kind of error involved in passages to which the fallacy name is applied. However, simply applying the fallacy name to a passage cannot substitute for a detailed examination of the passage and its context or circumstances because there are many instances of reasoning to which a fallacy name might seem to apply, yet, on further examination, it is found that in these circumstances the reasoning is really not fallacious.

Abusive Ad Hominem

See Ad Hominem.

Accent

The accent fallacy is a fallacy of ambiguity due to the different ways a word is emphasized or accented.

Example:

A member of Congress is asked by a reporter if she is in favor of the President's new missile defense system, and she responds, "I'm in favor of a missile defense system that effectively defends America."

With an emphasis on the word "favor," her response is likely to favor the President's missile defense system. With an emphasis, instead, on the words "effectively defends," her remark is likely to be against the President's missile defense system. And by using neither emphasis, she can later claim that her response was on either side of the issue. Aristotle's version of the fallacy of accent allowed only a shift in which syllable is accented within a word.

Accident

We often arrive at a generalization but don't or can't list all the exceptions. When we reason with the generalization as if it has no exceptions, our reasoning contains the fallacy of accident. This fallacy is sometimes called the "fallacy of sweeping generalization."

Example:

People should keep their promises, right? I loaned Dwayne my knife, and he said he'd return it. Now he is refusing to give it back, but I need it right now to slash up my neighbors who disrespected me.

People should keep their promises, but there are exceptions to this generalization, as in this case of the psychopath who wants Dwayne to keep his promise to return the knife.

Ad Baculum

See Scare Tactic and Appeal to Emotions (Fear).

Ad Consequentiam

See Appeal to Consequence.

Ad Crumenum

See Appeal to Money.

Ad Hoc Rescue

Psychologically, it is understandable that you would try to rescue a cherished belief from trouble. When faced with conflicting data, you are likely to mention how the conflict will disappear if some new assumption is taken into account. However, if there is no good reason to accept this saving assumption other than that it works to save your cherished belief, your rescue is an ad hoc rescue.

Example:

Yolanda: If you take four of these tablets of vitamin C every day, you will never get a cold.

Juanita: I tried that last year for several months, and still got a cold.

Yolanda: Did you take the tablets every day?

Juanita: Yes.

Yolanda: Well, I'll bet you bought some bad tablets.

The burden of proof is definitely on Yolanda's shoulders to prove that Juanita's vitamin C tablets were probably "bad" -- that is, not really vitamin C. If Yolanda can't do so, her attempt to rescue her hypothesis (that vitamin C prevents colds) is simply a dogmatic refusal to face up to the possibility of being wrong.

Ad Hominem

Your reasoning contains this fallacy if you make an irrelevant attack on the arguer and suggest that this attack undermines the argument itself.

Example:

What she says about Johannes Kepler's astronomy of the 1600s must be just so much garbage. Do you realize she's only fifteen years old?

This attack may undermine the arguer's credibility as a scientific authority, but it does not undermine her reasoning itself because her age is irrelevant to the quality of the reasoning. That reasoning should stand or fall on the scientific evidence, not on the arguer's age or anything else about her personally. Reasoning having the ad hominem form is not always fallacious. That form is: "The reasoner said X, but the reasoner has unacceptable trait T, so X is not acceptable." The major difficulty with labeling a piece of reasoning of this form as an ad hominem fallacy is deciding whether the personal attack is relevant or irrelevant. For example, attacks on a person for their actually immoral sexual conduct are irrelevant to the quality of their mathematical reasoning, but they are relevant to arguments promoting the person for a leadership position in a church.

If the fallacious reasoner points out irrelevant circumstances that the reasoner is in, such as the arguer's having a vested interest in people accepting the position, then the ad hominem fallacy may be called a Circumstantial Ad Hominem. If the fallacious attack is on the arguer's associates, or ability or background or personal character, it may be called an Abusive Ad Hominem, although the attack on the arguer's associates is more commonly called Guilt by Association. If the fallacy is due to the origin of the arguer's views, it is a kind of Genetic Fallacy. If the fallacy is due to claiming the person does not practice what is preached, it is the Tu Quoque Fallacy. Two Wrongs Make a Right is another type of ad hominem fallacy.

Ad Hominem, Circumstantial

See Guilt by Association.

Ad Ignorantiam

See Appeal to Ignorance.

Ad Misericordiam

See Appeal to Emotions.

Ad Novitatem

See Bandwagon.

Ad Numerum

See Appeal to the People.

Ad Populum

See Appeal to the People.

Ad Verecundiam

See Appeal to Authority.

Affirming the Consequent

If you have enough evidence to affirm the consequent of a conditional and then suppose that as a result you have sufficient reason for affirming the antecedent, your reasoning contains the fallacy of affirming the consequent. This formal fallacy is often mistaken for modus ponens, which is a valid form of reasoning also using a conditional. A conditional is an if-then statement; the if-part is the antecedent, and the then-part is the consequent. The following argument affirms the consequent that she does speak Portuguese.

Example:

If she's Brazilian, then she speaks Portuguese. Hey, she does speak Portuguese. So, she is Brazilian.

If the arguer believes or suggests that the premises definitely establish that she is Brazilian, then the argumentation contains the fallacy. See the non sequitur fallacy for more discussion of this point.
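The contrast between the two argument forms can be displayed schematically, with P as the antecedent and Q as the consequent:

```latex
% Affirming the consequent (invalid) contrasted with modus ponens (valid)
\[
\text{Invalid: }
\frac{P \rightarrow Q \qquad Q}{\therefore\ P}
\qquad\qquad
\text{Valid (modus ponens): }
\frac{P \rightarrow Q \qquad P}{\therefore\ Q}
\]
```

In the example, P is "she's Brazilian" and Q is "she speaks Portuguese"; the premises leave open that she learned Portuguese some other way.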

Against the Person

See Ad Hominem.

All-or-Nothing

See Black-or-White Fallacy.

Ambiguity

Any fallacy that turns on ambiguity. See the fallacies of Amphiboly, Accent, and Equivocation.

Amphiboly

This is an error due to taking a grammatically ambiguous phrase in two different ways during the reasoning.

Example:

In a cartoon, two elephants are driving their car down the road in India. They say, "We'd better not get out here," as they pass a sign saying:

ELEPHANTS

PLEASE STAY IN YOUR CAR

Upon one interpretation of the grammar, the pronoun "YOUR" refers to the elephants in the car, but on another it refers to those humans who are driving cars in the vicinity. Unlike equivocation, which is due to multiple meanings of a phrase, amphiboly is due to syntactic ambiguity, ambiguity caused by multiple ways of understanding the grammar of the phrase.

Anecdotal Evidence

This is fallacious generalizing on the basis of a story that provides an inadequate sample. If you discount evidence arrived at by systematic search or by testing in favor of a few firsthand stories, then your reasoning contains the fallacy of overemphasizing anecdotal evidence.

Example:

Yeah, I've read the health warnings on those cigarette packs and I know about all that health research, but my brother smokes, and he says he's never been sick a day in his life, so I know smoking can't really hurt you.

Anthropomorphism

This is the error of projecting uniquely human qualities onto something that isn't human. Usually this occurs with projecting the human qualities onto animals, but when it is done to nonliving things, as in calling the storm cruel, the pathetic fallacy is created. It is also, though less commonly, called the Disney Fallacy or the Walt Disney Fallacy.

Example:

My dog is wagging his tail and running around me. Therefore, he knows that I love him.

The fallacy would be averted if the speaker had said "My dog is wagging his tail and running around me. Therefore, he is happy to see me." Animals are likely to have some human emotions, but not the ability to ascribe knowledge to other beings. Your dog knows where it buried its bone, but not that you also know where the bone is.

Appeal to Authority

You appeal to authority if you back up your reasoning by saying that it is supported by what some authority says on the subject. Most reasoning of this kind is not fallacious, and much of our knowledge properly comes from listening to authorities. However, appealing to authority as a reason to believe something is fallacious whenever the authority appealed to is not really an authority in this particular subject, when the authority cannot be trusted to tell the truth, when authorities disagree on this subject (except for the occasional lone wolf), when the reasoner misquotes the authority, and so forth. Although spotting a fallacious appeal to authority often requires some background knowledge about the subject or the authority, in brief it can be said that it is fallacious to accept the words of a supposed authority when we should be suspicious of the authority's words.

Example:

The moon is covered with dust because the president of our neighborhood association said so.

This is a fallacious appeal to authority because, although the president is an authority on many neighborhood matters, you are given no reason to believe the president is an authority on the composition of the moon. It would be better to appeal to some astronomer or geologist. A TV commercial that gives you a testimonial from a famous film star who wears a Wilson watch and that suggests you, too, should wear that brand of watch is using a fallacious appeal to authority. The film star is an authority on how to act, not on which watch is best for you.

Appeal to Consequence

Arguing that a belief is false because it implies something you'd rather not believe. Also called Argumentum Ad Consequentiam.

Example:

That can't be Senator Smith there in the videotape going into her apartment. If it were, he'd be a liar about not knowing her. He's not the kind of man who would lie. He's a member of my congregation.

Smith may or may not be the person in that videotape, but this kind of arguing should not convince us that it's someone else in the videotape.

Appeal to Emotions

Your reasoning contains the fallacy of appeal to emotions when someone's appeal to you to accept their claim is accepted merely because the appeal arouses your feelings of anger, fear, grief, love, outrage, pity, pride, sexuality, sympathy, relief, and so forth. Example of appeal to relief from grief:

[The speaker knows he is talking to an aggrieved person whose house is worth much more than $100,000.] You had a great job and didn't deserve to lose it. I wish I could help somehow. I do have one idea. Now your family needs financial security even more. You need cash. I can help you. Here is a check for $100,000. Just sign this standard sales agreement, and we can skip the realtors and all the headaches they would create at this critical time in your life.

There is nothing wrong with using emotions when you argue, but it's a mistake to use emotions as the key premises or as tools to downplay relevant information. Regarding the fallacy of appeal to pity, it is proper to pity people who have had misfortunes, but if as the person's history instructor you accept Max's claim that he earned an A on the history quiz because he broke his wrist while playing in your college's last basketball game, then you've used the fallacy of appeal to pity.

Appeal to Force

See Scare Tactic.

Appeal to Ignorance

The fallacy of appeal to ignorance comes in two forms: (1) Not knowing that a certain statement is true is taken to be a proof that it is false. (2) Not knowing that a statement is false is taken to be a proof that it is true. The fallacy occurs in cases where absence of evidence is not good enough evidence of absence. The fallacy uses an unjustified attempt to shift the burden of proof. The fallacy is also called "Argument from Ignorance."

Example:

Nobody has ever proved to me there's a God, so I know there is no God.

This kind of reasoning is generally fallacious. It would be proper reasoning only if the proof attempts were quite thorough, and it were the case that if God did exist, then there would be a discoverable proof of this. Another common example of the fallacy involves ignorance of a future event: People have been complaining about the danger of Xs ever since they were invented, but there's never been any big problem with them, so there's nothing to worry about.

Appeal to Money

The fallacy of appeal to money uses the error of supposing that, if something costs a great deal of money, then it must be better, or supposing that if someone has a great deal of money, then they're a better person in some way unrelated to having a great deal of money. Similarly it's a mistake to suppose that if something is cheap it must be of inferior quality, or to suppose that if someone is poor financially then they're poor at something unrelated to having money.

Example:

He's rich, so he should be the president of our Parents and Teachers Organization.

Appeal to Past Practice

See Appeal to the People.

Appeal to Pity

See Appeal to Emotions.

Appeal to Snobbery

See Appeal to Emotions.

Appeal to the Gallery

See Appeal to the People.

Appeal to the Mob

See Appeal to the People.

Appeal to the Masses

See Appeal to the People.

Appeal to the People

If you suggest too strongly that someone's claim or argument is correct simply because it's what most everyone believes, then your reasoning contains the fallacy of appeal to the people. Similarly, if you suggest too strongly that someone's claim or argument is mistaken simply because it's not what most everyone believes, then your reasoning also uses the fallacy. Agreement with popular opinion is not necessarily a reliable sign of truth, and deviation from popular opinion is not necessarily a reliable sign of error, but if you assume it is and do so with enthusiasm, then you are using this fallacy. It is essentially the same as the fallacies of ad numerum, appeal to the gallery, appeal to the masses, argument from popularity, argumentum ad populum, common practice, mob appeal, past practice, peer pressure, traditional wisdom. The "too strongly" mentioned above is important in the description of the fallacy because what most everyone believes is, for that reason, somewhat likely to be true, all things considered. However, the fallacy occurs when this degree of support is overestimated.

Example:

You should turn to channel 6. It's the most watched channel this year.

This is fallacious because it implicitly accepts the questionable premise that the most watched channel this year is, for that reason alone, the best channel for you. If you stress the idea of appealing to a new idea held by the gallery, masses, mob, peers, people, and so forth, then it is a bandwagon fallacy.

Appeal to the Stick

See Appeal to Emotions (fear).

Appeal to Unqualified Authority

See Appeal to Authority.

Appeal to Vanity

See Appeal to Emotions.

Argument from Ignorance

See Appeal to Ignorance.

Argument from Outrage

See Appeal to Emotions.

Argument from Popularity

See Appeal to the People.

Argumentum Ad ....

See Ad .... without the word "Argumentum."

Argumentum Consensus Gentium

See Appeal to Traditional Wisdom.

Avoiding the Issue

A reasoner who is supposed to address an issue but instead goes off on a tangent is properly accused of using the fallacy of avoiding the issue. Also called missing the point, straying off the subject, digressing, and not sticking to the issue.

Example:

A city official is charged with corruption for awarding contracts to his wife's consulting firm. In speaking to a reporter about why he is innocent, the city official talks only about his wife's conservative wardrobe, the family's lovable dog, and his own accomplishments in supporting Little League baseball.

However, the fallacy isn't used by a reasoner who says that some other issue must first be settled and then continues by talking about this other issue, provided the reasoner is correct in claiming this dependence of one issue on the other.

Avoiding the Question

The fallacy of avoiding the question is a type of fallacy of avoiding the issue that occurs when the issue is how to answer some question. The fallacy occurs when someone's answer doesn't really respond to the question asked.

Example:

Question: Would the Oakland Athletics be in first place if they were to win tomorrow's game?

Answer: What makes you think they'll ever win tomorrow's game?

Bald Man

See Line-Drawing.

Bandwagon

If you suggest that someone's claim is correct simply because it's what most everyone is coming to believe, then you're using the bandwagon fallacy. Get up here with us on the wagon where the band is playing, and go where we go, and don't think too much about the reasons. The Latin term for this fallacy of appeal to novelty is Argumentum ad Novitatem.

Example:

[Advertisement] More and more people are buying sports utility vehicles. It is time you bought one, too.

Like its close cousin, the fallacy of appeal to the people, the bandwagon fallacy needs to be carefully distinguished from properly defending a claim by pointing out that many people have studied the claim and have come to a reasoned conclusion that it is correct. What most everyone believes is likely to be true, all things considered, and if one defends a claim on those grounds, this is not a fallacious inference. What is fallacious is to be swept up by the excitement of a new idea or new fad and to unquestioningly give it too high a degree of your belief solely on the grounds of its new popularity, perhaps thinking simply that 'new is better.' The key ingredient that is missing from a bandwagon fallacy is knowledge that an item is popular because of its high quality.

Begging the Question

A form of circular reasoning in which a conclusion is derived from premises that presuppose the conclusion. Normally, the point of good reasoning is to start out at one place and end up somewhere new, namely having reached the goal of increasing the degree of reasonable belief in the conclusion. The point is to make progress, but in cases of begging the question there is no progress.

Example:

"Women have rights," said the Bullfighters Association president. "But women shouldn't fight bulls because a bullfighter is and should be a man."

The president is saying basically that women shouldn't fight bulls because women shouldn't fight bulls. This reasoning isn't making any progress.

Insofar as the conclusion of a deductively valid argument is "contained" in the premises from which it is deduced, this containing might seem to be a case of presupposing, and thus any deductively valid argument might seem to be begging the question. It is still an open question among logicians as to why some deductively valid arguments are considered to be begging the question and others are not. Some logicians suggest that, in informal reasoning with a deductively valid argument, if the conclusion is psychologically new insofar as the premises are concerned, then the argument isn't an example of the fallacy. Other logicians suggest that we need to look instead to surrounding circumstances, not to the psychology of the reasoner, in order to assess the quality of the argument. For example, we need to look to the reasons that the reasoner used to accept the premises. Was the premise justified on the basis of accepting the conclusion? A third group of logicians say that, in deciding whether the fallacy is present, more evidence is needed. We must determine whether any premise that is key to deducing the conclusion is adopted rather blindly or instead is a reasonable assumption made by someone accepting their burden of proof. The premise would here be termed reasonable if the arguer could defend it independently of accepting the conclusion that is at issue.

Beside the Point

Arguing for a conclusion that is not relevant to the current issue. Also called Irrelevant Conclusion. It is a form of the Red Herring Fallacy.

Biased Generalizing

Generalizing from a biased sample. Using an unrepresentative sample and overestimating the strength of an argument based on that sample.
See Unrepresentative Sample.

Biased Sample

See Unrepresentative Sample.

Biased Statistics

See Unrepresentative Sample.

Bifurcation

See Black-or-White.

Black-or-White

The black-or-white fallacy is a false dilemma fallacy that unfairly limits you to only two choices.

Example:

Well, it's time for a decision. Will you contribute $10 to our environmental fund, or are you on the side of environmental destruction?

A proper challenge to this fallacy could be to say, "I do want to prevent the destruction of our environment, but I don't want to give $10 to your fund. You are placing me between a rock and a hard place." The key to diagnosing the black-or-white fallacy is to determine whether the limited menu is fair or unfair. Simply saying, "Will you contribute $10 or won't you?" is not unfair. The black-or-white fallacy is often committed intentionally in jokes such as: "My toaster has two settings—burnt and off." In thinking about this kind of fallacy it is helpful to remember that everything is either black or not black, but not everything is either black or white.

Cherry-Picking the Evidence

This is another name for the Fallacy of Suppressed Evidence.

Circular Reasoning

Circular reasoning occurs when the reasoner begins with what he or she is trying to end up with. The most well known examples are cases of the fallacy of begging the question. However, if the circle is very much larger, including a wide variety of claims and a large set of related concepts, then the circular reasoning can be informative and so is not considered to be fallacious. For example, a dictionary contains a large circle of definitions that use words which are defined in terms of other words that are also defined in the dictionary. Because the dictionary is so informative, it is not considered as a whole to be fallacious. However, a small circle of definitions is considered to be fallacious.

Here is Steven Pinker's example:

Definition: endless loop, n. See loop, endless.

Definition: loop, endless, n. See endless loop.

In properly constructed recursive definitions, defining a term by using that term is not fallacious. For example, here is a recursive definition of "a stack of coins." Basis step: Two coins, with one on top of the other, is a stack of coins. Recursion step: If p is a stack of coins, then adding a coin on top of p produces a stack of coins. For additional difficulties in deciding whether an argument is deficient because it is circular, see Begging the Question. For a deeper discussion of circular reasoning see Infinitism in Epistemology.
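The recursive definition just given can be sketched in Python (a hypothetical illustration of our own; the representation of a pile as a list of coins is an assumption, not part of the text). The point it makes visible is why using the term being defined is not viciously circular here: each recursive step works on a strictly smaller pile, so the definition always bottoms out at the basis step.

```python
# Sketch of the recursive definition of "a stack of coins" from the text.
# A pile is modeled as a list whose elements are the string "coin";
# this modeling choice is illustrative, not from any standard source.

def is_stack_of_coins(pile):
    """Return True if `pile` satisfies the recursive definition."""
    # Basis step: two coins, with one on top of the other, is a stack of coins.
    if len(pile) == 2:
        return all(item == "coin" for item in pile)
    # Recursion step: if p is a stack of coins, then adding a coin
    # on top of p produces a stack of coins.
    if len(pile) > 2 and pile[-1] == "coin":
        return is_stack_of_coins(pile[:-1])
    # Anything else (empty pile, single coin, non-coin items) is not a stack.
    return False

print(is_stack_of_coins(["coin", "coin"]))          # True: basis step
print(is_stack_of_coins(["coin", "coin", "coin"]))  # True: one recursion step
print(is_stack_of_coins(["coin"]))                  # False: a single coin is not a stack
```

Unlike Pinker's endless loop, the recursion here terminates, which is what distinguishes a proper recursive definition from a fallaciously circular one.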

Circumstantial Ad Hominem

See Ad Hominem, Circumstantial.

Clouding the Issue

See Smokescreen.

Common Belief

See Appeal to the People and Traditional Wisdom.

Common Cause

This fallacy occurs during causal reasoning when a causal connection between two kinds of events is claimed when evidence is available indicating that both are the effect of a common cause.

Example:

Noting that the auto accident rate rises and falls with the rate of use of windshield wipers, one concludes that the use of wipers is somehow causing auto accidents.

However, it's the rain that's the common cause of both.

Common Practice

See Appeal to the People and Traditional Wisdom.

Complex Question

You use this fallacy when you frame a question so that some controversial presupposition is made by the wording of the question.

Example:

[Reporter's question] Mr. President: Are you going to continue your policy of wasting taxpayer's money on missile defense?

The question unfairly presumes the controversial claim that the policy really is a waste of money. The fallacy of complex question is a form of begging the question.

Composition

The composition fallacy occurs when someone mistakenly assumes that a characteristic of some or all the individuals in a group is also a characteristic of the group itself, the group "composed" of those members. It is the converse of the division fallacy.

Example:

Each human cell is very lightweight, so a human being composed of cells is also very lightweight.

Confirmation Bias

The tendency to look only for evidence in favor of one's controversial hypothesis and not to look for disconfirming evidence, or to pay insufficient attention to it. This is the most common kind of Fallacy of Selective Attention.

Example:

She loves me, and there are so many ways that she has shown it. When we signed the divorce papers in her lawyer's office, she wore my favorite color. When she slapped me at the bar and called me a "handsome pig," she used the word "handsome" when she didn't have to. When I called her and she said never to call her again, she first asked me how I was doing and whether my life had changed. When I suggested that we should have children in order to keep our marriage together, she laughed. If she can laugh with me, if she wants to know how I am doing and whether my life has changed, and if she calls me "handsome" and wears my favorite color on special occasions, then I know she really loves me.

Using the fallacy of confirmation bias is often a sign that one has adopted some belief dogmatically and isn't seriously setting about to confirm or disconfirm the belief.

Confusing an Explanation with an Excuse

Treating someone's explanation of a fact as if it were a justification of the fact. Explaining a crime should not be confused with excusing the crime, but it too often is.
Example:

Speaker: The German atrocities committed against the French and Belgians during World War I were in part due to the anger of German soldiers who learned that French and Belgian soldiers were ambushing German soldiers, shooting them in the back, or even poisoning, blinding and castrating them.

Respondent: I don't understand how you can be so insensitive as to condone those German atrocities.

Consensus Gentium

Fallacy of argumentum consensus gentium (argument from the consensus of the nations). See Traditional Wisdom.

Consequence

See Appeal to Consequence.

Converse Accident

If we reason by paying too much attention to exceptions to the rule, and generalize on the exceptions, our reasoning contains this fallacy. This fallacy is the converse of the accident fallacy. It is a kind of Hasty Generalization, by generalizing too quickly from a peculiar case.

Example:

I've heard that turtles live longer than tarantulas, but the one turtle I bought lived only two days. I bought it at Dowden's Pet Store. So, I think that turtles bought from pet stores do not live longer than tarantulas.

The original generalization is "Turtles live longer than tarantulas." There are exceptions, such as the turtle bought from the pet store. Rather than seeing this for what it is, namely an exception, the reasoner places too much trust in this exception and generalizes on it to produce the faulty generalization that turtles bought from pet stores do not live longer than tarantulas.

Cover-up

See Suppressed Evidence.

Cum Hoc, Ergo Propter Hoc

Latin for "with this, therefore because of this." This is a false cause fallacy that doesn't depend on time order (as does the post hoc fallacy), but on any other chance correlation of the supposed cause being in the presence of the supposed effect.

Example:

Gypsies live near our low-yield cornfields. So, gypsies are causing the low yield.

Definist

The definist fallacy occurs when someone unfairly defines a term so that a controversial position is made easier to defend. Same as the Persuasive Definition.

Example:

During a controversy about the truth or falsity of atheism, the fallacious reasoner says, "Let's define 'atheist' as someone who doesn't yet realize that God exists."

Denying the Antecedent

You are using this fallacy if you deny the antecedent of a conditional and then suppose that doing so is a sufficient reason for denying the consequent. This formal fallacy is often mistaken for modus tollens, a valid form of argument using the conditional. A conditional is an if-then statement; the if-part is the antecedent, and the then-part is the consequent.

Example:

If she were Brazilian, then she would know that Brazil's official language is Portuguese. She isn't Brazilian; she's from London. So, she surely doesn't know this about Brazil's language.
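As with affirming the consequent, the invalid form can be set beside the valid form it mimics:

```latex
% Denying the antecedent (invalid) contrasted with modus tollens (valid)
\[
\text{Invalid: }
\frac{P \rightarrow Q \qquad \neg P}{\therefore\ \neg Q}
\qquad\qquad
\text{Valid (modus tollens): }
\frac{P \rightarrow Q \qquad \neg Q}{\therefore\ \neg P}
\]
```

Here P is "she is Brazilian" and Q is "she knows Brazil's official language is Portuguese"; a Londoner may well know this fact about Brazil, so the premises do not establish the conclusion.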

Digression

See Avoiding the Issue.

Distraction

See Smokescreen.

Division

Merely because a group as a whole has a characteristic, it often doesn't follow that individuals in the group have that characteristic. If you suppose that it does follow, when it doesn't, your reasoning contains the fallacy of division. It is the converse of the composition fallacy.

Example:

Joshua's soccer team is the best in the division because it had an undefeated season and won the division title, so their goalie must be the best goalie in the division.

Domino

See Slippery Slope.

Double Standard

There are many situations in which you should judge two things or people by the same standard. If in one of those situations you use different standards for the two, your reasoning contains the fallacy of using a double standard.

Example:

I know we will hire any man who gets over a 70 percent on the screening test for hiring Post Office employees, but women should have to get an 80 to be hired because they often have to take care of their children.

This example is a fallacy if it can be presumed that men and women should have to meet the same standard for becoming a Post Office employee.

Either/Or

See Black-or-White.

Equivocation

Equivocation is the illegitimate switching of the meaning of a term during the reasoning.

Example:

Brad is a nobody, but since nobody is perfect, Brad must be perfect, too.

The term "nobody" changes its meaning without warning in the passage. So does the term "political jokes" in this joke: I don't approve of political jokes. I've seen too many of them get elected.

Etymological

The etymological fallacy occurs whenever someone falsely assumes that the meaning of a word can be discovered from its etymology or origins.

Example:

The word "vise" comes from the Latin "that which winds," so it means anything that winds. Since a hurricane winds around its own eye, it is a vise.

Every and All

The fallacy of every and all turns on errors due to the order or scope of the quantifiers "every" and "all" and "any." This is a version of the scope fallacy.

Example:

Every action of ours has some final end. So, there is some common final end to all our actions.

In proposing this fallacious argument, Aristotle believed the common end is the supreme good, so he had a rather optimistic outlook on the direction of history.
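The scope error can be displayed in quantifier notation (our formalization, with A(x, y) read as "action x has final end y"):

```latex
% The invalid quantifier shift behind the every-and-all fallacy
\[
\forall x\, \exists y\, A(x,y)
\quad \not\Rightarrow \quad
\exists y\, \forall x\, A(x,y)
\]
% Every action has some end or other; it does not follow that
% there is one single end shared by all actions.
```

Compare: everyone has a mother, but it does not follow that there is one person who is everyone's mother.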

Exaggeration

When we overstate or overemphasize a point that is a crucial step in a piece of reasoning, then we are guilty of the fallacy of exaggeration. This is a kind of error called Lack of Proportion.

Example:

She's practically admitted that she intentionally yelled at that student while on the playground in the fourth grade. That's verbal assault. Then she said nothing when the teacher asked, "Who did that?" That's lying, plain and simple. Do you want to elect as secretary of this club someone who is a known liar prone to assault? Doing so would be a disgrace to our Collie Club.

When we exaggerate in order to make a joke, though, we do not use the fallacy because we do not intend to be taken literally.

Excluded Middle

See False Dilemma or Black-or-White.

False Analogy

The problem is that the items in the analogy are too dissimilar. When reasoning by analogy, the fallacy occurs when the analogy is irrelevant or very weak or when there is a more relevant disanalogy. See also Faulty Comparison.

Example:

The book Investing for Dummies really helped me understand my finances better. The book Chess for Dummies was written by the same author, was published by the same press, and costs about the same amount. So, this chess book would probably help me understand my finances, too.

False Cause

Improperly concluding that one thing is a cause of another. The Fallacy of Non Causa Pro Causa is another name for this fallacy. Its four principal kinds are the Post Hoc Fallacy, the Fallacy of Cum Hoc, Ergo Propter Hoc, the Regression Fallacy, and the Fallacy of Reversing Causation.

Example:

My psychic adviser says to expect bad things when Mars is aligned with Jupiter. Tomorrow Mars will be aligned with Jupiter. So, if a dog were to bite me tomorrow, it would be because of the alignment of Mars with Jupiter.

False Dichotomy

See False Dilemma or Black-or-White.

False Dilemma

A reasoner who unfairly presents too few choices and then implies that a choice must be made among this short menu of choices is using the false dilemma fallacy, as does the person who accepts this faulty reasoning.

Example:

I want to go to Scotland from London. I overheard McTaggart say there are two roads to Scotland from London: the high road and the low road. I expect the high road would be too risky because it's through the hills and that means dangerous curves. But it's raining now, so both roads are probably slippery. I don't like either choice, but I guess I should take the low road and be safer.

This would be fine reasoning if you were limited to only two roads, but you have falsely gotten yourself into a dilemma with such reasoning. There are many other ways to get to Scotland. Don't limit yourself to these two choices. You can take other roads, or go by boat or train or airplane. The fallacy is called the "False Dichotomy Fallacy" when the unfair menu contains only two choices. Think of the unpleasant choice between the two as being a charging bull. By demanding other choices beyond those on the unfairly limited menu, you thereby "go between the horns" of the dilemma, and are not gored. For another example of the fallacy, see Black-or-White.

Far-Fetched Hypothesis

This is the fallacy of offering a bizarre (far-fetched) hypothesis as the correct explanation without first ruling out more mundane explanations.

Example:

Look at that mutilated cow in the field, and see that flattened grass. Aliens must have landed in a flying saucer and savaged the cow to learn more about the beings on our planet.

Faulty Comparison

If you try to make a point about something by comparison, and if you do so by comparing it with the wrong thing, then your reasoning uses the fallacy of faulty comparison or the fallacy of questionable analogy.

Example:

We gave half the members of the hiking club Durell hiking boots and the other half good-quality tennis shoes. After three months of hiking, you can see for yourself that Durell lasted longer. You, too, should use Durell when you need hiking boots.

Shouldn't Durell hiking boots be compared with other hiking boots, not with tennis shoes?

Faulty Generalization

A fallacy produced by some error in the process of generalizing. See Hasty Generalization or Unrepresentative Generalization for examples.

Formal

Formal fallacies are all the cases or kinds of reasoning that fail to be deductively valid. Formal fallacies are also called logical fallacies or invalidities.

Example:

Some cats are tigers. Some tigers are animals. So, some cats are animals.

This might at first seem to be a good argument, but actually it is fallacious because it has the same logical form as the following more obviously invalid argument:

Some women are Americans. Some Americans are men. So, some women are men.

Nearly all of the infinitely many types of invalid inference have no specific fallacy name.

Four Terms

The fallacy of four terms (quaternio terminorum) occurs when four rather than three categorical terms are used in a standard-form syllogism.

Example:

All rivers have banks. All banks have vaults. So, all rivers have vaults.

The word "banks" occurs as two distinct terms, namely river bank and financial bank, so this example also is an equivocation. Without an equivocation, the four term fallacy is trivially invalid.

Gambler's

This fallacy occurs when the gambler falsely assumes that the history of outcomes will affect future outcomes.

Example:

I know this is a fair coin, but it has come up heads five times in a row now, so tails is due on the next toss.

The fallacious move was to conclude that the probability of the next toss coming up tails must be more than a half. The assumption that it's a fair coin is important because, if the coin comes up heads five times in a row, one would otherwise become suspicious that it's not a fair coin and therefore properly conclude that heads is more likely on the next toss.
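A short simulation sketch (assuming a fair coin) bears this out: among runs that begin with five heads, the sixth toss comes up tails only about half the time, not more.

```python
import random

random.seed(42)

# Simulate many sequences of six fair-coin tosses. Among the sequences
# whose first five tosses are all heads, count how often toss six is tails.
runs_of_five_heads = 0
tails_after = 0
for _ in range(200_000):
    tosses = [random.random() < 0.5 for _ in range(6)]  # True = heads
    if all(tosses[:5]):
        runs_of_five_heads += 1
        if not tosses[5]:
            tails_after += 1

print(tails_after / runs_of_five_heads)  # close to 0.5, not above it
```

The history of five heads leaves the next-toss probability unchanged; only evidence that the coin is unfair could change it.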

Genetic

A critic uses the genetic fallacy if the critic attempts to discredit or support a claim or an argument because of its origin (genesis) when such an appeal to origins is irrelevant.

Example:

Whatever your reasons are for buying that DVD they've got to be ridiculous. You said yourself that you got the idea for buying it from last night's fortune cookie. Cookies can't think!

Fortune cookies are not reliable sources of information about what DVD to buy, but the reasons the person is willing to give are likely to be quite relevant and should be listened to. The speaker is using the genetic fallacy by paying too much attention to the genesis of the idea rather than to the reasons offered for it. An ad hominem fallacy is one kind of genetic fallacy, but the genetic fallacy in our passage isn't an ad hominem.

If I learn that your plan for building the shopping center next to the Johnson estate originated with Johnson himself, who is likely to profit from the deal, then my pointing out to the planning commission the origin of the deal would be relevant in their assessing your plan. Because not all appeals to origins are irrelevant, it sometimes can be difficult to decide if the fallacy has been used. For example, if Sigmund Freud shows that the genesis of a person's belief in God is their desire for a strong father figure, then does it follow that their belief in God is misplaced, or is Freud's reasoning using the genetic fallacy?

Group Think

A reasoner uses the group think fallacy if he or she substitutes pride of membership in the group for reasons to support the group's policy. If that's what our group thinks, then that's good enough for me. It's what I think, too. "Blind" patriotism is a rather nasty version of the fallacy.

Example:

We K-Mart employees know that K-Mart brand items are better than Wal-Mart brand items because, well, they are from K-Mart, aren't they?

Guilt by Association

Guilt by association is a version of the ad hominem fallacy in which a person is said to be guilty of error because of the group he or she associates with. The fallacy occurs when we unfairly try to change the issue to be about the speaker's circumstances rather than about the speaker's actual argument. Also called "Ad Hominem, Circumstantial."

Example:

Secretary of State Dean Acheson is too soft on communism, as you can see by his inviting so many fuzzy-headed liberals to his White House cocktail parties.

Has any evidence been presented here that Acheson's actions are inappropriate in regards to communism? This sort of reasoning is an example of McCarthyism, the technique of smearing liberal Democrats that was so effectively used by the late Senator Joe McCarthy in the early 1950s. In fact, Acheson was strongly anti-communist and the architect of President Truman's firm policy of containing Soviet power.

Hasty Conclusion

See Jumping to Conclusions.

Hasty Generalization

A hasty generalization is a fallacy of jumping to conclusions in which the conclusion is a generalization. See also Biased Statistics.

Example:

I've met two people in Nicaragua so far, and they were both nice to me. So, all people I will meet in Nicaragua will be nice to me.

In any hasty generalization the key error is to overestimate the strength of an argument that is based on too small a sample for the implied confidence level or error margin. In this argument about Nicaragua, using the word "all" in the conclusion implies zero error margin. With zero error margin you'd need to sample every single person in Nicaragua, not just two people.
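The relationship between sample size and error margin can be sketched with the common 1/√n rule of thumb for a 95% margin of error (a rough approximation introduced here for illustration, not part of the original passage):

```python
import math

def margin_of_error(n: int) -> float:
    """Rough 95% margin of error for a proportion estimated from a
    simple random sample of size n (the 1/sqrt(n) rule of thumb)."""
    return 1 / math.sqrt(n)

for n in [2, 100, 1_000, 10_000]:
    print(f"n = {n:>6}: margin of error ~ {margin_of_error(n):.1%}")
```

A sample of two carries a margin of error around seventy percentage points, which is why generalizing from two Nicaraguans to "all" is hasty.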

Heap

See Line-Drawing.

Hedging

You are hedging if you refine your claim simply to avoid counterevidence and then act as if your revised claim is the same as the original.

Example:

Samantha: David is a totally selfish person.

Yvonne: I thought he was a Boy Scout leader. Don’t you have to give a lot of your time for that?

Samantha: Well, David’s totally selfish about what he gives money to. He won’t spend a dime on anyone else.

Yvonne: I saw him bidding on things at the high school auction fundraiser.

Samantha: Well, except for that he’s totally selfish about money.

You do not use the fallacy if you explicitly accept the counterevidence, admit that your original claim is incorrect, and then revise it so that it avoids that counterevidence.

Hooded Man

This is an error in reasoning due to confusing the knowing of a thing with the knowing of it under all its various names or descriptions.

Example:

You claim to know Socrates, but you must be lying. You admitted you didn't know the hooded man over there in the corner, but the hooded man is Socrates.

Hyperbolic Discounting

The fallacy of hyperbolic discounting occurs when someone too heavily weighs the importance of a present reward over a significantly greater reward in the near future, but only slightly differs in their valuations of those two rewards if they are to be received in the far future. The person’s preferences are biased toward the present.

Example:

When asked to decide between receiving an award of $50 now or $60 tomorrow, the person chooses the $50; however, when asked to decide between receiving $50 in two years or $60 in two years and one day, the person chooses the $60.

If the person is in a situation in which $50 now will solve their problem but $60 tomorrow will not, then there is no fallacy in having a bias toward the present.
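The bias is often modeled with the hyperbolic discount curve V = A / (1 + kD), where A is the reward, D the delay, and k a discount rate. The sketch below uses an arbitrary assumed rate of k = 1 per day purely to make the preference reversal in the example vivid.

```python
def hyperbolic_value(amount: float, delay_days: float, k: float = 1.0) -> float:
    """Perceived present value under hyperbolic discounting:
    V = A / (1 + k * D). The rate k = 1/day is an assumed, illustrative value."""
    return amount / (1 + k * delay_days)

# $50 now is valued above $60 tomorrow...
print(hyperbolic_value(50, 0) > hyperbolic_value(60, 1))      # True
# ...yet $60 in two years and a day is valued above $50 in two years.
print(hyperbolic_value(60, 731) > hyperbolic_value(50, 730))  # True
```

The same pair of rewards flips in rank simply because both are pushed into the far future, which is exactly the pattern in the example.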

Hypostatization

The error of inappropriately treating an abstract term as if it were a concrete one.

Example:

Nature decides which organisms live and which die.

Nature isn't capable of making decisions. The point can be made without reasoning fallaciously by saying: "Which organisms live and which die is determined by natural causes."

Ignoratio Elenchi

See Irrelevant Conclusion. Also called missing the point.

Ignoring a Common Cause

See Common Cause.

Incomplete Evidence

See Suppressed Evidence.

Inconsistency

The fallacy occurs when we accept an inconsistent set of claims, that is, when we accept a claim that logically conflicts with other claims we hold.

Example:

I'm not racist. Some of my best friends are white. But I just don't think that white women love their babies as much as our women do.

That last remark implies the speaker is a racist, although the speaker doesn't notice the inconsistency.

Inductive Conversion

Improperly reasoning from a claim of the form "All As are Bs" to "All Bs are As" or from one of the form "Many As are Bs" to "Many Bs are As" and so forth.

Example:

Most professional basketball players are tall, so most tall people are professional basketball players.

The term "conversion" is a technical term in formal logic.
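A rough Bayes' theorem calculation shows why the conversion fails; all three input numbers below are made-up assumptions, used only to show the effect of very different base rates.

```python
# Illustrative (assumed) numbers: conversion fails because the two
# conditional probabilities rest on very different base rates.
p_player = 500 / 300_000_000   # assumed fraction of the population in the pros
p_tall_given_player = 0.95     # assumed: most professional players are tall
p_tall = 0.02                  # assumed fraction of the population that is tall

# Bayes' theorem: P(player | tall) = P(tall | player) * P(player) / P(tall)
p_player_given_tall = p_tall_given_player * p_player / p_tall
print(f"{p_player_given_tall:.6f}")  # tiny: almost no tall people are pros
```

"Most players are tall" can be true while "most tall people are players" is wildly false, because players are a minuscule slice of the tall population.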

Insufficient Statistics

Drawing a statistical conclusion from a set of data that is clearly too small.

Example:

A pollster interviews ten London voters in one building about which candidate for mayor they support, and upon finding that Churchill receives support from six of the ten, declares that Churchill has the majority support of London voters.

This fallacy is a form of the Fallacy of Jumping to Conclusions.

Intensional

The mistake of treating different descriptions or names of the same object as equivalent even in those contexts in which the differences between them matter. Reporting someone's beliefs or assertions or making claims about necessity or possibility can be such contexts. In these contexts, replacing a description with another that refers to the same object is not valid and may turn a true sentence into a false one.

Example:

Michelle said she wants to meet her new neighbor Stalnaker tonight. But I happen to know Stalnaker is a spy for North Korea, so Michelle said she wants to meet a spy for North Korea tonight.

Michelle said no such thing. The faulty reasoner illegitimately assumed that what is true of a person under one description will remain true when said of that person under a second description even in this context of indirect quotation. What was true of the person when described as “her new neighbor Stalnaker” is that Michelle said she wants to meet him, but it wasn’t legitimate for me to assume this is true of the same person when he is described as “a spy for North Korea.”

Extensional contexts are those in which it is legitimate to substitute equals for equals with no worry. But any context in which this substitution of co-referring terms is illegitimate is called an intensional context. Intensional contexts are produced by quotation, modality, and intentionality (propositional attitudes). Intensionality is failure of extensionality, thus the name “intensional fallacy”.

Invalid Reasoning

An invalid inference. An argument can be assessed by deductive standards to see if the conclusion would have to be true if the premises were to be true. If the argument cannot meet this standard, it is invalid. An argument is invalid only if it is not an instance of any valid argument form. The fallacy of invalid reasoning is a formal fallacy.

Example:

If it's raining, then there are clouds in the sky. It's not raining. Therefore, there are no clouds in the sky.

This invalid argument is an instance of denying the antecedent. Any invalid inference that is also inductively very weak is a non sequitur.

Irrelevant Conclusion

The conclusion that is drawn is irrelevant to the premises; it misses the point.

Example:

In court, Thompson testifies that the defendant is an honorable person who wouldn't harm a flea. The defense attorney uses the fallacy by rising to say that Thompson's testimony shows once again that his client was not near the murder scene.

The testimony of Thompson may be relevant to a request for leniency, but it is irrelevant to any claim about the defendant not being near the murder scene. Other examples of this fallacy are Ad Hominem, Appeal to Authority, Appeal to Emotions, and Argument from Ignorance.

Irrelevant Reason

This fallacy is a kind of non sequitur in which the premises are wholly irrelevant to drawing the conclusion.

Example:

Lao Tze Beer is the top selling beer in Thailand. So, it will be the best beer for Canadians.

Is-Ought

The is-ought fallacy occurs when a conclusion expressing what ought to be so is inferred from premises expressing only what is so, in which it is supposed that no implicit or explicit ought-premises are needed. There is controversy in the philosophical literature regarding whether this type of inference is always fallacious.

Example:

He's torturing the cat.

So, he shouldn't do that.

This argument would not use the fallacy if there were an implicit premise indicating that he is a person and persons shouldn't torture other beings.

Jumping to Conclusions

It is not always a mistake to make a quick decision, but when we draw a conclusion without taking the trouble to acquire enough of the relevant evidence, our reasoning uses the fallacy of jumping to conclusions, provided there was sufficient time to acquire and assess that extra evidence, and provided that the extra effort it takes to get the evidence isn't prohibitive.

Example:

This car is really cheap. I'll buy it.

Hold on. Before concluding that you should buy it, you ought to have someone check its operating condition, or else you should make sure you get a guarantee about the car's being in working order. And, if you stop to think about it, there may be other factors you should consider before making the purchase, such as size, appearance, and gas usage.

Lack of Proportion

Either exaggerating or downplaying a point that is a crucial step in a piece of reasoning is an example of the Fallacy of Lack of Proportion. It's a mistake of not adopting the proper perspective. An extreme form of downplaying occurs in the Fallacy of Suppressed Evidence.

Example:

Chandra just overheard the terrorists say that they are about to plant the bomb in the basement of the courthouse, after which they'll drive to the airport and get away. But they won't be taking along their cat. The poor cat. The first thing that Chandra and I should do is to call the Humane Society and check the "Cat Wanted" section of the local newspapers to see if we can find a proper home for the cat.

Line-Drawing

If we improperly reject a vague claim because it is not as precise as we'd like, then we are using the line-drawing fallacy. Being vague is not being hopelessly vague. Also called the Bald Man Fallacy, the Fallacy of the Heap and the Sorites Fallacy.

Example:

Dwayne can never grow bald. Dwayne isn't bald now. Don't you agree that if he loses one hair, that won't make him go from not bald to bald? And if he loses one hair after that, then this one loss, too, won't make him go from not bald to bald. Therefore, no matter how much hair he loses, he can't become bald.

Loaded Language

Loaded language is emotive terminology that expresses value judgments. When used in what appears to be an objective description, the terminology unfortunately can cause the listener to adopt those values when in fact no good reason has been given for doing so. Also called Prejudicial Language.

Example:

[News broadcast] In today's top stories, Senator Smith carelessly cast the deciding vote today to pass both the budget bill and the trailer bill to fund yet another excessive watchdog committee over coastal development.

This broadcast is an editorial posing as a news report.

Logic Chopping

Obscuring the issue by using overly-technical logic tools, especially the techniques of formal symbolic logic, that focus attention on trivial details. A form of Smokescreen and Quibbling.

Logical

See Formal.

Lying

A fallacy of reasoning that depends on intentionally saying something that is known to be false. If the lying occurs in an argument's premise, then it is an example of the fallacy of questionable premise.

Example:

Abraham Lincoln, Theodore Roosevelt, and John Kennedy were assassinated.

They were U.S. presidents.

Therefore, at least three U.S. presidents have been assassinated.

Roosevelt was never assassinated.

Maldistributed Middle

See Undistributed Middle.

Many Questions

See Complex Question.

Misconditionalization

See Modal Fallacy.

Misleading Vividness

When the fallacy of jumping to conclusions is due to a special emphasis on an anecdote or other piece of evidence, then the fallacy of misleading vividness has occurred.

Example:

Yes, I read the side of the cigarette pack about smoking being harmful to your health. That's the Surgeon General's opinion, him and all his statistics. But let me tell you about my uncle. Uncle Harry has smoked cigarettes for forty years now and he's never been sick a day in his life. He even won a ski race at Lake Tahoe in his age group last year. You should have seen him zip down the mountain. He smoked a cigarette during the award ceremony, and he had a broad smile on his face. I was really proud. I can still remember the cheering. Cigarette smoking can't be as harmful as people say.

The vivid anecdote is the story about Uncle Harry. Too much emphasis is placed on it and not enough on the statistics from the Surgeon General.

Misplaced Concreteness

Mistakenly supposing that something is a concrete object with independent existence, when it's not. Also known as the Fallacy of Reification.

Example:

There are two footballs lying on the floor of an otherwise empty room. When asked to count all the objects in the room, John says there are three: the two balls plus the group of two.

John mistakenly supposed a group or set of concrete objects is also a concrete object.

A less metaphysical example would be a situation where John says a criminal was caught by K-9 aid, and thereby supposed that K-9 aid was some sort of concrete object. John could have expressed the same point less misleadingly by saying a K-9 dog aided in catching a criminal.

Misrepresentation

If the misrepresentation occurs on purpose, then it is an example of lying. If the misrepresentation occurs during a debate in which there is misrepresentation of the opponent's claim, then it would be the cause of a straw man fallacy.

Missing the Point

See Irrelevant Conclusion.

Mob Appeal

See Appeal to the People.

Modal

This is the error of treating modal conditionals as if the modality applies only to the then-part of the conditional when it more properly applies to the entire conditional.

Example:

James has two children. If James has two children, then he necessarily has more than one child. So, it is necessarily true that James has more than one child.

This apparently valid argument is invalid. It is not necessarily true that James has more than one child; it's merely true that he has more than one child. He could have had no children. It is logically possible that James has no children even though he actually has two. The solution to the fallacy is to see that the premise "If James has two children, then he necessarily has more than one child," requires the modality "necessarily" to apply logically to the entire conditional "If James has two children, then he has more than one child" even though grammatically it applies only to "he has more than one child." The modal fallacy is the most well known of the infinitely many errors involving modal concepts. Modal concepts include necessity, possibility, and so forth.

Monte Carlo

See Gambler's Fallacy.

Name Calling

See Ad Hominem.

Naturalistic

On a broad interpretation of the fallacy, it is said to apply to any attempt to argue from an "is" to an "ought," that is, to argue directly from a list of facts to a claim about what ought to be done.

Example:

Owners of financially successful companies are more successful than poor people in the competition for wealth, power and social status. Therefore, these owners are morally better than poor people, and the poor deserve to be poor.

The fallacy would also occur if one argued from the natural to the moral as follows: since women are naturally capable of bearing and nursing children, they ought to be the primary caregivers of children. There is considerable disagreement among philosophers regarding what sorts of arguments the term "Naturalistic Fallacy" applies to, and even whether it is a fallacy at all.

Neglecting a Common Cause

See Common Cause.

No Middle Ground

See False Dilemma.

No True Scotsman

This error is a kind of ad hoc rescue of one's generalization in which the reasoner re-characterizes the situation solely in order to escape refutation of the generalization.

Example:

Smith: All Scotsmen are loyal and brave.

Jones: But McDougal over there is a Scotsman, and he was arrested by his commanding officer for running from the enemy.

Smith: Well, if that's right, it just shows that McDougal wasn't a TRUE Scotsman.

Non Causa Pro Causa

This label is Latin for mistaking the "non-cause for the cause." See False Cause.

Non Sequitur

When a conclusion is supported only by extremely weak reasons or by irrelevant reasons, the argument is fallacious and is said to be a non sequitur. However, we usually apply the term only when we cannot think of how to label the argument with a more specific fallacy name. Any deductively invalid inference is a non sequitur if it also very weak when assessed by inductive standards.

Example:

Nuclear disarmament is a risk, but everything in life involves a risk. Every time you drive in a car you are taking a risk. If you're willing to drive in a car, you should be willing to have disarmament.

The following is not an example: "If she committed the murder, then there'd be his blood stains on her hands. His blood stains are on her hands. So, she committed the murder." This deductively invalid argument uses the fallacy of affirming the consequent, but it isn't a non sequitur because it has significant inductive strength.

Obscurum per Obscurius

Explaining something obscure or mysterious by something that is even more obscure or more mysterious.

Example:

Let me explain what a lucky result is. It is a fortuitous collapse of the quantum mechanical wave packet that leads to a surprisingly pleasing result.

One-Sidedness

See the related fallacies of Confirmation Bias, Slanting, and Suppressed Evidence.

Opposition

Being opposed to someone's reasoning because of who they are, usually because of what group they are associated with. See the Fallacy of Guilt by Association.

Overgeneralization

See Sweeping Generalization.

Oversimplification

You oversimplify when you cover up relevant complexities or make a complicated problem appear to be too much simpler than it really is.

Example:

President Bush wants our country to trade with Fidel Castro's Communist Cuba. I say there should be a trade embargo against Cuba. The issue in our election is Cuban trade, and if you are against it, then you should vote for me for president.

Whom to vote for should be decided by considering quite a number of issues in addition to Cuban trade. When an oversimplification results in falsely implying that a minor causal factor is the major one, then the reasoning also uses the false cause fallacy.

Past Practice

See Traditional Wisdom.

Pathetic

The pathetic fallacy is a mistaken belief due to attributing peculiarly human qualities to inanimate objects (but not to animals). The fallacy is caused by anthropomorphism.

Example:

Aargh, it won't start again. This old car always breaks down on days when I have a job interview. It must be afraid that if I get a new job, then I'll be able to afford a replacement, so it doesn't want me to get to my interview on time.

Peer Pressure

See Appeal to the People.

Persuasive Definition

Some people try to win their arguments by getting you to accept their faulty definition. If you buy into their definition, they've practically persuaded you already. Same as the Definist Fallacy. Poisoning the Well when presenting a definition would be an example of using a persuasive definition.

Example:

Let's define a Democrat as a leftist who desires to overtax the corporations and abolish freedom in the economic sphere.

Perfectionist

If you remark that a proposal or claim should be rejected solely because it doesn't solve the problem perfectly, in cases where perfection isn't really required, then you've used the perfectionist fallacy.

Example:

You said hiring a house cleaner would solve our cleaning problems because we both have full-time jobs. Now, look what happened. Every week she unplugs the toaster oven and leaves it that way. I should never have listened to you about hiring a house cleaner.

Petitio Principii

See Begging the Question.

Poisoning the Well

Poisoning the well is a preemptive attack on a person in order to discredit their testimony or argument in advance of their giving it. A person who thereby becomes unreceptive to the testimony reasons fallaciously and has become a victim of the poisoner. This is a kind of ad hominem, circumstantial fallacy.

Example:

[Prosecuting attorney in court] When is the defense attorney planning to call that twice-convicted child molester, David Barnington, to the stand? OK, I'll rephrase that. When is the defense attorney planning to call David Barnington to the stand?

Post Hoc

Suppose we notice that an event of kind A is followed in time by an event of kind B, and then hastily leap to the conclusion that A caused B. If so, our reasoning contains the post hoc fallacy. Correlations are often good evidence of causal connection, so the fallacy occurs only when the leap to the causal conclusion is done "hastily." The Latin term for the fallacy is post hoc, ergo propter hoc ("After this, therefore because of this"). It is a kind of false cause fallacy.

Example:

I ate in that Ethiopian restaurant three days ago and now I've just gotten food poisoning. The only other time I've eaten in an Ethiopian restaurant I also got food poisoning, but that time I got sick a week later. My eating in those kinds of restaurants is causing my food poisoning.

Your background knowledge should tell you this is unlikely because the effects of food poisoning are felt soon after the food is eaten. Before believing your illness was caused by eating in an Ethiopian restaurant, you'd need to rule out other possibilities, such as your illness being caused by what you ate a few hours before the onset of the illness.

Prejudicial Language

See Loaded Language.

Proof Surrogate

Substituting a distracting comment for a real proof.

Example:

I don't need to tell a smart person like you that you should vote Republican.

This comment is trying to avoid a serious disagreement about whether one should vote Republican.

Prosecutor’s Fallacy

This is the mistake of over-emphasizing the strength of a piece of evidence while paying insufficient attention to the context.

Example:

Suppose a prosecutor is trying to gain a conviction and points to the evidence that at the scene of the burglary the police found a strand of the burglar’s hair. A forensic test showed that the burglar’s hair matches the suspect’s own hair. The forensic scientist testified that the chance of a randomly selected person producing such a match is only one in two thousand. The prosecutor concludes that the suspect has only a one in two thousand chance of being innocent. On the basis of only this evidence, the prosecutor asks the jury for a conviction.

That is fallacious reasoning, and if you are on the jury you should not be convinced. Here's why. The prosecutor paid insufficient attention to the pool of potential suspects. Suppose that pool has six million people who could have committed the crime, all other things being equal. If the forensic lab had tested all those people, they'd find that about one in every two thousand of them would have a hair match, which comes to about three thousand people. The suspect is just one of those 3,000, so the suspect is very probably innocent unless the prosecutor can provide more evidence. The prosecutor over-emphasized the strength of a piece of evidence by focusing on one suspect while paying insufficient attention to the context, which suggests a pool of many more suspects.
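The arithmetic in the passage above can be worked through directly, using the same two figures it states (a 1-in-2000 random-match probability and a pool of six million potential suspects):

```python
# Figures from the example: random-match probability and suspect pool size.
match_probability = 1 / 2000
pool = 6_000_000

# How many people in the pool would match the hair by chance alone?
expected_matches = pool * match_probability
print(expected_matches)        # 3000 people would match

# Given only the hair evidence, the suspect is one of roughly 3000
# matching people, so the probability of guilt is about 1/3000 --
# nothing like the 1999/2000 the prosecutor implied.
print(1 / expected_matches)
```

The prosecutor's error is to read P(match | innocent) = 1/2000 as if it were P(innocent | match), which ignores the size of the pool.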

Quantifier Shift

Confusing the phrase "For all x there is some y" with "There is some (one) y such that for all x."

Example:

Everything has a cause, so there's one cause of everything.

The error is also made if you argue from "Everybody loves someone" to "There is someone whom everybody loves."
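The two quantifier orders can be checked mechanically over a toy model; the three-person "loves" relation below is invented purely for illustration.

```python
# A small "loves" relation over three people (a toy model, not a claim
# about the world): everybody loves someone, yet nobody is loved by all.
people = ["ann", "bob", "cal"]
loves = {("ann", "bob"), ("bob", "cal"), ("cal", "ann")}

# "For all x there is some y such that x loves y" -- true in this model.
everybody_loves_someone = all(
    any((x, y) in loves for y in people) for x in people
)

# "There is some y such that for all x, x loves y" -- false in this model.
someone_loved_by_all = any(
    all((x, y) in loves for x in people) for y in people
)

print(everybody_loves_someone, someone_loved_by_all)  # True False
```

Since one sentence is true and the other false in the same situation, the inference from the first to the second cannot be valid.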

Questionable Begging

See Begging the Question

Questionable Analogy

See False Analogy.

Questionable Cause

See False Cause.

Questionable Premise

If you have sufficient background information to know that a premise is questionable or unlikely to be acceptable, then you use this fallacy if you accept an argument based on that premise. This broad category of fallacies of argumentation includes appeal to authority, false dilemma, inconsistency, lying, stacking the deck, straw man, suppressed evidence, and many others.

Quibbling

We quibble when we complain about a minor point and falsely believe that this complaint somehow undermines the main point. To avoid this error, the logical reasoner will not make a mountain out of a mole hill nor take people too literally. Logic Chopping is a kind of quibbling.

Example:

I've found typographical errors in your poem, so the poem is neither inspired nor perceptive.

Quoting out of Context

If you quote someone, but select the quotation so that essential context is not available and therefore the person's views are distorted, then you've quoted "out of context." Quoting out of context in an argument creates a straw man fallacy.

Example:

Smith: I've been reading about a peculiar game in this article about vegetarianism. When we play this game, we lean out from a fourth-story window and drop down strings containing "Free food" signs on the end in order to hook unsuspecting passers-by. It's really outrageous, isn't it? Yet isn't that precisely what sports fishermen do for entertainment from their fishing boats? The article says it's time we put an end to sport fishing.

Jones: Let me quote Smith for you. He says "We...hook unsuspecting passers-by." What sort of moral monster is this man Smith?

Jones's selective quotation is fallacious because it makes Smith appear to advocate this immoral activity when the context makes it clear that he doesn't.

Rationalization

We rationalize when we inauthentically offer reasons to support our claim. We are rationalizing when we give someone a reason to justify our action even though we know this reason is not really our own reason for our action, usually because the offered reason will sound better to the audience than our actual reason.

Example:

"I bought the matzo bread from Kroger's Supermarket because it is the cheapest brand and I wanted to save money," says Alex [who knows he bought the bread from Kroger's Supermarket only because his girlfriend works there].

Red Herring

A red herring is a smelly fish that would distract even a bloodhound. It is also a digression that leads the reasoner off the track of considering only relevant information.

Example:

Will the new tax in Senate Bill 47 unfairly hurt business? I notice that the main provision of the bill is that the tax is higher for large employers (fifty or more employees) as opposed to small employers (six to forty-nine employees). To decide on the fairness of the bill, we must first determine whether employees who work for large employers have better working conditions than employees who work for small employers. I am ready to volunteer for a new committee to study this question. How do you suppose we should go about collecting the data we need?

Bringing up the issue of working conditions is the red herring.

Refutation by Caricature

See Ad Hominem.

Regression

This fallacy occurs when regression to the mean is mistaken for a sign of a causal connection. Also called the Regressive Fallacy. It is a kind of false cause fallacy.

Example:

You are investigating the average heights of groups of people living in the United States. You sample some people living in Columbus, Ohio and determine their average height. You have the numerical figure for the mean height of people living in the U.S., and you notice that members of your sample from Columbus have an average height that differs from this mean. Your second sample of the same size is from people living in Dayton, Ohio. When you find that this group's average height is closer to the U.S. mean height [as it is very likely to be due to common statistical regression to the mean], you falsely conclude that there must be something causing people living in Dayton to be more like the average U.S. resident than people living in Columbus.

There is most probably nothing causing people from Dayton to be more like the average resident of the U.S.; but rather what is happening is that averages are regressing to the mean.
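A short simulation (with made-up height figures) shows the pattern: when two equal-sized samples are drawn from the same population and the first happens to be an outlier, the second is usually closer to the population mean, with no cause involved.

```python
import random

random.seed(42)
POP_MEAN, POP_SD, N = 67.0, 4.0, 25   # hypothetical height distribution

def sample_mean():
    return sum(random.gauss(POP_MEAN, POP_SD) for _ in range(N)) / N

# Draw many pairs of samples from the SAME population. When the first
# sample's mean is unusually far from the population mean, count how
# often the second sample's mean lands closer to it.
closer = trials = 0
for _ in range(2000):
    m1, m2 = sample_mean(), sample_mean()
    if abs(m1 - POP_MEAN) > 1.5:        # the first sample is an outlier
        trials += 1
        if abs(m2 - POP_MEAN) < abs(m1 - POP_MEAN):
            closer += 1

print(closer / trials)   # well above one-half
```

The second sample is closer to the mean most of the time even though nothing about the second group causes this; that is regression to the mean.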

Reification

Considering a word to be referring to an object, when the meaning of the word can be accounted for more mundanely without assuming the object exists. Also known as the Fallacy of Misplaced Concreteness.

Example:

The 19th-century composer Tchaikovsky described the introduction to his Fifth Symphony as "a complete resignation before fate."

He is treating "fate" as if it is naming some object, when it would be less misleading, but also less poetic, to say the introduction suggests that listeners will resign themselves to accepting whatever events happen to them. The Fallacy occurs also when someone says, "I succumbed to nostalgia." Without committing the fallacy, one can make the same point by saying, "My mental state caused actions that would best be described as my reflecting an unusual desire to return to some past period of my life." Another common way the Fallacy is used is when someone says that if you understand what "Sherlock Holmes" means, then Sherlock Holmes exists in your understanding. The larger point being made in this last example is that nouns can be meaningful without them referring to an object, yet those who use the Fallacy of Reification do not understand this point.

Reversing Causation

Drawing an improper conclusion about causation due to a causal assumption that reverses cause and effect. A kind of false cause fallacy.

Example:

All the corporate officers of Miami Electronics and Power have big boats. If you're ever going to become an officer of MEP, you'd better get a bigger boat.

The false assumption here is that having a big boat helps cause you to be an officer in MEP, whereas the reverse is true. Being an officer causes you to have the high income that enables you to purchase a big boat.

Scapegoating

If you unfairly blame an unpopular person or group of people for a problem, then you are scapegoating. This is a kind of fallacy of appeal to emotions.

Example:

Augurs were official diviners of ancient Rome. During the pre-Christian period, when Christians were unpopular, an augur would make a prediction for the emperor about, say, whether a military attack would have a successful outcome. If the prediction failed to come true, the augur would not admit failure but instead would blame nearby Christians for their evil influence on his divining powers. The elimination of these Christians, the augur would claim, could restore his divining powers and help the emperor. By using this reasoning tactic, the augur was scapegoating the Christians.

Scare Tactic

If you suppose that terrorizing your opponent gives him a reason for believing that you are correct, then you are using a scare tactic and reasoning fallaciously.

Example:

David: My father owns the department store that gives your newspaper fifteen percent of all its advertising revenue, so I'm sure you won't want to publish any story of my arrest for spray painting the college.

Newspaper editor: Yes, David, I see your point. The story really isn't newsworthy.

David has given the editor a financial reason not to publish, but he has not given a relevant reason why the story is not newsworthy. David's tactics are scaring the editor, but it's the editor who uses the scare tactic fallacy, not David. David has merely used a scare tactic. This fallacy's name emphasizes the cause of the fallacy rather than the error itself. See also the related fallacy of appeal to emotions.

Scope

The scope fallacy is caused by improperly changing or misrepresenting the scope of a phrase.

Example:

Every concerned citizen who believes that someone living in the US is a terrorist should make a report to the authorities. But Shelley told me herself that she believes there are terrorists living in the US, yet she hasn't made any reports. So, she must not be a concerned citizen.

The first sentence has ambiguous scope. It was probably originally meant in this sense: Every concerned citizen who believes (of someone that this person is living in the US and is a terrorist) should make a report to the authorities. But the speaker is clearly taking the sentence in its other, less plausible sense: Every concerned citizen who believes (that there is someone or other living in the US who is a terrorist) should make a report to the authorities. Scope fallacies usually are amphibolies.

Secundum Quid

See Accident and Converse Accident, two versions of the fallacy.

Selective Attention

Improperly focusing attention on certain things and ignoring others.

Example:

Father: Justine, how was your school day today? Another C on the history test like last time?

Justine: Dad, I got an A- on my history test today. Isn't that great? Only one student got an A.

Father: I see you weren't the one with the A. And what about the math quiz?

Justine: I think I did OK, better than last time.

Father: If you really did well, you'd be sure. What I'm sure of is that today was a pretty bad day for you.

The pessimist who pays attention to all the bad news and ignores the good news thereby uses the fallacy of selective attention. The remedy for this fallacy is to pay attention to all the relevant evidence. The most common examples of selective attention are the fallacy of Suppressed Evidence and the fallacy of Confirmation Bias. See also the Sharpshooter's Fallacy.

Self-Fulfilling Prophecy

The fallacy occurs when the act of prophesying will itself produce the effect that is prophesied, but the reasoner doesn't recognize this and believes the prophecy is a significant insight.

Example:

A group of students are selected to be interviewed individually by the teacher. Each selected student is told that the teacher has predicted they will do significantly better in their future school work. Actually, though, the teacher has no special information about the students and has picked the group at random. If the students believe this prediction about themselves, then, given human psychology, it is likely that they will do better merely because of the teacher's making the prediction.

The prediction will fulfill itself, so to speak, and the students' reasoning contains the fallacy.

This fallacy can be dangerous in an atmosphere of potential war between nations when the leader of a nation predicts that their nation will go to war against their enemy. This prediction could very well precipitate an enemy attack because the enemy calculates that if war is inevitable then it is to their military advantage not to get caught by surprise.

Self-Selection

A biased generalization in which the bias is due to self-selection for membership in the sample used to make the generalization.

Example:

The radio announcer at a student radio station in New York asks listeners to call in and say whether they favor Jones or Smith for president. 80% of the callers favor Jones, so the announcer declares that Americans prefer Jones to Smith.

The problem here is that the callers selected themselves for membership in the sample, but clearly the sample is unlikely to be representative of Americans.

Sharpshooter's

The sharpshooter's fallacy gets its name from someone shooting a rifle at the side of the barn and then going over and drawing a target and bull's-eye concentrically around the bullet hole. The fallacy is caused by overemphasizing random results or making selective use of coincidence. See the Fallacy of Selective Attention.

Example:

Psychic Sarah makes twenty-six predictions about what will happen next year. When one, but only one, of the predictions comes true, she says, "Aha! I can see into the future."
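The arithmetic behind the example is simple. Assuming, purely for illustration, that each prediction has a 10 percent chance of coming true by luck alone, at least one "hit" among twenty-six tries is nearly certain:

```python
# Probability that at least one of 26 independent predictions comes
# true, if each has an (illustrative) 10% chance of success by luck.
p_hit = 0.10
p_at_least_one = 1 - (1 - p_hit) ** 26
print(round(p_at_least_one, 2))  # 0.94
```

A lucky hit or two is exactly what chance predicts, so it is no evidence of psychic power.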

Slanting

This error occurs when the issue is not treated fairly because of misrepresenting the evidence by, say, suppressing part of it, or misconstruing some of it, or simply lying. See the following related fallacies: Confirmation Bias, Lying, Misrepresentation, Questionable Premise, Quoting out of Context, Straw Man, Suppressed Evidence.

Slippery Slope

Suppose someone claims that a first step (in a chain of causes and effects, or a chain of reasoning) will probably lead to a second step that in turn will probably lead to another step and so on until a final step ends in trouble. If the likelihood of the trouble occurring is exaggerated, the slippery slope fallacy is present.

Example:

Mom: Those look like bags under your eyes. Are you getting enough sleep?

Jeff: I had a test and stayed up late studying.

Mom: You didn't take any drugs, did you?

Jeff: Just caffeine in my coffee, like I always do.

Mom: Jeff! You know what happens when people take drugs! Pretty soon the caffeine won't be strong enough. Then you will take something stronger, maybe someone's diet pill. Then, something even stronger. Eventually, you will be doing cocaine. Then you will be a crack addict! So, don't drink that coffee.

The form of a slippery slope fallacy looks like this:

A leads to B.

B leads to C.

C leads to D.

...

Z leads to HELL.

We don't want to go to HELL.

So, don't take that first step A.

The key claim in the fallacy is that taking the first step will lead to the final, unacceptable step. Arguments of this form may or may not be fallacious depending on the probabilities involved in each step. The analyst asks how likely it is that taking the first step will lead to the final step. For example, if A leads to B with a probability of 80 percent, and B leads to C with a probability of 80 percent, and C leads to D with a probability of 80 percent, is it likely that A will eventually lead to D? Not especially; the combined probability is 0.8 × 0.8 × 0.8, only about an even (51 percent) chance. The proper analysis of a slippery slope argument depends on sensitivity to such probabilistic calculations. Regarding terminology, if the chain of reasoning A, B, C, D, ..., Z is about causes, then the fallacy is called the Domino Fallacy.
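The probability calculation above can be checked directly (the 80 percent link probabilities are illustrative figures, and the links are assumed to be independent):

```python
# Chance that A leads all the way to D through three independent
# links, each holding with probability 0.8: 0.8 * 0.8 * 0.8.
p_link = 0.8
p_a_to_d = p_link ** 3
print(round(p_a_to_d, 3))  # 0.512 -- roughly a coin flip
```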

Small Sample

This is the fallacy of using too small a sample. If the sample is too small to be representative of the population, and if we have the background information to know that there is this problem with the sample size, yet we still accept the generalization based on the sample results, then we use the fallacy. This is a form of the fallacy of hasty generalization that emphasizes statistical sampling techniques.

Example:

I've eaten in restaurants twice in my life, and both times I've gotten sick. I've learned one thing from these experiences: restaurants make me sick.

How big a sample do you need to avoid the fallacy? Relying on background knowledge about a population's lack of diversity can reduce the sample size needed for the generalization. With a completely homogeneous population, a sample of one is large enough to be representative of the population; if we've seen one electron, we've seen them all. However, eating in one restaurant is not like eating in any restaurant, so far as getting sick is concerned. We cannot place a specific number on sample size below which the fallacy is produced unless we know about homogeneity of the population and the margin of error and the confidence level.

Smear Tactic

A smear tactic is an unfair characterization either of the opponent or the opponent's position or argument. Smearing the opponent causes an ad hominem fallacy. Smearing the opponent's argument causes a straw man fallacy.

Smokescreen

This fallacy occurs by offering too many details in order either to obscure the point or to cover up counter-evidence. In the latter case it would be an example of the fallacy of suppressed evidence. If you produce a smokescreen by bringing up an irrelevant issue, then you produce a red herring fallacy. Sometimes called clouding the issue.

Example:

Senator, wait before you vote on Senate Bill 88. Do you realize that Delaware passed a bill on the same subject in 1932, but it was ruled unconstitutional for these twenty reasons. Let me list them here.... Also, before you vote on SB 88 you need to know that .... And so on.

There is no recipe to follow in distinguishing smokescreens from reasonable appeals to caution and care.

Sorites

See Line-Drawing.

Special Pleading

Special pleading is a form of inconsistency in which the reasoner doesn't apply his or her principles consistently. It is the fallacy of applying a general principle to various situations but not applying it to a special situation that interests the arguer even though the general principle properly applies to that special situation, too.

Example:

Everyone has a duty to help the police do their job, no matter who the suspect is. That is why we must support investigations into corruption in the police department. No person is above the law. Of course, if the police come knocking on my door to ask about my neighbors and the robberies in our building, I know nothing. I'm not about to rat on anybody.

In our example, the principle of helping the police is applied to investigations of police officers but not to one's neighbors.

Specificity

Drawing an overly specific conclusion from the evidence. A kind of jumping to conclusions.

Example:

The trigonometry calculation came out to 5,005.6833 feet, so that's how wide the cloud is up there.

Stacking the Deck

See Suppressed Evidence and Slanting.

Stereotyping

Using stereotypes as if they are accurate generalizations for the whole group is an error in reasoning. Stereotypes are general beliefs we use to categorize people, objects, and events; but these beliefs are overstatements that shouldn't be taken literally. For example, consider the stereotype "She’s Mexican, so she’s going to be late." This conveys a mistaken impression of all Mexicans. On the other hand, even though most Mexicans are punctual, a German is more apt to be punctual than a Mexican, and this fact is said to be the "kernel of truth" in the stereotype. The danger in our using stereotypes is that speakers or listeners will not realize that even the best stereotypes are accurate only when taken probabilistically. As a consequence, the use of stereotypes can breed racism, sexism, and other forms of bigotry.

Example:

German people aren't good at dancing our sambas. She's German. So, she's not going to be any good at dancing our sambas.

This argument is deductively valid, but it's unsound because it rests on a false, stereotypical premise. The grain of truth in the stereotype is that the average German doesn't dance sambas as well as the average South American, but to overgeneralize and presume that ALL Germans are poor samba dancers compared to South Americans is a mistake called "stereotyping."

Straw Man

Your reasoning contains the straw man fallacy whenever you attribute an easily refuted position to your opponent, one that the opponent wouldn't endorse, and then proceed to attack the easily refuted position (the straw man) believing you have undermined the opponent's actual position. If the misrepresentation is on purpose, then the straw man fallacy is caused by lying.

Example (a debate before the city council):

Opponent: Because of the killing and suffering of Indians that followed Columbus's discovery of America, the City of Berkeley should declare that Columbus Day will no longer be observed in our city.

Speaker: This is ridiculous, fellow members of the city council. It's not true that everybody who ever came to America from another country somehow oppressed the Indians. I say we should continue to observe Columbus Day, and vote down this resolution that will make the City of Berkeley the laughing stock of the nation.

The speaker has twisted what his opponent said; the opponent never said, nor even indirectly suggested, that everybody who ever came to America from another country somehow oppressed the Indians. The critical thinker will respond to the fallacy by saying, "Let's get back to the original issue of whether we have a good reason to discontinue observing Columbus Day."

Style Over Substance

Unfortunately the style with which an argument is presented is sometimes taken as adding to the substance or strength of the argument.

Example:

You've just been told by the salesperson that the new Maytag is an excellent washing machine because it has a double washing cycle. If you notice that the salesperson smiled at you and was well dressed, this does not add to the quality of the salesperson's argument, but unfortunately it does for those who are influenced by style over substance, as most of us are.

Subjectivist

The subjectivist fallacy occurs when it is mistakenly supposed that a good reason to reject a claim is that truth on the matter is relative to the person or group.

Example:

Justine has just given Jake her reasons for believing that the Devil is an imaginary evil person. Jake, not wanting to accept her conclusion, responds with, "That's perhaps true for you, but it's not true for me."

Superstitious Thinking

Reasoning deserves to be called superstitious if it is based on reasons that are well known to be unacceptable, usually due to unreasonable fear of the unknown, trust in magic, or an obviously false idea of what can cause what. A belief produced by superstitious reasoning is called a superstition. The fallacy is an instance of the False Cause Fallacy.

Example:

I never walk under ladders; it's bad luck.

It may be a good idea not to walk under ladders, but a proper reason to believe this is that workers on ladders occasionally drop things, and that ladders might have dripping wet paint that could damage your clothes. An improper reason for not walking under ladders is that it is bad luck to do so.

Suppressed Evidence

Intentionally failing to use information suspected of being relevant and significant is committing the fallacy of suppressed evidence. This fallacy usually occurs when the information counts against one's own conclusion. Perhaps the arguer is not mentioning that experts have recently objected to one of his premises. The fallacy is a kind of fallacy of Selective Attention.

Example:

Buying the Cray Mac 11 computer for our company was the right thing to do. It meets our company's needs; it runs the programs we want it to run; it will be delivered quickly; and it costs much less than what we had budgeted.

This appears to be a good argument, but you'd change your assessment of the argument if you learned the speaker has intentionally suppressed the relevant evidence that the company's Cray Mac 11 was purchased from his brother-in-law at a 30 percent higher price than it could have been purchased elsewhere, and if you learned that a recent unbiased analysis of ten comparable computers placed the Cray Mac 11 near the bottom of the list.

If the relevant information is not intentionally suppressed but rather inadvertently overlooked, the fallacy of suppressed evidence also is said to occur, although the fallacy's name is misleading in this case. The fallacy is also called the Fallacy of Incomplete Evidence and Cherry-Picking the Evidence. See also Slanting.

Sweeping Generalization

See Fallacy of Accident.

Syllogistic

Syllogistic fallacies are kinds of invalid categorical syllogisms. This article's list contains the fallacy of undistributed middle and the fallacy of four terms, along with a few others, though there are a great many such formal fallacies.

Tokenism

If you interpret a merely token gesture as an adequate substitute for the real thing, you've been taken in by tokenism.

Example:

How can you call our organization racist? After all, our receptionist is African American.

If you accept this line of reasoning, you have been taken in by tokenism.

Traditional Wisdom

If you say or imply that a practice must be OK today simply because it has been the apparently wise practice in the past, then your reasoning contains the fallacy of traditional wisdom. A practice with a tradition behind it might or might not have a good justification; merely pointing out that it has been practiced in the past is not always good enough, and when it is not, the fallacy is present. Also called argumentum consensus gentium when the traditional wisdom is that of nations.

Example:

Of course we should buy IBM's computer whenever we need new computers. We have been buying IBM as far back as anyone can remember.

The "of course" is the problem. The traditional wisdom of IBM being the right buy is some reason to buy IBM next time, but it's not a good enough reason in a climate of changing products, so the "of course" indicates that the fallacy of traditional wisdom has occurred. The fallacy is essentially the same as the fallacies of appeal to the common practice, gallery, masses, mob, past practice, people, peers, and popularity.

Tu Quoque

The fallacy of tu quoque occurs in our reasoning if we conclude that someone's argument not to perform some act must be faulty because the arguer himself or herself has performed it. Similarly, when we point out that the arguer doesn't practice what he or she preaches, and then suppose that there must be an error in the preaching for only this reason, then we are reasoning fallaciously and creating a tu quoque. This is a kind of ad hominem circumstantial fallacy.

Example:

Look who's talking. You say I shouldn't become an alcoholic because it will hurt me and my family, yet you yourself are an alcoholic, so your argument can't be worth listening to.

Discovering that a speaker is a hypocrite is a reason to be suspicious of the speaker's reasoning, but it is not a sufficient reason to discount it.

Two Wrongs Make a Right

When you defend your wrong action as being right because someone previously has acted wrongly, you are using the fallacy called "two wrongs make a right." This is a special kind of ad hominem fallacy.

Example:

Oops, no paper this morning. Somebody in our apartment building probably stole my newspaper. So, that makes it OK for me to steal one from my neighbor's doormat while nobody else is out here in the hallway.

Undistributed Middle

In syllogistic logic, failing to distribute the middle term over at least one of the other terms is the fallacy of undistributed middle. Also called the fallacy of maldistributed middle.

Example:

All collies are animals.

All dogs are animals.

Therefore, all collies are dogs.

The middle term ("animals") is in the predicate of both universal affirmative premises and therefore is undistributed. This formal fallacy has the logical form: All C are A. All D are A. Therefore, all C are D.
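The invalidity of this form can be exhibited with a counterexample: any model in which C and D are disjoint subsets of A makes both premises true and the conclusion false. The small sets below are arbitrary.

```python
# The form "All C are A; all D are A; therefore all C are D" is invalid:
# here is a model where both premises hold but the conclusion fails.
A = {1, 2}
C = {1}
D = {2}

premises_hold = C <= A and D <= A   # both universal premises are true
conclusion_holds = C <= D           # the conclusion is false

print(premises_hold, conclusion_holds)  # True False
```

Because one model makes the premises true and the conclusion false, no argument of this form is deductively valid.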

Unfalsifiability

This error in explanation occurs when the explanation contains a claim that is not falsifiable, because there is no way to check on the claim. That is, there would be no way to show the claim to be false if it were false.

Example:

He lied because he's possessed by demons.

This could be the correct explanation of his lying, but there's no way to check on whether it's correct. You can check whether he's twitching and moaning, but this won't be evidence about whether a supernatural force is controlling his body. The claim that he's possessed can't be verified if it's true, and it can't be falsified if it's false. So, the claim is too odd to be relied upon for an explanation of his lying. Relying on the claim is an instance of fallacious reasoning.

Unrepresentative Generalization

If the plants on my plate are not representative of all plants, then the following generalization should not be trusted.

Example:

Each plant on my plate is edible.

So, all plants are edible.

The set of plants on my plate is called "the sample" in the technical vocabulary of statistics, and the set of all plants is called "the target population." If you are going to generalize on a sample, then you want your sample to be representative of the target population, that is, to be like it in the relevant respects. This fallacy is the same as the Fallacy of Unrepresentative Sample.

Unrepresentative Sample

If the means of collecting the sample from the population are likely to produce a sample that is unrepresentative of the population, then a generalization upon the sample data is an inference using the fallacy of unrepresentative sample. A kind of hasty generalization. When some of the statistical evidence is expected to be relevant to the results but is hidden or overlooked, the fallacy is called suppressed evidence. There are many ways to bias a sample. Knowingly selecting atypical members of the population produces a biased sample.

Example:

The two men in the matching green suits that I met at the Star Trek Convention in Las Vegas had a terrible fear of cats. I remember their saying they were from France. I've never met anyone else from France, so I suppose everyone there has a terrible fear of cats.

Most people's background information is sufficient to tell them that people at this sort of convention are unlikely to be representative, that is, are likely to be atypical members of the rest of society. Having a small sample does not by itself cause the sample to be biased. Small samples are OK if there is a corresponding large margin of error or low confidence level.

Large samples can be unrepresentative, too.

Example:

We've polled over 400,000 Southern Baptists and asked them whether the best religion in the world is Southern Baptist. We have over 99% agreement, which proves our point about which religion is best.

Getting a larger sample size does not overcome sampling bias.
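A quick back-of-the-envelope calculation shows why: the usual rough 95 percent margin of error for a sample proportion is about 1/√n, and it measures only sampling error, not selection bias.

```python
import math

n = 400_000
margin = 1 / math.sqrt(n)   # rough 95% margin of error for a proportion
print(round(margin, 4))     # 0.0016 -- tiny sampling error, but the
                            # self-selected sample is still biased
```

The sampling error is a fraction of a percentage point, yet the estimate can still be wildly off, because the bias in who got sampled never shrinks as n grows.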

Untestability

See Unfalsifiability.

Vested Interest

The vested interest fallacy occurs when a person argues that someone’s claim or recommended action is incorrect because the person is motivated by their interest in gaining something by it, with the implication that were it not for this vested interest then the person wouldn’t make the claim or recommend the action. Because this reasoning attacks the reasoner rather than the reasoning itself, it is a kind of Ad Hominem fallacy.

Example:

According to Samantha we should all vote for Anderson for Congress, but she’s a lobbyist in the pay of Anderson and will get a nice job in the capitol if he’s elected, so that convinces me to vote against Anderson.

This is fallacious reasoning by the speaker because whether the speaker should vote for Anderson ought to depend on Anderson’s qualifications, not on whether Samantha will or won’t get a nice job if he’s elected.

Weak Analogy

See False Analogy.

Willed Ignorance

I've got my mind made up, so don't confuse me with the facts. This is usually a case of the Traditional Wisdom Fallacy.

Example:

Of course she's made a mistake. We've always had meat and potatoes for dinner, and our ancestors have always had meat and potatoes for dinner, and so nobody knows what they're talking about when they start saying meat and potatoes are bad for us.

Wishful Thinking

A reasoner who suggests that a claim is true, or false, merely because he or she strongly hopes it is, is using the fallacy of wishful thinking. Wishing something is true is not a relevant reason for claiming that it is actually true.

Example:

There's got to be an error here in the history book. It says Thomas Jefferson had slaves. I don't believe it. He was our best president, and a good president would never do such a thing. That would be awful.

You-Too

This is an informal name for the Tu Quoque fallacy.

7. References and Further Reading

  • Eemeren, Frans H. van, R. F. Grootendorst, F. S. Henkemans, J. A. Blair, R. H. Johnson, E. C. W. Krabbe, C. W. Plantin, D. N. Walton, C. A. Willard, J. A. Woods, and D. F. Zarefsky, 1996. Fundamentals of Argumentation Theory: A Handbook of Historical Backgrounds and Contemporary Developments. Mahwah, New Jersey, Lawrence Erlbaum Associates, Publishers.
  • Fearnside, W. Ward and William B. Holther, 1959. Fallacy: The Counterfeit of Argument. Prentice-Hall, Inc. Englewood Cliffs, New Jersey.
  • Fischer, David Hackett, 1970. Historians' Fallacies: Toward a Logic of Historical Thought. New York, Harper & Row.
    • This book contains additional fallacies to those in this article, but they are much less common, and many have obscure names.
  • Groarke, Leo and C. Tindale, 2003. Good Reasoning Matters! 3rd edition, Toronto, Oxford University Press.
  • Hamblin, Charles L., 1970. Fallacies. London, Methuen.
  • Hansen, Hans V. and R. C. Pinto, 1995. Fallacies: Classical and Contemporary Readings. University Park, Pennsylvania State University Press.
  • Huff, Darrell, 1954. How to Lie with Statistics. New York, W. W. Norton.
  • Levi, D. S., 1994. "Begging What is at Issue in the Argument," Argumentation, 8, 265-282.
  • Schwartz, Thomas, 1981. "Logic as a Liberal Art," Teaching Philosophy 4, 231-247.
  • Walton, Douglas N., 1989. Informal Logic: A Handbook for Critical Argumentation. Cambridge, Cambridge University Press.
  • Walton, Douglas N., 1995. A Pragmatic Theory of Fallacy. Tuscaloosa, University of Alabama Press.
  • Walton, Douglas N., 1997. Appeal to Expert Opinion: Arguments from Authority. University Park, Pennsylvania State University Press.
  • Whately, Richard, 1836. Elements of Logic. New York, Jackson.
  • Woods, John and D. N. Walton, 1989. Fallacies: Selected Papers 1972-1982. Dordrecht, Holland, Foris.

Research on the fallacies of informal logic is regularly published in the following journals: Argumentation, Argumentation and Advocacy, Informal Logic, Philosophy and Rhetoric, and Teaching Philosophy.

Author Information

Bradley Dowden
Email: dowden@csus.edu
California State University, Sacramento
U. S. A.

Paraconsistent Logic

Paraconsistent Logic

A paraconsistent logic is a way to reason about inconsistent information without lapsing into absurdity. In a non-paraconsistent logic, inconsistency explodes in the sense that if a contradiction obtains, then everything (everything!) else obtains, too. Someone reasoning with a paraconsistent logic can begin with inconsistent premises—say, a moral dilemma, a Kantian antinomy, or a semantic paradox—and still reach sensible conclusions, without completely exploding into incoherence.

Paraconsistency is a thesis about logical consequence: not every contradiction entails arbitrary absurdities. Beyond that minimal claim, views and mechanics of paraconsistent logic come in a broad spectrum, from weak to strong, as follows.

On the very weak end, paraconsistent logics are taken to be safeguards to control for human fallibility. We inevitably revise our theories, have false beliefs, and make mistakes; to prevent falling into incoherence, a paraconsistent logic is required. Such modest and conservative claims say nothing about truth per se. Weak paraconsistency is still compatible with the thought that if a contradiction were true, then everything would be true, too—because, beliefs and theories notwithstanding, contradictions cannot be true.

On the very strong end of the spectrum, paraconsistent logics underwrite the claim that some contradictions really are true. This thesis—dialetheism—is that sometimes the best theory (of mathematics, or metaphysics, or even the empirical world) is contradictory. Paraconsistency is mandated because the dialetheist still maintains that not everything is true. In fact, strong paraconsistency maintains that all contradictions are false—even though some contradictions also are true. Thus, at this end of the spectrum, dialetheism is itself one of the true contradictions.

This article offers a brief discussion of some main ideas and approaches to paraconsistency. Modern logics are couched in the language of mathematics and formal symbolism. Nevertheless, this article is not a tutorial on the technical aspects of paraconsistency, but rather a synopsis of the underlying ideas. See the suggested readings for formal expositions, as well as historical material.

Table of Contents

  1. The Problem
  2. Logical Background
    1. Definitions
    2. Two Grades of Paraconsistency
    3. Requirements for a Logic to Be Paraconsistent
  3. Schools of Paraconsistent Logic
    1. Discussive Logic
    2. Preservationism
    3. Adaptive Logic
    4. Relevance
    5. Logics of Formal Inconsistency
    6. Dialetheism
  4. Applications
    1. Moral Dilemmas
    2. Law, Science, and Belief Revision
    3. Closed Theories – Truth and Sets
      1. Naïve Axioms
      2. Further Logical Restrictions
    4. Learning, Beliefs, and AI
  5. Conclusion
  6. References and Further Reading

1. The Problem

Consider an example due to Alan Weir, concerning a political leader who absolutely, fundamentally believes in the sanctity of human life, and so believes that war is always wrong. All the same, a situation arises where her country must enter into war (else people will die, which is wrong). Entering into war will inevitably mean that some people will die. Plausibly, the political leader is now embroiled in a dilemma. This is exactly when paraconsistent inference is appropriate. Imagine our leader thinking, ‘War is always wrong, but since we are going to war anyway, we may as well bomb civilians.’ Absurdist reasoning of this sort is not only bad logic, but just plain old bad.

David Hume once wrote (1740, p. 633),

I find myself involv’d in such a labyrinth, that, I must confess, I neither know how to correct my former opinions, nor how to render them consistent.

As Schotch and Jennings rightly point out, ‘it is no good telling Hume that if his inconsistent opinions were, all of them, true then every sentence would be true.’ The best we could tell Hume is that at least some of his opinions are wrong—but ‘this, so far from being news to Hume, was what occasioned much of the anguish he evidently felt’ (Schotch et al. p. 23). We want a way to keep sensible and reasonable even when—especially when—such problems arise. We need a way to keep from falling to irrational pieces when life, logic, mathematics or even philosophy leads us into paradox and conundrum. That is what paraconsistent logics are for.

2. Logical Background

a. Definitions

A logic is a set of well-formed formulae, along with an inference relation ⊢. The inference relation, also called logical consequence, may be specified syntactically or semantically, and tells us which formulae (conclusions) follow from which formulae (premises). When a sentence B follows from a bunch of sentences A0, A1, …, An, we write

A0, A1, …, An ⊢ B.

When the relation ⊢ holds, we say that the inference is valid. A set of sentences closed under valid inference is called a theory.

A key distinction behind the entire paraconsistent enterprise is that between consistency and coherence. A theory is consistent if no pairs of contradictory sentences A, ¬A are derivable, or alternatively iff no single sentence of the form A & ¬A is derivable. Coherence is a broader notion, sometimes called absolute (as opposed to simple) consistency, and more often called non-triviality. A trivial or absurd theory is one in which absolutely every sentence holds. The idea of paraconsistency is that coherence is possible even without consistency. Put another way, a paraconsistent logician can say that a theory is inconsistent without meaning that the theory is incoherent, or absurd. The former is a structural feature of the theory, worth repair or further study; the latter means the theory has gone disastrously wrong. Paraconsistency gives us a principled way to resist equating contradiction with absurdity.

Classical logic, the logic developed by Boole, Frege, Russell, and others in the late 1800s, and the one almost always taught in university courses, has an inference relation according to which

A, ¬A ⊢ B

is valid. Here the conclusion, B, could be absolutely anything at all. Thus this inference is called ex contradictione quodlibet (from a contradiction, everything follows) or explosion. Paraconsistent logicians have urged that this feature of classical inference is incorrect. While the reasons for denying the validity of explosion will vary according to one’s view of the role of logic, a basic claim is that the move from a contradiction to an arbitrary formula does not seem like reasoning. As the founders of relevant logic, Anderson and Belnap, urge in their canonical book Entailment, a ‘proof’ submitted to a mathematics journal in which the essential steps fail to provide a reason to believe the conclusion, e.g. a proof by explosion, would be rejected out of hand. Mark Colyvan (2008) illustrates the point by noting that no one has laid claim to a startlingly simple proof of the Riemann hypothesis:

Riemann’s Hypothesis: All the zeros of the zeta function have real part equal to 1/2.
Proof: Let R stand for the Russell set, the set of all sets that are not members of themselves. It is straightforward to show that this set is both a member of itself and not a member of itself. Therefore, all the zeros of Riemann’s zeta function have real part equal to 1/2.

Needless to say, the Riemann hypothesis remains an open problem at the time of writing.
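That explosion is classically valid can be verified mechanically: there is no row of the truth table in which both premises hold, so the definition of validity is satisfied vacuously. Below is a minimal brute-force sketch; the formula encoding and function names are ours, invented for illustration.

```python
from itertools import product

# Formulas: an atom is a string; compounds are tuples ('not', f), ('and', f, g), ('or', f, g).
def atoms(f):
    if isinstance(f, str):
        return {f}
    return set().union(*(atoms(a) for a in f[1:]))

def ev(f, v):                          # classical two-valued evaluation
    if isinstance(f, str):
        return v[f]
    if f[0] == 'not':
        return not ev(f[1], v)
    if f[0] == 'and':
        return ev(f[1], v) and ev(f[2], v)
    return ev(f[1], v) or ev(f[2], v)  # 'or'

def classically_valid(premises, conclusion):
    vs = sorted(set().union(set(), *(atoms(p) for p in premises), atoms(conclusion)))
    for bits in product([False, True], repeat=len(vs)):
        v = dict(zip(vs, bits))
        if all(ev(p, v) for p in premises) and not ev(conclusion, v):
            return False               # found a countermodel
    return True

# Explosion A, ¬A ⊢ B: no row makes both premises true, so it is (vacuously) valid.
print(classically_valid(['A', ('not', 'A')], 'B'))   # True
```

The same checker confirms that ordinary invalid arguments, such as A ⊢ B, are rejected; only the contradictory premises make explosion go through.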

Minimally, paraconsistent logicians claim that there are or may be situations in which paraconsistency is a viable alternative to classical logic. This is a pluralist view, by which different logics are appropriate to different areas. Just as a matter of practical value, explosion does not seem like good advice for a person who is faced with a contradiction, as the quote from Hume above makes clear. More forcefully, paraconsistent logics lay claim to being a better account of logic than the classical apparatus. This is closer to a monistic view, in which there is, essentially, one correct logic, and it is paraconsistent.

b. Two Grades of Paraconsistency

Let us have a formal definition of paraconsistency.

Definition 1. A logic is paraconsistent iff it is not the case for all sentences A, B that A, ¬A ⊢ B.

This definition simply is the denial of ex contradictione quodlibet; a logic is paraconsistent iff it does not validate explosion. The definition is neutral as to whether any inconsistency will ever arise. It only indicates that, were an inconsistency to arise, this would not necessarily lead to inferential explosion. In the next definition, things are a little different:

Definition 2. A logic is paraconsistent iff there are some sentences A, B such that ⊢ A and ⊢ ¬A, but not ⊢ B.

A logic that is paraconsistent in the sense of definition 2 automatically satisfies definition 1. But the second definition suggests that there are actually inconsistent theories. The idea is that, in order for explosion to fail, one needs to envisage circumstances in which contradictions obtain. The difference between the definitions is subtle, but it will help us distinguish between two main gradations of paraconsistency, weak and strong.

Roughly, weak paraconsistency is the cluster concept that

  • any apparent contradictions are always due to human error;
  • classical logic is preferable, and in a better world where humans did not err, we would use classical logic;
  • no true theory would ever contain an inconsistency.

Weak paraconsistent logicians see their role as akin to doctors or mechanics. Sometimes information systems develop regrettable but inevitable errors, and paraconsistent logics are tools for damage control. Weak paraconsistentists look for ways to restore consistency to the system or to make the system work as consistently as possible. Weak paraconsistentists have the same view, more or less, of contradictions as do classical logicians.

On the other side, strong paraconsistency includes ideas like

  • Some contradictions may not be errors;
  • classical logic is wrong in principle;
  • some true theories may actually be inconsistent.

A strong paraconsistentist considers relaxing the law of non-contradiction in some way, either by dropping it entirely, so that ¬(A & ¬A) is not a theorem, or by holding that the law can itself figure into contradictions, of the form

Always, not (A and not A),
and sometimes, both A and not A.

Strong paraconsistentists may be interested in inconsistent systems for their own sake, rather like a mathematician considering different non-Euclidean systems of geometry, without worry about the ‘truth’ of the systems; or a strong paraconsistentist may expect that inconsistent systems are true and accurate descriptions of the world, like a physicist considering a non-Euclidean geometry as the actual geometry of space.

It is important to keep weak paraconsistency distinct from logical pluralism, and strong paraconsistency or dialetheism (see §3f.) distinct from logical monism. For example, one can well be a weak paraconsistentist, insofar as one claims that explosion is invalid, even though there are no true contradictions, and at the same time a logical monist, holding that the One True Logic is paraconsistent. This was the position of the fathers of relevance logic, Anderson and Belnap, for instance. Similarly, one could be a dialetheist and a logical pluralist, as is the contemporary philosophical logician Jc Beall (see suggested readings).

c. Requirements for a Logic to be Paraconsistent

All approaches to paraconsistency seek inference relations that do not explode. Sometimes this is accomplished by going back to basics, developing new and powerful ideas about the meaning of logical consequence, and checking that these ideas naturally do not lead to explosion (e.g. relevance logic, §3d). More often paraconsistency is accomplished by looking at what causes explosion in classical inference, and simply removing the causes. In either case, there are some key constraints on a paraconsistent logic that we should look at up front.

Of course, the main requirement is to block the rule of explosion. This is not really a limitation, since explosion is prima facie invalid anyway. But we cannot simply remove the inference of explosion from classical logic and automatically get a paraconsistent logic. The reason for this, and the main, serious constraint on a paraconsistent logic, was pointed out by C. I. Lewis in the early twentieth century. Suppose we have both A and ¬A as premises. If we have A, then we have that either A or B, since a disjunction only requires that one of its disjuncts holds. But then, given ¬A, it seems that we have B, since if either A or B, but not A, then B. Therefore, from A and ¬A, we have deduced B. The problem is that B is completely arbitrary—an absurdity. So if it is invalid to infer everything from a contradiction, then this rule, called disjunctive syllogism,

A ∨ B, ¬A ⊢ B,

must be invalid, too.

There are two things to remark about the failure of disjunctive syllogism (DS).

First, we might say that classical logic runs into trouble when it comes to inconsistent situations. This is something like the way Newtonian physics makes bad predictions when it comes to the large-scale structure of space-time. And so, just as Newtonian physics is still basically accurate and applicable in medium-sized domains, we can say that classical logic is still accurate and appropriate in consistent domains. For working out sudoku puzzles, paying taxes, or solving murder mysteries, there is nothing wrong with classical reasoning. For exotic objects like contradictions, though, classical logic is unprepared.

Secondly, since DS is a valid classical inference, we can see clearly that a paraconsistent logic will validate fewer inferences than classical logic. (No classically invalid inferences are going to become valid by dint of inconsistent information.) That is the whole idea—that classical logic allows too much, and especially given the possibility of inconsistency, we must be more discriminating. This is sometimes expressed by saying that paraconsistent logics are ‘weaker’ than classical logic; but since paraconsistent logics are more flexible and apply to more situations, we needn’t focus too much on the slang. Classical logic is in many ways more limited than paraconsistent logic (see §4c.).

A third point, which we will take up in §3d, is that the invalidity of DS shows, essentially, that for the basic inference of modus ponens to be valid in all situations, we need a new logical connective for implication, not defined in terms of disjunction and negation. Now we turn to some weak and strong systems of paraconsistency.

3. Schools of Paraconsistent Logic

a. Discussive Logic

The first paraconsistent logic was developed by Jaśkowski, a student of Lukasiewicz, in Poland in 1948. He gave some basic criteria for a paraconsistent logic:

To find a system of sentential calculus which:
1) when applied to contradictory systems would not entail their triviality;
2) would be rich enough to enable practical inference;
3) would have intuitive justification.

To meet his own criteria, Jaśkowski’s idea is to imagine a group of people having a discussion, some of whom are disagreeing with each other. One person asserts: ‘Wealth should be distributed equally amongst all persons.’ Another person says, ‘No, it should not; everyone should just have what he earns.’ The group as a whole is now in an inconsistent information state. We face such states all the time: reading news articles, blogs, and opinion pieces, we take in contradictions (even if each article is internally consistent, which is unusual). How to reason about conflicting information like this?

Jaśkowski’s idea is to prevent the inconsistent information from co-mingling. He does so, in effect, by blocking the rule of adjunction:

A, B ⊢ A & B.

This rule says that, given two premises A and B, we can conjoin them into a single statement, (A & B). If the adjunction rule is removed, then we can have A and ¬A, without deriving a full-blown contradiction A & ¬A. The information is kept separate. On this approach, the classical rule of explosion actually can still hold, in the form

A & ¬A ⊢ B.

The aim of this approach is not to prevent explosion at the sentence level, but rather to ensure that no contradictory sentence (as opposed to inconsistent sentences) can ever arise. So while the inconsistency arising from different disagreeing parties can be made coherent sense of, a person who is internally contradictory is still reckoned to be absurd.

In 1974, Rescher and Brandom suggested a very similar approach, in terms of worlds. As Belnap has pointed out, the non-adjunctive idea has obvious applications to computer science, for example when a large amount of polling data is stored by a system.
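The non-adjunctive idea can be caricatured in a few lines of code: treat each premise as held by a separate discussant, and let a conclusion follow only if some single discussant's assertion classically entails it. This is a deliberate simplification of Jaśkowski's actual system, which is defined by translation into the modal logic S5; the encoding below is ours.

```python
from itertools import product

# Formulas: an atom is a string; compounds are ('not', f), ('and', f, g), ('or', f, g).
def atoms(f):
    if isinstance(f, str):
        return {f}
    return set().union(*(atoms(a) for a in f[1:]))

def ev(f, v):                          # classical two-valued evaluation
    if isinstance(f, str):
        return v[f]
    if f[0] == 'not':
        return not ev(f[1], v)
    if f[0] == 'and':
        return ev(f[1], v) and ev(f[2], v)
    return ev(f[1], v) or ev(f[2], v)  # 'or'

def classically_valid(premises, conclusion):
    vs = sorted(set().union(set(), *(atoms(p) for p in premises), atoms(conclusion)))
    return not any(all(ev(p, dict(zip(vs, bits))) for p in premises)
                   and not ev(conclusion, dict(zip(vs, bits)))
                   for bits in product([False, True], repeat=len(vs)))

def discussively_follows(premises, conclusion):
    # Each premise belongs to a separate discussant; a conclusion follows
    # only if some single discussant's assertion classically entails it.
    return any(classically_valid([p], conclusion) for p in premises)

print(discussively_follows(['A', ('not', 'A')], 'B'))                         # False: no explosion
print(discussively_follows(['A', ('not', 'A')], ('and', 'A', ('not', 'A'))))  # False: no adjunction
print(discussively_follows([('and', 'A', ('not', 'A'))], 'B'))                # True: A & ¬A still explodes
```

This mirrors the point above: separate inconsistent assertions stay tractable, while a single contradictory sentence remains absurd.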

b. Preservationism

Around 1978, the Canadian logicians Schotch and Jennings developed an approach to modal logic and paraconsistency that has some close affinities with the discussive approach. Their approach is now known as the preservationist school. The fundamental idea is that, given an inconsistent collection of premises, we should not try to reason about the collection of premises as a whole, but rather focus on internally consistent subsets of premises. Like discussive logic, preservationism sees an important distinction between an inconsistent data set, like

{A, ¬A},

which is considered tractable, versus an outright contradiction like

A & ¬A,

which is considered hopeless. The whole idea is summarized in a paraphrase of Gillman Payette, a major contributor to the preservationist program:

Question: How do you reason from an inconsistent set of premises?
Answer: You don't, since every formula follows in that case. You reason from consistent subsets of premises.

Preservationists begin with an already defined logic X, usually classical logic. They assert that we, as fallible humans, are simply sometimes ‘stuck with bad data’; and this being the case, some kind of repair is needed on the logic X to ensure coherence. Preservationists define the level of a set of premises to be the least number of cells into which the set must be divided for every cell to be internally consistent. They then define an inference relation, called forcing, in terms of the logic X, as follows:

A set of sentences Γ forces A iff there is at least one internally consistent subset Δ of Γ such that A is an X-valid inference from Δ.

Forcing preserves the level of Γ. If there is any consistency to preserve, forcing ensures that things do not get any more inconsistent. In particular, if a data set is inconsistent but contains no single-sentence contradictions, then the forcing relation is paraconsistent.
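For small premise sets, the level can be computed by brute force: it is the least k such that the premises can be divided into k internally satisfiable cells. The following is a sketch with classical logic as the base logic X; the encoding and function names are ours.

```python
from itertools import product

# Formulas: an atom is a string; compounds are ('not', f), ('and', f, g), ('or', f, g).
def atoms(f):
    if isinstance(f, str):
        return {f}
    return set().union(*(atoms(a) for a in f[1:]))

def ev(f, v):                          # classical two-valued evaluation
    if isinstance(f, str):
        return v[f]
    if f[0] == 'not':
        return not ev(f[1], v)
    if f[0] == 'and':
        return ev(f[1], v) and ev(f[2], v)
    return ev(f[1], v) or ev(f[2], v)  # 'or'

def satisfiable(cell):
    vs = sorted(set().union(set(), *(atoms(f) for f in cell)))
    return any(all(ev(f, dict(zip(vs, bits))) for f in cell)
               for bits in product([False, True], repeat=len(vs)))

def level(premises):
    ps = list(premises)
    for k in range(1, len(ps) + 1):
        # try every assignment of premises to k cells
        for assign in product(range(k), repeat=len(ps)):
            cells = [[p for p, c in zip(ps, assign) if c == i] for i in range(k)]
            if all(satisfiable(cell) for cell in cells):
                return k
    return float('inf')  # some premise is itself unsatisfiable: no division helps

print(level(['A', 'B']))                      # 1: already consistent
print(level(['A', ('not', 'A'), 'B']))        # 2: A and ¬A must be kept apart
print(level([('and', 'A', ('not', 'A'))]))    # inf: a single contradictory sentence
```

The last case shows why preservationism treats an outright contradiction A & ¬A as hopeless: no division into cells can make it consistent.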

Aside from paraconsistent applications, and roots in modal logic, preservationists have recently proved some deep theorems about logic more generally. Payette has shown, for example, that two logics are identical iff they assign any set of sentences the same level.

Detour: Chunk and Permeate

Closely related to the preservationist paradigm is a technique called chunk and permeate, developed by Bryson Brown and Graham Priest to explain the early differential calculus of Newton and Leibniz (see inconsistent mathematics). It is known that the early calculus involved contradictions of some kind, in particular, infinitesimal numbers that are sometimes identical to zero, and other times of a non-zero quantity. Brown and Priest show how reasoning about infinitesimals (and their related notions of derivatives) can be done coherently, by breaking up the reasoning into consistent ‘chunks,’ and defining carefully controlled ‘permeations’ between the chunks. The permeations show how enough but not too much information can pass from one chunk to another, and thus reconstruct how a correct mathematical solution can obtain from apparently inconsistent data.

c. Adaptive Logic

Taking applied examples from scientific reasoning as its starting point, the adaptive logic program considers systems in which the rules of inference themselves can change as we go along. The logics are dynamic. In dynamic logics, rules of inference change as a function of what has been derived to that point, and so some sentences which were derivable at a point in time are no longer derivable, and vice versa. The program has been developed by Diderik Batens and his school in Ghent.

The idea is that our commitments may entail a belief that we nevertheless reject. This is because, as humans, our knowledge is not closed under logical consequence and so we are not fully aware of all the consequences of our commitments. When we find ourselves confronted with a problem, there may be two kinds of dynamics at work. In external dynamics, a conclusion may be withdrawn given some new information; logics in which this is allowed are called non-monotonic. External dynamics are widely recognized and are also important to the preservationist program. In internal dynamics, the premises themselves may lead to a conclusion being withdrawn. This kind of dynamic is less recognized and is more properly within the ambit of paraconsistency. Sometimes, we do derive a consequence we later reject, without modifying our convictions.

Adaptive systems work by recognizing abnormalities, and deploying formal strategies. Both of these notions are defined specifically to the task at hand; for instance, an abnormality might be an inconsistency, or it might be an inductive inference, and a strategy might be to delete a line of a proof, or to change an inference rule. The base paraconsistent logic studied by the adaptive school is called CLuN, which is all of the positive (negation-free) fragment of classical logic, plus the law of excluded middle A ∨ ¬A.

d. Relevance

Relevant logic is not fundamentally about issues of consistency and contradiction. Instead the chief motivation of relevant logic is that, for an argument to be valid, the premises must have a meaningful connection to the conclusion. For example, classical inferences like

B ⊢ A → B,

or

¬(A → B) ⊢ A,

seem to relevance logicians to fail as decent logical inferences. The requirement that premises be relevant to the conclusion delivers a paraconsistent inference relation as a byproduct, since in ex contradictione quodlibet, the premises A and ¬A do not have anything to do with an arbitrary conclusion B. Relevant logic begins with Ackermann, and was properly developed in the work of Anderson and Belnap. Many of the founders of relevant logic, such as Robert Meyer and Richard Routley, have also been directly concerned with paraconsistency.

From our perspective, one of the most important aspects of relevant logic is that it provides an implication connective that obeys modus ponens, even in inconsistent situations. In §2c, we saw that the disjunctive syllogism is not paraconsistently valid; and so in any logic in which implication is defined by negation and disjunction, modus ponens is invalid, too. That is,

A ⊃ B := ¬A ∨ B

does not, as we saw in §2c above, define a conditional that obeys

A, A ⊃ B ⊢ B.

In the argot, we say that ‘hook is not detachable’ or ‘ponenable’. In relevant logic, implication A → B is not defined with truth-functional connectives at all, but rather is defined either axiomatically or semantically (with worlds or algebraic semantics). Going this way, one can have a very robust implication connective, in which not only modus ponens is valid,

A → B, A; therefore, B.

Other widely used inferences obtain, too. Let’s just mention a few that involve negation in ways that might seem suspect from a paraconsistent point of view. We can have contraposition

A → B ⊢ ¬B → ¬A,

which gives us modus tollens

A → B, ¬B ⊢ ¬A.

With the law of non-contradiction ¬(A & ¬A), this gives us reductio ad absurdum, in two forms,

A → (B & ¬B) ⊢ ¬A,

A → ¬A ⊢ ¬A,

and consequentia mirabilis:

¬A → A ⊢ A.

Evidently the relevant arrow restores a lot of power apparently lost in the invalidity of disjunctive syllogism.
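One concrete way to see a detachable arrow is the three-valued logic RM3, mentioned again in §3f. RM3's truth tables are standard; the brute-force encoding below is ours. Modus ponens and modus tollens come out valid while explosion still fails.

```python
from itertools import product

T, B, F = 2, 1, 0   # true, both, false; a value is designated iff it is at least B

# Formulas: atoms are strings; compounds ('not', f), ('and', f, g), ('or', f, g), ('imp', f, g).
def atoms(f):
    if isinstance(f, str):
        return {f}
    return set().union(*(atoms(a) for a in f[1:]))

def ev(f, v):
    if isinstance(f, str):
        return v[f]
    if f[0] == 'not':
        return 2 - ev(f[1], v)
    if f[0] == 'and':
        return min(ev(f[1], v), ev(f[2], v))
    if f[0] == 'or':
        return max(ev(f[1], v), ev(f[2], v))
    a, b = ev(f[1], v), ev(f[2], v)   # 'imp': the RM3 arrow
    if a > b:
        return F                      # a truer antecedent cannot imply a falser consequent
    if a == b == B:
        return B
    return T

def rm3_valid(premises, conclusion):
    vs = sorted(set().union(set(), *(atoms(p) for p in premises), atoms(conclusion)))
    for w in product([T, B, F], repeat=len(vs)):
        v = dict(zip(vs, w))
        if all(ev(p, v) >= B for p in premises) and ev(conclusion, v) < B:
            return False
    return True

print(rm3_valid([('imp', 'A', 'B'), 'A'], 'B'))                    # True: modus ponens detaches
print(rm3_valid([('imp', 'A', 'B'), ('not', 'B')], ('not', 'A')))  # True: modus tollens
print(rm3_valid(['A', ('not', 'A')], 'B'))                         # False: explosion still fails
```

The key is the first clause of the arrow: when the antecedent is ‘both’ and the consequent is plain false, A → B comes out false, so no designated premise set forces an undesignated conclusion.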

There are a great number of relevant logics differing in strength. One can do away with the laws of non-contradiction and excluded middle, giving a very weak consistent paraconsistent logic called B (for basic). Or one can add powerful negation principles as we have just seen above for inconsistent but non-trivial logics. The relevant approach was used in Meyer’s attempt to found a paraconsistent arithmetic in a logic called R# (see inconsistent mathematics). It has also been used by Brady for naïve set theory (§4c), and, more recently, Beall for truth theory. On the other hand, relevant logics validate fewer entailments than classical logic; in order for AB to be valid, we have additional requirements of relevance besides truth preservation in all possible circumstances. Because of this, it is often difficult to recapture within a relevant logic some of classical mathematical reasoning. We return to this problem in §4c below.

e. Logics of Formal Inconsistency

One of the first pioneers of paraconsistent logic was Newton C. A. da Costa in Brazil, in the 1950s. Da Costa’s interests have been largely in paraconsistent mathematics (with applications to physics), and his attitude toward paraconsistency is more open-minded than some of the others we have seen. Da Costa considers the investigation of inconsistent but not trivial theories as akin to the study of non-Euclidean geometry. He has been an advocate of paraconsistency not only for its pragmatic benefits, for example in reconstructing infinitesimal calculus, but also as an investigation of novel structure for its own sake. He gives the following methodological guidelines:

  • In these calculi, the principle of contradiction should not be generally valid;
  • From two contradictory statements it should not in general be possible to deduce any statement whatever;
  • The extension of these calculi to quantification calculi should be immediate.

Note that da Costa’s first principle is not like any we’ve seen so far, and his third is more ambitious than others. His main system is an infinite hierarchy of logics known as the C systems.

The main idea of the C systems is to track which sentences are consistent and to treat these differently than sentences that may be inconsistent. Following this method, first of all, means that the logic itself is about inconsistency. The logic can model how a person can or should reason about inconsistent information. Secondly, this gives us a principled way to make our paraconsistent logic as much like classical logic as possible: When all the sentences are marked as consistent, they can be safely reasoned about in a classical way, for example, using disjunctive syllogism.

To make this work, we begin with a base logic, called C(0). When a sentence A behaves consistently in C(0), we mark it according to this definition:

A0 := ¬(A & ¬A).

Then, a strong kind of negation can be defined:

¬*A := ¬A & A0.

The logic with these two connectives added to it, we call C(1). In C(1) then we can have inferences like

¬A ∨ B, A, A0 ⊢ B.

And in the same way that we reached C(1), we could go on and define a logic C(2), with an operator A1 = (A0)0, that means something like ‘behaves consistently in C(1)’. The C systems continue up to the first transfinite ordinal, C(ω).

More recently, a broad generalization of the C-systems has been developed by Carnielli, Marcos, and others, called logics of formal inconsistency. Da Costa’s C-systems are a subclass (albeit an important one) of the much wider family of the LFIs. The C-systems are precisely the LFIs where consistency can be expressed as a unary operator.

These logics have been used to model some actual mathematics. The axioms of Zermelo–Fraenkel set theory and some postulates about identity (=) can be added to C(1), as can axioms asserting the existence of a universal set and a Russell set. This yields an inconsistent, non-trivial set theory. Arruda and Batens obtained some early results in this set theory. Work in arithmetic, infinitesimal calculus, and model theory has also been carried out by da Costa and his students.

A driving idea of da Costa’s paraconsistency is that the law of non-contradiction ¬(A & ¬A) should not hold at the propositional level. This is, philosophically, how his approach works: ¬(A & ¬A) is not true. Aside from some weak relevant logics, this is a unique feature of the C systems (among paraconsistent logics). In other schools like the discussion and preservationist schools, non-contradiction holds not only at the level of sentences, but as a normative rule; and in the next school we consider, non-contradiction is false, but it is true as well.

f. Dialetheism

The best reason to study paraconsistency, and to use it for developing theories, would be if there were actually contradictions in the world (as opposed to in our beliefs or theories). That is, if it turns out that the best and truest description of the world includes some inconsistency, then paraconsistency is not only required, but is in some sense natural and appropriate. ‘Dialetheism’ is a neologism meaning two-way truth and is the thesis that some sentences are both true and false, at the same time and in the same way. Dialetheism is particularly motivated as a response to the liar paradox and set theoretic antinomies like Russell’s Paradox, and was pioneered by Richard Routley and Graham Priest in Australia in the 1970s. Priest continues to be the best known proponent.

A dialetheic logic is easiest to understand as a many-valued logic. This is not the only way to understand dialetheism, and the logic we are about to consider is not the only logic a dialetheist could use. Dialetheism is not a logic. But here is a simple way to introduce the concept. In addition to the truth-values true and false, sentences can also be both. This third value is a little unusual, maybe, but uncomplicated: if a sentence A is both, then A is true, and A is false, and vice versa. The most straightforward application of a ‘both’ truth-value is Priest’s logic of paradox, or LP. In LP the standard logical connectives have a natural semantics, which can be deduced following the principle that a sentence is designated iff it is at least true—i.e. iff it is true only, or both true and false. If

¬A is true when A is false,

and

¬A is false when A is true,

for example, then

¬A is both iff A is both.

So inconsistent negation is something like a fixed point. An argument is valid in LP iff it is not possible for the conclusion to be completely false but all the premises at least true. That is, suppose we have premises that are all either true or both. If the argument is valid, then the conclusion is also at least true.
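This many-valued account of LP-validity can be checked mechanically for small arguments. The sketch below is ours (LP itself fixes only the truth tables and the designated values): truth values are ordered F < B < T, negation flips the order, and conjunction and disjunction are min and max.

```python
from itertools import product

T, B, F = 1.0, 0.5, 0.0   # true, both, false, ordered F < B < T

# Formulas: atoms are strings; compounds ('not', f), ('and', f, g), ('or', f, g).
def atoms(f):
    if isinstance(f, str):
        return {f}
    return set().union(*(atoms(a) for a in f[1:]))

def ev(f, v):
    if isinstance(f, str):
        return v[f]
    if f[0] == 'not':
        return 1.0 - ev(f[1], v)           # negation flips the order; 'both' is a fixed point
    if f[0] == 'and':
        return min(ev(f[1], v), ev(f[2], v))
    return max(ev(f[1], v), ev(f[2], v))   # 'or'

def lp_valid(premises, conclusion):
    vs = sorted(set().union(set(), *(atoms(p) for p in premises), atoms(conclusion)))
    for vals in product([T, B, F], repeat=len(vs)):
        v = dict(zip(vs, vals))
        # countermodel: all premises at least true, conclusion false only
        if all(ev(p, v) >= B for p in premises) and ev(conclusion, v) < B:
            return False
    return True

print(lp_valid(['A', ('not', 'A')], 'B'))                 # False: explosion fails
print(lp_valid([('or', 'A', 'B'), ('not', 'A')], 'B'))    # False: so does disjunctive syllogism
print(lp_valid([], ('not', ('and', 'A', ('not', 'A')))))  # True: non-contradiction is a theorem
```

The countermodel in the first two cases assigns A the value ‘both’ and B the value false: every premise is then at least true while the conclusion is false only.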

In LP, every sentence of the form ¬(A & ¬A) is true, and yet some instances are also false. So the law of non-contradiction is itself a dialetheia: the schema ¬(A & ¬A) is universal but also has counterexamples. Furthermore, dialetheism says of itself that it is both true and false: the statement ‘there are true contradictions’ is true, since there are some, and false, since all contradictions are false. This may seem odd, but it is appropriate, given dialetheism’s origins in the liar paradox.

LP uses only extensional connectives (and, or, not) and so has no detachable conditional. If one adds to LP a detachable conditional, then, given its semantics, the most natural extension of LP to a logic with an implication connective is the logic called RM3. Unfortunately, this logic is not appropriate for naïve set theory or truth theory (see §4c.ii). If a fourth neutral truth value is added to LP, the logic is weakened to the system of first degree entailment FDE. In FDE, the inference

B ⊢ A ∨ ¬A

is not valid any more than explosion is. This makes some sense, since if the former is invalid by dint of not representing actual reasoning, then the latter should be invalid, too, since the premise does not ‘lead to’ the conclusion. Because of this, FDE has no theorems, of the form ⊢ A, at all.
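FDE's four values can also be sketched by brute force. Following the standard relational reading (the encoding is ours), a valuation assigns each atom a subset of {t, f}: ‘told true’, ‘told false’, both, or the new neutral value, neither.

```python
from itertools import product

t, f = 't', 'f'   # 'told true' and 'told false'
VALUES = [frozenset(), frozenset({t}), frozenset({f}), frozenset({t, f})]  # N, T, F, B

# Formulas: atoms are strings; compounds ('not', x), ('and', x, y), ('or', x, y).
def atoms(form):
    if isinstance(form, str):
        return {form}
    return set().union(*(atoms(x) for x in form[1:]))

def ev(form, v):   # returns the subset of {t, f} the formula receives
    if isinstance(form, str):
        return v[form]
    if form[0] == 'not':
        a = ev(form[1], v)
        return ({t} if f in a else set()) | ({f} if t in a else set())
    a, b = ev(form[1], v), ev(form[2], v)
    if form[0] == 'and':
        return ({t} if t in a and t in b else set()) | ({f} if f in a or f in b else set())
    return ({t} if t in a or t in b else set()) | ({f} if f in a and f in b else set())  # 'or'

def fde_valid(premises, conclusion):
    vs = sorted(set().union(set(), *(atoms(p) for p in premises), atoms(conclusion)))
    for w in product(VALUES, repeat=len(vs)):
        v = dict(zip(vs, w))
        if all(t in ev(p, v) for p in premises) and t not in ev(conclusion, v):
            return False
    return True

print(fde_valid(['B'], ('or', 'A', ('not', 'A'))))   # False: excluded middle can fail
print(fde_valid([], ('or', 'A', ('not', 'A'))))      # False: FDE has no theorems
print(fde_valid([('and', 'A', 'B')], 'A'))           # True: ordinary inferences survive
```

Assigning A the empty value gives the countermodel in the first two cases: neither A nor ¬A is told true, so A ∨ ¬A fails to be designated.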

4. Applications

A paraconsistent logic becomes useful when we are faced with inconsistencies. Motivations for and applications of paraconsistency arise from situations that are plausibly inconsistent—that is, situations in which inconsistency is not merely due to careless mistakes or confusion, but rather inconsistency that is not easily dispelled even upon careful and concentrated reflection. A student making an arithmetic error does not need a paraconsistent logic, but rather more arithmetic tutorials (although see inconsistent mathematics). On the other hand, people in the following situations may turn to a paraconsistent toolkit.

a. Moral Dilemmas

A mother gives birth to identical conjoined twins (in an example due to Helen Bohse). Doctors quickly assess that if the twins are not surgically separated, then neither will survive. However, doctors also know only one of the babies can survive surgery. The babies are completely identical in all respects. It seems morally obligatory to save one life at the expense of the other. But because there is nothing to help choose which baby, it also seems morally wrong to let one baby die rather than the other. Quite plausibly, this is an intractable moral dilemma with premises of the form we ought to save the baby on the left, and, by symmetrical reasoning about the baby on the right, also we ought not to save the baby on the left. This is not yet technically a contradiction, but unless some logical precautions are taken, it is a tragic situation on the verge of rational disaster.

A moral dilemma takes the form O(A) and O(¬A), that it is obligatory to do A and it is obligatory to do ¬A. In standard deontic logic—a logic of moral obligations—we can argue from a moral dilemma to moral explosion as follows (see Routley and Plumwood 1989). First, obligations ‘aggregate’:

O(A), O(¬A) ⊢ O(A & ¬A).

Next, note that A & ¬A is equivalent to (A & ¬A) & B. (‘Equivalent’ here can mean classically, or in the sense of C. I. Lewis’ strict implication.) Thus

O(A & ¬A) ⊢ O((A & ¬A) & B)

But O((A & ¬A) & B) ⊢ O(B). So we have shown from inconsistent obligations O(A), O(¬A), that O(B), that anything whatsoever is obligatory—in standard, non-paraconsistent systems.

A paraconsistent deontic logic can follow any of the schools we have seen already. A standard paraconsistent solution is to follow the non-adjunctive approach of Jaśkowski and the preservationists. One can block the rule of modal aggregation, so that both O(A), O(¬A) may hold without implying O(A & ¬A).

Alternatively, one could deny that A & ¬A is strictly equivalent to (A & ¬A) & B, by adopting a logic (such as a relevant logic) in which such an equivalence fails. Taking this path, we would then run into the principle of deontic consistency,

O(A) ⊢ P(A),

that if you ought to do A, then it is permissible to do A. (You are not obliged not to do A.) Accordingly, from O(A & ¬A), we get P(A & ¬A). If we had the further axiom that inconsistent actions are not permitted, then we would now have a full blown inconsistency, P(A & ¬A) and ¬P(A & ¬A). If reductio is allowed, then we would also seem to have obligations such that O(A) and ¬O(A). This move calls attention to which obligations are consistent. One could drop deontic consistency, so that A is obligatory without necessarily being permissible. Or one could reason that, however odd inconsistent actions may sound, there is no obvious reason they should be impermissible. The result would be strange but harmless statements of the form P(A & ¬A).

A principle even stronger than deontic consistency is the Kantian dictum that ‘ought implies can,’ where ‘can’ means basic possibility. Kant’s dictum converts moral dilemmas to explicit contradictions, and it seems to rule them out: since it is not possible, e.g., both to save and not to save a baby in our conjoined twins example, it cannot be obligatory to do both, appearances to the contrary. So an option for the paraconsistent deontic logician is to deny Kant’s dictum. Perhaps we have unrealizable obligations; indeed, this seems to be the intuition behind moral dilemmas. A consequence of denying Kant’s dictum is that, sometimes, we inevitably do wrong.

Most liberally, one can keep everything and accept that sometimes inconsistent action is possible. For example, if I make a contract with you to break this very contract, then I break the contract if and only if I keep it. By signing, I am eo ipso breaking and not breaking the contract. In general, though, how one could do both A and its negation is a question beyond the scope of logic.

b. Laws, Science, and Revision

Consider a country with the following laws (in an example from Priest 2006, ch. 13):

(1) No non-Caucasian people shall have the right to vote.
(2) All landowners shall have the right to vote.

As it happens, though, Phil is not Caucasian, and owns a small farm. The laws, as they stand, are inconsistent. A judge may see this as a need to impose a further law (e.g. non-Caucasians cannot own land) or revise one of the current laws. In either case, though, the law as it stands needs to be dealt with in a discriminating way. Crucially, the inferential background of the current laws does not seem to permit or entail total anarchy.

Similarly, in science we hold some body of laws as true. It is part of the scientific process that these laws can be revised, updated, or even rejected completely. The process of such progress again requires that contradictions not be met with systemic collapse. At present, it seems extremely likely that different branches of science are inconsistent with one another—or even within the same discipline, as is the case in theoretical physics with relativity and quantum mechanics. Does this situation make science absurd?

c. Closed Theories – Truth and Sets

Conceptual closure means taking a full account of whatever is under study. Suppose, for example, we are studying language. We carry out our study using language. A closed theory would have to account for our study itself; the language of the theory would have to include terms like ‘language’, ‘theory’, ‘true’, and so forth. More expansively, a theory of everything would include the theory itself. Perhaps the simplest way to grasp the nature of a closed theory is through a remark of Wittgenstein in the preface to his Tractatus: ‘In order to draw a limit to thought, one would have to find both sides of the limit thinkable.’ Priest has argued that the problematic of closure can be seen in the philosophies of Kant and Hegel, as well as in earlier Greek and Medieval thought, and continues on in postmodernist philosophies. As was discovered in the 20th century, closed formal theories are highly liable to be inconsistent, because they are extremely conducive to self-reference and diagonalization (see logical paradoxes).

For logicians, the most important of the closed theories, susceptible to self-reference, are of truth and sets. Producing closed theories of truth and sets using paraconsistency is, at least to start with, straightforward. We will look at two paradigm cases, followed by some detail on how they can be pursued.

i. Naïve Axioms

In modern logic we present formal, mathematical descriptions of how sentences are true and false, e.g. (A & B) is true iff A is true and B is true. This itself is a rational statement, presumably governed by some logic and so itself amenable to formal study. To reason about it logically, we would need to study the truth predicate, ‘x is true.’ An analysis of the concept of truth that is almost too-obviously correct is the schema

T(‘A’) iff A.

It seems so obvious—until (even when?) a sentence like

This sentence of the IEP is false,

a liar paradox which leads to a contradiction, falls out the other side. A paraconsistent logic can be used for a theory of truth in which the truth schema is maintained, but where either the derivation of the paradox is blocked (by dropping the law of excluded middle) or else the contradiction is not explosive.
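Under the LP semantics described earlier, the liar’s status can be computed directly: the truth schema forces the liar’s value to equal the value of its own negation, and only ‘both’ satisfies that constraint. A minimal check (Python, purely illustrative):

```python
# The truth schema makes the liar L satisfy v(L) = v(¬L).
# Under the LP tables, check which values are fixed points of negation.
def neg(a):
    return {'t': 'f', 'b': 'b', 'f': 't'}[a]

fixed_points = [a for a in 'tbf' if neg(a) == a]
print(fixed_points)   # ['b']: the liar comes out both true and false
```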

In modern set theory, similarly, we understand mathematical objects as being built out of sets, where each set is itself built out of pre-given sets. The resulting picture is the iterative hierarchy of sets. The problem is that the iterative hierarchy itself is a mathematically definite object, but cannot itself reside on the hierarchy. A closed theory of sets will include objects like this, beginning from an analysis of the concept of set that is almost too-obviously correct: the naïve comprehension schema,

x is a member of {y: A(y)} iff A(x).

A way to understand what naïve comprehension means is to take it as the claim: any collection of objects is a set, which is itself an object. Naïve set theory can be studied, and has been, with paraconsistent logics; see inconsistent mathematics. Contradictions like the existence of a Russell set {y: y is not a member of y} arise but are simply theorems: natural parts of the theory; they do not explode the theory.
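By way of analogy only (this is Python function application, not set theory, and the reading of ‘membership’ as application is a deliberately loose stand-in), the Russell predicate of naïve comprehension can be transcribed as a function that asks whether its argument applies to itself; classically the query has no stable answer, which here surfaces as non-termination:

```python
# Model "x is a member of y" loosely as y(x); then the Russell
# collection {y: y is not a member of y} becomes the predicate below.
R = lambda x: not x(x)

try:
    result = R(R)          # is R a member of itself?
except RecursionError:
    result = 'undecided'   # the classical paradox surfaces as non-termination
print(result)
```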

ii. Further Logical Restrictions

For both naïve truth theory and naïve set theory, there is an additional and extremely important restriction on the logic. A logic for these schemas cannot validate contraction,

If (if A then (if A then B)), then (if A then B).

This restriction is due to Curry’s paradox, which is a stronger form of the liar paradox. A Curry sentence says

If this sentence is true, then everything is true.

If the Curry sentence, call it C, is put into the truth-schema, then everything follows by the principle of contraction:

1) T(‘C’) iff (if T(‘C’) then everything). [truth schema]
2) If T(‘C’) then (if T(‘C’) then everything). [from 1]
3) If T(‘C’) then everything. [from 2 by contraction]
4) T(‘C’) [modus ponens on 1, 3]
5) Everything. [modus ponens on 3, 4]

Since not everything is true, if the T schema is correct then contraction is invalid. For set theory, analogously, the Curry set is

C = {x: If x is a member of x, then everything is true},

and a similar argument establishes triviality.

As was discovered later by Dunn, Meyer and Routley while studying naïve set theory in relevant logic, the sentence

(A & (A → B)) → B

is a form of contraction too, and so must similarly not be allowed. (Let A be a Curry sentence and B be absurdity.) Calling this sentence (schema) invalid is different from blocking modus ponens, which is an inference validated by a rule. The above, meanwhile, is just that—a sentence—and the question is whether or not all its instances are true. If naïve truth and set theories are coherent, instances of this sentence are not always true, even when modus ponens is valid.
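To see concretely how a non-classical conditional can invalidate contraction, here is a check in three-valued Łukasiewicz logic (chosen only because its tables are simple; it is not one of the paraconsistent logics discussed above, and relevant conditionals are defined differently). Both contraction and the sentence form (A & (A → B)) → B come out invalid:

```python
from itertools import product

# Łukasiewicz three-valued logic over {0, 0.5, 1}; only 1 is designated.
def imp(a, b):
    return min(1, 1 - a + b)

def conj(a, b):
    return min(a, b)

def is_theorem(schema):
    return all(schema(a, b) == 1 for a, b in product((0, 0.5, 1), repeat=2))

contraction = lambda a, b: imp(imp(a, imp(a, b)), imp(a, b))
pseudo_mp = lambda a, b: imp(conj(a, imp(a, b)), b)
identity = lambda a, b: imp(a, a)

print(is_theorem(contraction), is_theorem(pseudo_mp), is_theorem(identity))
# False False True: both contraction forms fail at a = 0.5, b = 0
```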

The logic LP does not satisfy contraction and so a dialetheic truth or set theory can be embedded in it. Some basic contradictions, like the liar paradox and Russell’s paradox, do obtain, as do a few core operations. Because LP has no conditional, though, one does not get very far. Most other paraconsistent logics cannot handle naïve set theory and naïve truth theory as stated here. A hard problem in (strong) paraconsistency, then, is how to formulate the ‘iff’ in our naïve schemata, and in general how to formulate a suitable conditional. The most promising candidates to date have been relevant logics, though as we have seen there are strict limitations.

d. Learning, Beliefs, and AI

Some work has been done to apply paraconsistency to modeling cognition. The main idea here is that the limitations on machine reasoning as (apparently) dictated by Gödel’s incompleteness theorems no longer hold. What this has to do with cognition per se is a matter of some debate, and so most applications of paraconsistency to epistemology are still rather speculative. See Berto 2009 for a recent introduction to the area.

Tanaka has shown how a paraconsistent reasoning machine revises its beliefs differently than suggested by the more orthodox but highly idealized Alchourrón-Gärdenfors-Makinson theory. That latter prevailing theory of belief revision has it that inconsistent sets of beliefs are impossible. Paraconsistent reasoning machines, meanwhile, are situated reasoners, in sets of beliefs (say, acquired simply via education) that can occasionally be inconsistent. Consistency is just one of the criteria of epistemic adequacy among others—simplicity, unity, explanatory power, etc. If this is right, the notion of recursive learning might be extended, to shed new light on knowledge acquisition, conflict resolution, and pattern recognition. If the mind is able to reason around contradiction without absurdity, then paraconsistent machines may be better able to model the mind.

Paraconsistent logics have been applied by computer scientists in software architecture (though this goes beyond the expertise of the present author). That paraconsistency could have further applications to the theory of computation was explored by Jack Copeland and Richard Sylvan. Copeland has independently argued that there are effective procedures that go beyond the capacity of Turing machines. Sylvan (formerly Routley) further postulated the possibility of dialethic machines, programs capable of computing their own decision functions. In principle, this is a possibility. The non-computability of decision functions, and the unsolvability of the halting problem, are both proved by reductio ad absurdum: if a universal decision procedure were to exist, it would have some contradictions as outputs. Classically, this has been interpreted to mean that there is no such procedure. But, Sylvan suggests, there is more in heaven and earth than is dreamt of in classical theories of computation.

5. Conclusion

Paraconsistency may be minimally construed as the doctrine that not everything is true, even if some contradictions are. Most paraconsistent logicians subscribe to views on the milder end of the spectrum; most paraconsistent logicians are actually much more conservative than a slur like Quine’s ‘deviant logician’ might suggest. On the other hand, taking paraconsistency seriously means on some level taking inconsistency seriously, something that a classically minded person will not do. It has therefore been thought that, insofar as true inconsistency is an unwelcome thought—mad, bad, and dangerous to know—paraconsistency might be some kind of gateway to darker doctrines. After all, once one has come to rational grips with the idea that inconsistent data may still make sense, what, really, stands in the way of inconsistent data being true? This has been called the slippery slope from weak to strong paraconsistency. Note that the slippery slope, while proposed as an attractive thought by those more inclined to strong paraconsistency, could seem to go even further, away from paraconsistency completely and toward the insane idea of trivialism: that everything really is true. That is, contradictions obtain, but explosion is also still valid. Why not?

No one, paraconsistentist or otherwise, is a trivialist. Nor is paraconsistency an invitation to trivialism, even if it is a temptation to dialetheism. By analogy, when Hume pointed out that we cannot be certain that the sun will rise tomorrow, no one became seriously concerned about the possibility. But people did begin to wonder about the necessity of the ‘laws of nature’, and no one now can sit as comfortably as before Hume awoke us from our dogmatic slumber. So too with paraconsistent logic. In one sense, paraconsistent logics can do much more than classical logics. But in studying paraconsistency, especially strong paraconsistency closer to the dialetheic end of the spectrum, we see that there are many things logic cannot do. Logic alone cannot tell us what is true or false. Simply writing down the syntactic marking ‘⊢ A’ does nothing to show us that A cannot be false, even if A is a theorem. There is no absolute safeguard. Defending consistency, or denying the absurdity of trivialism, is ultimately not the job of logic alone. Affirming coherence and denying absurdity is an act, a job for human beings.

6. References and Further Reading

It’s a little dated, but the ‘bible’ of paraconsistency is still the first big collection on the topic:

  • Priest, G., Routley, R. & Norman, J. eds. (1989). Paraconsistent Logic: Essays on the Inconsistent. Philosophia Verlag.

This covers most of the known systems, including discussive and adaptive logic, with original papers by the founders. It also has extensive histories of paraconsistent logic and philosophy, and a paper by the Routleys on moral dilemmas. For more recent work, see also

  • Batens, D., Mortensen, C., Priest, G., & van Bendegem, J.-P. eds. (2000). Frontiers of Paraconsistent Logic. Kluwer.
  • Berto, F., Mares, E., Paoli, F., and Tanaka, K. eds. (forthcoming). The Fourth World Congress on Paraconsistency.

A roundabout philosophical introduction to non-classical logics, including paraconsistency, is in

  • Beall, JC and Restall, Greg (2006). Logical Pluralism. Oxford University Press.

Philosophical introductions to strong paraconsistency:

  • Priest, Graham (2006). In Contradiction: A Study of the Transconsistent. Oxford University Press. Second edition.
  • Priest, Graham (2006). Doubt Truth to be a Liar. Oxford University Press.
  • Berto, Francesco (2007). How to Sell a Contradiction. Studies in Logic vol. 6. College Publications.

More philosophical debate about strong paraconsistency is in the excellent collection

  • Priest, G., Beall, JC and Armour-Garb, B. eds. (2004). The Law of Non-Contradiction. Oxford University Press.

For the technical how-to of paraconsistent logics:

  • Beall, JC and van Fraassen, Bas (2003). Possibilities and Paradox: An Introduction to Modal and Many-Valued Logics. Oxford University Press.
  • Gabbay, Dov M. & Guenthner, F. eds. (2002). Handbook of Philosophical Logic. Second edition, vol. 6, Kluwer.
  • Priest, Graham (2008). An Introduction to Non-Classical Logic. Cambridge University Press. Second edition.

For a recent introduction to preservationism, see

  • Schotch, P., Brown, B. and Jennings, R. eds. (2009). On Preserving: Essays on Preservationism and Paraconsistent Logic. University of Toronto Press.
  • Brown, Bryson and Priest, Graham (2004). “Chunk and Permeate I: The Infinitesimal Calculus.” Journal of Philosophical Logic 33, pp. 379–88.

Logics of formal inconsistency:

  • W. A. Carnielli and J. Marcos. A taxonomy of C-systems. In Paraconsistency: The Logical Way to the Inconsistent, Lecture Notes in Pure and Applied Mathematics, Vol. 228, pp. 1–94, 2002.
  • W. A. Carnielli, M. E. Coniglio and J. Marcos.  Logics of Formal Inconsistency. In Handbook of Philosophical Logic, vol. 14, pp. 15–107. Eds.: D. Gabbay; F. Guenthner. Springer, 2007.
  • da Costa, Newton C. A. (1974). “On the Theory of Inconsistent Formal Systems.” Notre Dame Journal of Formal Logic 15, pp. 497–510.
  • da Costa, Newton C. A. (2000). Paraconsistent Mathematics. In Batens et al. (2000), pp. 165–180.
  • da Costa, Newton C. A., Krause, Décio & Bueno, Otávio (2007). “Paraconsistent Logics and Paraconsistency.” In Jacquette, D. ed. Philosophy of Logic (Handbook of the Philosophy of Science), North-Holland, pp. 791–912.

Relevant logics:

  • Anderson, A. R. and Belnap, N. D., Jr. (1975). Entailment: The Logic of Relevance and Necessity. Princeton University Press, vol. I.
  • Mares, E. D. (2004). Relevant Logic: A Philosophical Interpretation. Cambridge University Press.

The implications of Gödel’s theorems:

  • Berto, Francesco (2009). There’s Something About Gödel. Wiley-Blackwell.

Belief revision:

  • Tanaka, Koji (2005). “The AGM Theory and Inconsistent Belief Change.” Logique et Analyse 189–92, pp. 113–50.

Artificial Intelligence:

  • Copeland, B. J. and Sylvan, R. (1999). “Beyond the Universal Turing Machine.” Australasian Journal of Philosophy 77, pp. 46–66.
  • Sylvan, Richard (2000). Sociative Logics and their Applications. Priest, G. and Hyde, D. eds. Ashgate.

Moral dilemmas:

  • Bohse, Helen (2005). “A Paraconsistent Solution to the Problem of Moral Dilemmas.” South African Journal of Philosophy 24, pp. 77–86.
  • Routley, R. and Plumwood, V. (1989). “Moral Dilemmas and the Logic of Deontic Notions.” In Priest et al. 1989, 653–690.
  • Weber, Zach (2007). “On Paraconsistent Ethics.” South African Journal of Philosophy 26, pp. 239–244.

Other works cited:

  • Colyvan, Mark (2008). “Who’s Afraid of Inconsistent Mathematics?” Protosociology 25, pp. 24–35. Reprinted in G. Preyer and G. Peter eds. Philosophy of Mathematics: Set Theory, Measuring Theories and Nominalism, Frankfurt: Verlag, 2008, pp. 28–39.
  • Hume, David (1740). A Treatise of Human Nature, ed. L. A. Selby-Bigge. Second edition 1978. Oxford: Clarendon Press.

Author Information

Zach Weber
Email: zweber@unimelb.edu.au
University of Melbourne, Australia
Email: z.weber@usyd.edu.au
University of Sydney, Australia

Liar Paradox


The Liar Paradox is an argument that arrives at a contradiction by reasoning about a Liar Sentence. The Classical Liar Sentence is the self-referential sentence, “This sentence is false,” which leads to the same difficulties as the sentence, “I am lying.”

Experts in the field of philosophical logic have never agreed on the way out of the trouble despite 2,300 years of attention. Here is the trouble—a sketch of the Liar Argument that reveals the contradiction:

Let L be the Classical Liar Sentence. If L is true, then L is false. But we can also establish the converse, as follows. Assume L is false. Because the Liar Sentence is just the sentence that ‘says’ L is false, the Liar Sentence is therefore true, so L is true. We have now shown that L is true if, and only if, it is false. Since L must be one or the other, it is both.

That contradictory result apparently throws us into the lion’s den of semantic incoherence. The incoherence is due to the fact that, according to the rules of classical logic, anything follows from a contradiction, even "1 + 1 = 3." This article explores the details and implications of the principal ways out of the Paradox, ways of restoring semantic coherence.

Most people, when first encountering the Liar Paradox, react in one of two ways. One reaction is to not take the Paradox seriously and say they won't reason any more about it. The second and more popular reaction is to say the Liar Sentence must be meaningless. Both of these reactions are ways out of the Paradox. That is, they stop the argument of the Paradox. However, the first reaction provides no useful diagnosis of the problem that was caused in the first place. The second is not an adequate solution if it can answer the question, “Why is the Liar Sentence meaningless?” only with the ad hoc remark, “Otherwise we get a paradox.” An adequate solution should offer a more systematic treatment. For example, the self-referential English sentence, “This sentence is not in Italian,” is very similar to the Liar Sentence. Is it meaningless, too? Apparently not. So, what feature of the Liar Sentence makes it meaningless while “This sentence is not in Italian,” is not meaningless? The questions continue, and an adequate solution should address them systematically.

Table of Contents

  1. History of the Paradox
    1. Strengthened Liar
    2. Why the Paradox is a Serious Problem
    3. Tarski’s Undefinability Theorem
  2. Overview of Ways Out of the Paradox
    1. Five Ways Out
    2. Sentences, Statements, and Propositions
    3. An Ideal Solution to the Paradox
    4. Should Classical Logic be Revised?
  3. The Main Ways Out
    1. Russell’s Type Theory
    2. Tarski’s Hierarchy of Meta-Languages
    3. Kripke’s Hierarchy of Interpretations
    4. Barwise and Etchemendy
    5. Paraconsistency
  4. Conclusion
  5. References and Further Reading

1. History of the Paradox

Languages are expected to contain contradictions. The sentence, “Snow is white, and snow is not white,” is just one of the many false sentences in the English language. But languages are not expected to contain paradoxes. A paradox is an apparently convincing argument leading to the conclusion that one of the language’s contradictory sentences is true. Why is that a problem? Well, let L be the Liar sentence, and let Q be a sentence we already know cannot be true, say "1 + 1 = 3". Then we can reason this way:

1. L and not-L from the Liar Paradox
2. L from 1
3. L or Q from 2
4. not-L from 1
5. Q from 3 and 4

The consequence is outrageous. So, an appropriate reaction to any paradox is to look for some unacceptable assumption made in the apparently convincing argument or else to look for a faulty step in the reasoning. Only very reluctantly would one want to learn to live with the contradiction being true, or ignore the contradiction altogether. By the way, paradoxes are commonly called "antinomies," although some authors prefer to reserve the word "antinomies" for the paradoxes that are more difficult to resolve.
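The derivation above can also be confirmed semantically: in classical two-valued logic, an argument with an unsatisfiable premise is vacuously valid, since no row of the truth table makes the premise true. A brute-force sketch (Python, for illustration only):

```python
from itertools import product

# Classically, "L and not-L, therefore Q" is valid because no valuation
# makes the premise true; the all() below ranges over an empty set of rows.
rows = product((True, False), repeat=2)
explosion_valid = all(q for l, q in rows if l and (not l))
print(explosion_valid)   # True, vacuously: a contradiction entails anything
```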

Zeno's Paradoxes were discovered in the 5th century B.C.E., and the Liar Paradox was discovered in the middle of the 4th century B.C.E., both in ancient Greece. The most ancient attribution of the Liar is to Eubulides of Miletus who included it among a list of seven puzzles. He said, “A man says that he is lying. Is what he says true or false?” Eubulides’ commentary on his puzzle has not been found. An ancient gravestone on the Greek Island of Kos was reported by Athenaeus to contain this poem about the difficulty of solving the Paradox:

O Stranger: Philetas of Kos am I,

‘Twas the Liar who made me die,

And the bad nights caused thereby.

Theophrastus, Aristotle’s successor, wrote three papyrus rolls about the Liar Paradox, and the Stoic philosopher Chrysippus wrote six, but their contents are lost in the sands of time. Despite various comments on how to solve the Paradox, no Greek suggested that Greek itself was inconsistent; it was the reasoning within Greek that was considered to be inconsistent.

In the Late Medieval period in Europe, the French philosopher Jean Buridan put the Liar Paradox to devious use with the following proof of the existence of God. It uses the pair of sentences:

God exists.

None of the sentences in this pair is true.

The only consistent way to assign truth values, that is, to have these two sentences be either true or false, requires making “God exists” be true. So, in this way, Buridan has “proved” that God does exist.
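Buridan's reasoning can be checked by brute force over the four classical assignments (a sketch in Python; the encoding of what the second sentence ‘says’ is the obvious one):

```python
from itertools import product

# S1 = "God exists"; S2 = "None of the sentences in this pair is true."
# An assignment is consistent iff S2's value matches what S2 asserts.
consistent = []
for s1, s2 in product((True, False), repeat=2):
    s2_asserts = (not s1) and (not s2)
    if s2 == s2_asserts:
        consistent.append((s1, s2))
print(consistent)   # [(True, False)]: only "God exists" = true survives
```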

There are many other versions of the Paradox. Some liar paradoxes begin with a chain of sentences, no one of which is self-referential, although the chain as a whole is self-referential or circular:

The following sentence is true.

The following sentence is true.

The following sentence is true.

The first sentence in this list is false.
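That the four-sentence chain is genuinely paradoxical can be verified by enumerating all sixteen classical assignments (a Python sketch, with each sentence encoded by what it says about the next):

```python
from itertools import product

# s1, s2, s3 each say "the following sentence is true";
# s4 says "the first sentence in this list is false."
consistent = [
    (s1, s2, s3, s4)
    for s1, s2, s3, s4 in product((True, False), repeat=4)
    if s1 == s2 and s2 == s3 and s3 == s4 and s4 == (not s1)
]
print(consistent)   # []: no classical assignment satisfies the chain
```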

There are also Contingent Liars which may or may not lead to a paradox depending on what happens in the world beyond the sentence. For example:

It’s raining and this sentence is false.

Paradoxicality now depends on the weather. If it’s sunny, then the sentence is simply false, but if it’s raining, then we have the beginning of a paradox.

a. Strengthened Liar

The Strengthened Liar Paradox begins with the Strengthened Liar Sentence

This sentence is not true.

This version is called “Strengthened” because some promising solutions to the Classical Liar Paradox beginning with (L) fail when faced with the Strengthened Liar. So, finding one’s way out of the Strengthened Liar Paradox is the acid test of a successful solution.

Here is an example of the failure just mentioned. Consider the Strengthened Liar in the context of trying to solve the Liar Paradox by declaring that the Liar Sentence L cannot be used to make a claim. It is neither true nor false. That will stop the argument of the Classical Liar Paradox involving L. But suppose this attempted solution is unsystematic and implies nothing about our various semantic principles and so implies nothing about the Strengthened Liar Sentence. If so, we could use that Strengthened Liar Sentence to create a new paradox by asking for its truth value. If it were to be true it would not be true. But if it were not true, then it would therefore be true, and so we have arrived at a contradiction. That is why we want any solution which says that the Classical Liar Sentence L has no truth value to be systematic enough that it can be applied to the Strengthened Liar Sentence and show that it, too, has no truth value. That way, we do not solve the Classical Liar only to be ensnared by the Strengthened Liar.

b. Why the Paradox is a Serious Problem

To put the Liar Paradox in perspective, it is essential to appreciate why such an apparently trivial problem is a deep problem. Solving the Liar Paradox is part of the larger project of understanding truth. Understanding truth involves finding a theory of truth or a definition of truth or a proper analysis of the concept of truth; many researchers do not carefully distinguish these projects from each other.

Aristotle offered what most philosophers consider to be a correct proposal. Stripped of his overtones suggesting a correspondence theory of truth, Aristotle proposed (in Metaphysics 1011 b26) what is now called a precursor to Alfred Tarski's Convention T:

(T) A sentence is true if, and only if, what it says is so.

Here we need to take some care with the use-mention distinction. If pairs of quotation marks serve to name or mention a sentence, then the above is requiring that the sentence “It is snowing” be true just in case it is snowing. Similarly, if the sentence about snow were named not with quotation marks but with the numeral 88 inside a pair of parentheses, then (88) would be true just in case it is snowing. What could be less controversial about the nature of truth? Unfortunately, this is neither obviously correct nor trivial; and the resolution of the difficulty is still an open problem in philosophical logic. Why is that? The brief answer is that (T) can be used to produce the Liar Paradox. The longer answer refers to Tarski’s Undefinability Theorem of 1936.

c. Tarski’s Undefinability Theorem

tarskiThis article began with a mere sketch of the Liar Argument using sentence (L). To appreciate the central role of (T) in the argument, we need to examine more than just a sketch of the argument. Alfred Tarski proposed a more formal characterization of (T), which is called schema T or Convention T:

(T) X is true if, and only if, p,

where “p” is a variable for a grammatical sentence and “X” is a name for that sentence. Tarski was the first person to claim that any theory of truth that could not entail all sentences of this schema would fail to be an adequate theory of truth. Here is what Tarski is requiring. If we want to build a theory of truth for English, and we want to state the theory using English, then the theory must entail the T-sentence:

“Snow is white” is true if, and only if, snow is white.

If we want instead to build a theory of truth for German and use English to state the theory, then the theory should, among other things, at least entail the T-sentence:

“Der Schnee ist weiss” is true in German if, and only if, snow is white.

A great many philosophers believe Tarski is correct when he claims his Convention T is a necessary condition on any successful theory of truth for any language. However, do we want all the T-sentences to be entailed and thus come out true? Probably not the T-sentence for the Liar Sentence. That T-sentence has the logical form: T`s´ if and only if s.  Here T is the truth predicate, and s is the Liar Sentence, namely ~T`s´. Substituting the latter for s on the right of the biconditional yields the contradiction: T`s´ if and only if ~T`s´. That is the argument of the Liar Paradox, very briefly. Tarski wanted to find a way out.

Tarski added precision to the discussion of the Liar by focusing not on a natural language but on a classical, interpreted, formal language capable of expressing arithmetic. Here the difficulties produced by the Liar Argument became much clearer; and, very surprisingly, he was able to prove that Convention T plus the assumption that the language contains its own concept of truth do lead to semantic incoherence.

The proof requires the following assumptions in addition to Convention T. Here we quote from (Tarski 1944):

I. We have implicitly assumed that the language in which the antinomy is constructed contains, in addition to its expressions, also the names of these expressions, as well as semantic terms such as the term "true" referring to sentences of this language; we have also assumed that all sentences which determine the adequate usage of this term can be asserted in the language. A language with these properties will be called "semantically closed."

II. We have assumed that in this language the ordinary laws of logic hold.

Tarski pointed out that the crucial, unacceptable assumption of the formal version of the Liar Argument is that the language is semantically closed. For there to be a grammatical and meaningful Liar Sentence in that language, there must be a definable notion of “is true” which holds for the true sentences and fails to hold for the other sentences. If there were such a global truth predicate, then the predicate “is a false sentence” would also be definable; and [here is where we need the power of elementary number theory] a Liar Sentence would exist, namely a complex sentence ∃x(Qx & ~Tx), where Q and T are predicates which are satisfied by names of sentences. More specifically, T is the one-place, global truth predicate satisfied by all the names [that is, numerals for the Gödel numbers] of the true sentences, and Q is a one-place predicate that is satisfied only by the name of ∃x(Qx & ~Tx). But if so, then one can eventually deduce a contradiction. This deduction of Tarski’s is a formal analog of the informal argument of the Liar Paradox. The contradictory result tells us that the argument began with a false assumption. According to Tarski, the error that causes the contradiction is the assumption that the global truth predicate can be well-defined. Therefore, Tarski has proved that truth is not definable within a classical formal language—thus the name “Undefinability Theorem.” Tarski’s Theorem establishes that classically interpreted languages capable of expressing arithmetic cannot contain a global truth predicate. So his theorem implies that classical formal languages with the power to express arithmetic cannot be semantically closed.

There is no special difficulty in giving a careful definition of truth for a classical formal language, provided we do it outside the language; and Tarski himself was the first person to do this. In 1933 he created the first formal semantics for quantified predicate logic. Here are two imperfect examples of how he defines truth. First, the simple sentence 'Fa' is true if, and only if, a is a member of the set of objects that are F. Notice the crucial fact that the English truth predicate "is true" occurring in the definition of truth for the formal language does not also occur in the formal language. The formal language being examined, that is, being given a theory of truth, is what Tarski calls the "object language." For a second example of defining truth, Tarski says the universally quantified sentence '∀xFx' is true if, and only if, all the objects are members of the set of objects that are F. To repeat, a little more precisely but still imperfectly, Tarski's theory implies that, if we have a simple, formal sentence `Fa´ in our formal language, in which `a´ is the name of some object in the domain of discourse (that is, what we can talk about) and `F´ is a predicate designating a property that perhaps some of those objects have, then `Fa´ is true in the object language if, and only if, a is a member of the set of all things having property F. The more complex sentence `∀xFx´ in our language is true just in case every object is a member of the set of all things having property F. These two definitions are still imprecise because the appeal to the concept of property should be eliminated, and the definitions should appeal to the notion of formulas being satisfied by sequences of objects. However, what we have here are two examples of partially defining truth for the object language, say language 0, but doing it from outside language 0, in a meta-language, say language 1, that contains set theory and that might or might not contain language 0 itself.
Tarski was able to show that in language 1 we satisfy Convention T for the object language 0, because the equivalences

`Fa´ is true in language 0 if, and only if, Fa

`∀xFx´ is true in language 0 if, and only if, ∀xFx

are both deducible in language 1, as are the other T-sentences.
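The flavor of these definitions can be conveyed by a small sketch in which Python plays the role of the meta-language. The function `is_true`, the one-predicate object language, and the sample domain are illustrative assumptions, not Tarski's own formalism:

```python
# A Tarski-style truth definition for a tiny object language, stated from
# outside that language. The object language has one constant 'a', one
# predicate 'F', and sentences of just two shapes: 'Fa' and '∀xFx'.
def is_true(sentence, domain, F_extension, referent_of_a):
    """Truth for the object language, defined in the meta-language."""
    if sentence == 'Fa':
        return referent_of_a in F_extension           # a is among the Fs
    if sentence == '∀xFx':
        return all(x in F_extension for x in domain)  # every object is an F
    raise ValueError("not a sentence of the object language")

# Example: a domain of three objects, F holds of 1 and 2, and 'a' names 1.
domain = {1, 2, 3}
Fs = {1, 2}
print(is_true('Fa', domain, Fs, 1))    # True:  1 is an F
print(is_true('∀xFx', domain, Fs, 1)) # False: 3 is not an F
```

Note that the truth predicate here is the meta-language function `is_true`; nothing in the two-sentence object language expresses it, which is exactly the separation Tarski requires.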

Despite Tarski's having this success with defining truth for an object language in its meta-language, Tarski's Undefinability Theorem establishes that there is apparently no hope of defining truth within the object language itself. Tarski then took on the project of discovering how close he could come to having a well-defined truth predicate within a classical formal language without actually having one. That project, his hierarchy of meta-languages, is also his key idea for solving the Liar Paradox. It will be discussed in a moment.

2. Overview of Ways Out of the Paradox

a. Five Ways Out

We should avoid having to solve the Liar Paradox merely by declaring that our logic obeys the principle "Avoid paradoxes." That gives us no guidance about how to avoid them. Since the Liar Paradox depends crucially upon our rules of making inferences and on the key semantic concepts of truth, reference, and negation, one might reasonably suppose that one of these rules or concepts needs revision. No one wants to solve the Paradox by being heavy-handed and jettisoning more than necessary.

Where should we make the changes? If we adopt the metaphor of a paradox as being an argument which starts from the home of seemingly true assumptions and which travels down the garden path of seemingly valid steps into the den of a contradiction, then a solution to the Liar Paradox has to find something wrong with the home, find something wrong with the garden path, or find a way to live within the den. Less metaphorically, the main ways out of the Paradox are the following:

  1. The Liar Sentence is ungrammatical and so has no truth value (yet the argument of the Liar Paradox depends on it having a truth value).
  2. The Liar Sentence is grammatical but meaningless and so has no truth value.
  3. The Liar Sentence is grammatical and meaningful but still it has no truth value; it falls into the “truth gap.”
  4. The Liar Sentence is grammatical, meaningful and has a truth value, but one other step in the argument of the Liar Paradox is faulty.
  5. The argument of the Liar Paradox is acceptable, and we need to learn how to live with the Liar Sentence being both true and false.

Two philosophers might take the same way out, but for different reasons.

There are many suggestions for how to deal with the Liar Paradox, but most are never developed to the point of giving a formal, detailed theory that can speak of its own syntax and semantics with precision. Some give philosophical arguments for why this or that conceptual reform is plausible as a way out of paradox, but then don’t show that their ideas can be carried through in a rigorous way. Other attempts at solutions will take the formal route and then require changes in standard formalisms so that a formal analog of the Liar Paradox’s argument fails, but then the attempted solution offers no philosophical argument to back up these formal changes. A decent theory of truth showing the way out of the Liar Paradox requires both a coherent formalism (or at least a systematic theory of some sort) and a philosophical justification backing it up. The point of the philosophical justification is an unveiling of some hitherto unnoticed or unaccepted rule of language for all sentences of some category which has been violated by the argument of the Paradox.

The leading solutions to the Liar Paradox, that is, the influential proposed solutions, all have a common approach, the “systematic approach.” The developers of these solutions agree that the Liar Paradox represents a serious challenge to our understanding the concepts, rules, and logic of natural language; and they agree that we must go back and systematically reform or clarify some of our original beliefs, and provide a motivation for doing so other than that doing so blocks the Paradox.

This need to have a systematic approach has been seriously challenged by Ludwig Wittgenstein (in 1938 in a discussion group with Alan Turing on the foundations of mathematics). He says one should try to overcome “the superstitious fear and dread of mathematicians in the face of a contradiction.” The proper way to respond to any paradox, he says, is by an ad hoc reaction and not by any systematic treatment designed to cure both it and any future ills. Symptomatic relief is sufficient. It may appear legitimate, at first, to admit that the Liar Sentence is meaningful and also that it is true or false, but the Liar Paradox shows that one should retract this admission and either just not use the Liar Sentence in any arguments, or say it is not really a sentence, or at least say it is not one that is either true or false. Wittgenstein is not particularly concerned with which choice is made. And, whichever choice is made, it need not be backed up by any theory that shows how to systematically incorporate the choice. He treats the whole situation cavalierly and unsystematically. After all, he says, the language can’t really be incoherent because we have been successfully using it all along, so why all this “fear and dread”? Most logicians want systematic removal of the Paradox, but Wittgenstein is content to say that we may need to live with this paradox and to agree never to utter the Liar Sentence, especially if it seems that removal of the contradiction could have worse consequences.

P. F. Strawson has promoted the performative theory of truth as a way out of the Liar Paradox. Strawson has argued that the proper way out of the Liar Paradox is to carefully re-examine how the term “truth” is really used by speakers. He says such an investigation will reveal that the Liar Sentence is meaningful but fails to express a proposition. To explore this response more deeply, notice that Strawson’s proposed solution depends on the distinction between a proposition and the declarative sentence used to express that proposition. The next section explores what a proposition is, but let's agree for now that a sentence, when uttered, either expresses a true proposition, expresses a false proposition, or fails to express any proposition. According to Strawson, when we say some proposition is true, we are not making a statement about the proposition. We are not ascribing a property to the proposition such as the property of correspondence to the facts, or coherence, or usefulness. Rather, when we call a proposition “true,” we are only approving it, or praising it, or admitting it, or condoning it. We are performing an action of that sort. Similarly, when we say to our friend, “I promise to pay you fifty dollars,” we are not ascribing some property to the proposition, “I pay you fifty dollars.” Rather, we are performing the act of promising the $50. For Strawson, when speakers utter the Liar Sentence, they are attempting to praise a proposition that is not there, as if they were saying “Ditto” when no one has spoken. The person who utters the Liar Sentence is making a pointless utterance. According to this performative theory, the Liar Sentence is grammatical, but it is not being used to express a proposition and so is not something from which a contradiction can be derived.

b. Sentences, Statements, and Propositions

The Liar Paradox can be expressed in terms of sentences, statements, or propositions. We appropriately speak of the sentence, “This sentence is false,” and of the statement that this statement is false, and of the proposition that this proposition is false. Sentences are linguistic expressions whereas statements and propositions are not. When speaking about sentences, we are nearly always speaking about sentence types, not tokens. A token is the sound waves or the ink marks; these are specific collections of molecules. Philosophers do not agree on what a sentence is, but they disagree more about what a statement is, and they disagree even more about what a proposition is.

Despite Quine's famous complaint that there are no propositions because there can be no precise criteria for deciding whether two different sentences are being used to express identical propositions, there are some good reasons why researchers who work on the Liar Paradox should focus on propositions rather than either sentences or statements, but those reasons will not be explored here. [For a discussion, see (Barwise and Etchemendy 1987).] The present article will continue to speak primarily of sentences rather than propositions, though only for the purpose of simplicity.

c. An Ideal Solution to the Liar Paradox

We expect that any seriously proposed solution to the Liar Paradox will offer a better diagnosis of the problem than merely, “It stops the Liar Paradox.” A solution which says, “Quit using language” also will stop the Liar Paradox, but the Liar Paradox can be stopped by making more conservative changes than this. Hopefully any proposal to refine our semantic principles will be conservative for another reason: We want to minimize the damage; we want to minimize the amount and drastic nature of the changes because, all other things being equal, simple and intuitive semantic principles are to be preferred to complicated and less intuitive semantic principles. The same goes for revision of a concept or revision of a logic.

Ideally, we would like for a proposed solution to the Liar Paradox to provide a solution to all the versions of the Liar Paradox, such as the Strengthened Liar Paradox, the version that led to Buridan’s proof of God’s existence, and the contingent versions of the Liar Paradoxes. The solution should solve the paradox both for natural languages and formal languages, or provide a good explanation of why the paradox can be treated properly only in a formal language. The contingent versions of the Liar Paradox are going to be especially troublesome because if the production of the paradox doesn't depend only on something intrinsic to the sentence but also depends on what circumstances occur in the world, then there needs to be a detailed description of when those circumstances are troublesome and when they are not, and why.

It would be reasonable to expect a solution to tell us about the self-referential Truth-teller sentence:

This sentence is true.

It would also be reasonable to expect a solution to tell us how important self-reference is to the Liar Paradox. In the late 20th century, Stephen Yablo produced a semantic paradox that, he claims, shows that neither self-reference nor circularity is an essential feature of all the Liar paradoxes. In his paradox, there apparently is no way to coherently assign a truth value to any of the sentences in the countably infinite sequence of sentences of the form, “None of the subsequent sentences are true.” Imagine an unending line of people who say:

1. Everybody after me is lying.

2. Everybody after me is lying.

3. Everybody after me is lying.

...

Ask yourself whether the first person's sentence in the sequence is true or false. Notice that no sentence overtly refers to itself. To produce the paradox it is crucial that the line of speakers be infinite. There is controversy in the literature about whether the paradox actually contains a hidden appeal to self-reference or circularity. See (Beall 2001) for more discussion.
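One way to see why the infinitude of the line matters is to brute-force a finite truncation. In this illustrative Python sketch (our own, not Yablo's), a finite line of n such speakers turns out to have exactly one consistent assignment of truth values, with only the last speaker, whose claim is vacuously true, telling the truth:

```python
from itertools import product

def consistent_assignments(n):
    """All assignments of truth values to n speakers, each of whom says
    'everybody after me is lying', that make every claim come out right."""
    solutions = []
    for vals in product([True, False], repeat=n):
        # Speaker i's claim holds exactly when all later claims are false.
        ok = all(vals[i] == all(not vals[j] for j in range(i + 1, n))
                 for i in range(n))
        if ok:
            solutions.append(vals)
    return solutions

# For five speakers there is exactly one consistent assignment: the last
# speaker (vacuously) tells the truth and everyone before speaks falsely.
print(consistent_assignments(5))
```

With no last speaker there is no vacuously true claim to anchor such an assignment, which is an informal way of seeing why the infinite sequence resists any coherent assignment.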

An important goal for the best solution, or solutions, to the Liar Paradox is to offer us a deeper understanding of how our semantic concepts and principles worked to produce the Paradox in the first place, especially if a solution to the Paradox requires changing or at least clarifying them. We want to understand the concepts of truth, reference, and negation that are involved in the Paradox. In addition to these, there are the subsidiary principles and related notions of denial, definability, naming, meaning, predicate, property, presupposition, antecedent, and operating on prior sentences to form newer meaningful ones rather than merely newer grammatical ones. We would like to know what limits there are on all these notions and mechanisms, and how one impacts another.

What are the important differences among the candidates for bearers of truth? The leading candidates are sentences, propositions, statements, claims, and utterances. Is one primary, while the others are secondary or derivative? And we would like to know a great deal more about truth, but also about falsehood and the related notions of fact, situation, and state of affairs. We want to better understand what a language is and what the relationship is between an interpreted formal language and a natural language, relative to different purposes. Finally, it would be instructive to learn how the Liar Paradoxes are related to all the other paradoxes. That may be a lot to ask, but then our civilization does have considerable time before the Sun expands and vaporizes our little planet.

d. Should Classical Logic be Revised?

An important question regarding the Liar Paradox is: What is the relationship between a solution to the Paradox for (interpreted) formal languages and a solution to the Paradox for natural languages? There is significant disagreement on this issue. Is appeal to a formal language a turn away from the original problem, and so just changing the subject? Can one say we are still on the subject when employing a formal language because a natural language contains implicitly within it some formal language structure? Or should we be in the business of building an ideal language to replace natural language for the purpose of philosophical study?

Do we always reason informally in a semantically closed language, namely ordinary English? Or is it not clear what logic there is in English, and perhaps we should conclude from the Liar Paradox that the logic of English cannot be standard logic but must be one that restricts the explosion that occurs due to our permitting the deduction of anything whatsoever from a contradiction? Should we say English really has truth gaps or perhaps occasional truth gluts (sentences that are both true and false)?

Or instead can a formal language be defended on the ground that natural language is inconsistent and the formal language is showing the best that can be done rigorously? Can sense even be made of the claim that a natural language is inconsistent, for is not consistency a property only of languages with a rigorous structure, namely formal languages and not natural languages? Should we say people can reason inconsistently in natural language without declaring the natural language itself to be inconsistent? This article raises, but will not resolve, these questions, although some are easier to answer than others.

Many of the most important ways out of the Liar Paradox recommend revising classical formal logic. Classical logic is the formal logic known to introductory logic students as "predicate logic" in which, among other things, (i) all sentences of the formal language have exactly one of two possible truth values (TRUE, FALSE), (ii) the rules of inference allow one to deduce any sentence from an inconsistent set of assumptions, (iii) all predicates are totally defined on the range of the variables, and (iv) the formal semantics is the one invented by Tarski that provided the first precise definition of truth for a formal language in its metalanguage. A few philosophers of logic argue against any revision of classical logic by saying it is the incumbent formalism that should be accepted unless an alternative is required (probably it is believed to be incumbent because of its remarkable success in expressing most of modern mathematical inference). Still, most other philosophers argue that classical logic is not the incumbent which must remain in office unless an opponent can dislodge it. Instead, the office has always been vacant (for the purpose of examining natural language and its paradoxes).

Some philosophers object to revising classical logic if the purpose in doing so is merely to find a way out of the Paradox. They say that philosophers shouldn’t build their theories by attending to the queer cases. There are more pressing problems in the philosophy of logic and language than finding a solution to the Paradox, so any treatment of it should wait until these problems have a solution. From the future resulting theory which solves those problems, one could hope to deduce a solution to the Liar Paradox. However, for those who believe the Paradox is not a minor problem but is one deserving of immediate attention, there can be no waiting around until the other problems of language are solved. Perhaps the investigation of the Liar Paradox will even affect the solutions to those other problems.

3. The Main Ways Out

There have been many systematic proposals for ways out of the Liar Paradox. Below is a representative sample of five of the main ways out.

a. Russell’s Type Theory

Bertrand Russell said natural language is incoherent, but its underlying sensible part is an ideal formal language (such as the applied predicate logic of Principia Mathematica). He agreed with Henri Poincaré that the source of the Liar trouble is its use of self-reference. Russell wanted to rule out self-referential sentences as being ungrammatical or not well-formed in his ideal language, and in this way solve the Liar Paradox.

In 1908 in his article “Mathematical Logic as Based on the Theory of Types” that is reprinted in (Russell 1956, p. 79), Russell solves the Liar with his ramified theory of types. This is a formal language involving an infinite hierarchy of, among other things, orders of propositions:

If we now revert to the contradictions, we see at once that some of them are solved by the theory of types. Whenever ‘all propositions’ are mentioned, we must substitute ‘all propositions of order n’, where it is indifferent what value we give to n, but it is essential that n should have some value. Thus when a man says ‘I am lying’, we must interpret him as meaning: ‘There is a proposition of order n, which I affirm, and which is false’. This is a proposition of order n+1; hence the man is not affirming any propositions of order n; hence his statement is false, and yet its falsehood does not imply, as that of ‘I am lying’ appeared to do, that he is making a true statement. This solves the liar.

Russell’s implication is that the informal Liar Sentence is meaningless because it has no appropriate translation into his formal language since an attempted translation violates his type theory. This theory is one of his formalizations of the Vicious-Circle Principle: Whatever involves all of a collection must not be one of the collection. Russell believed that violations of this principle are the root of all the logical paradoxes.

His solution to the Liar Paradox has the drawback that it places so many subscript restrictions on what can refer to what. It is unfortunate that the Russell hierarchy requires even the apparently harmless self-referential sentences “This sentence is in English” and "This sentence is not in Italian" to be syntactically ill-formed. The type theory also rules out saying that legitimate terms must have a unique type, or that properties have the property of belonging to exactly one category in the hierarchy of types, which, if we step outside the theory of types, seems to be true about the theory of types. Bothered by this, Tarski took a different approach to the Liar Paradox.

b. Tarski’s Hierarchy of Meta-Languages

Reflection on the Liar Paradox suggests that either informal English (or any other natural language) is not semantically closed or, if it is semantically closed as it appears to be, then it is inconsistent—assuming for the moment that it does make sense to apply the term "inconsistent" to a natural language with a vague structure. Because of the vagueness of natural language, Tarski quit trying to find the paradox-free structure within natural languages and concentrated on developing formal languages that did not allow the deduction of a contradiction, but which diverge from natural language "as little as possible." Tarski emphasized that we should not be investigating a language-unspecific concept of truth, but only truth for a specific formal language. Many other philosophers of logic have not drawn Tarski’s pessimistic conclusion (about not being able to solve the Liar Paradox for natural language). W. V. O. Quine, in particular, argued that informal English can be considered to implicitly contain the hierarchy of Tarski’s metalanguages. This hierarchy is the tool Tarski used to solve the Liar Paradox for formal languages, although he gave no other justification for distinguishing a language from its metalanguage.

One virtue of Tarski's way out of the Paradox is that it does permit the concept of truth to be applied to sentences that involve the concept of truth, provided we apply level subscripts to the concept of truth and follow the semantic rule that any subscript inside, say, a pair of quotation marks is smaller than the subscript outside; any violation of this rule produces a meaningless, ungrammatical formal sentence, but not a false one. The language of level 1 is the meta-language of the object language in level 0. The (semi-formal) sentence "I0 am not true0" violates the subscript rule, as does "I1 am not true1" and so on up the numbered hierarchy. `I0´ is the name of the sentence "I0 am not true0," which is the obvious candidate for being the Strengthened Liar Sentence in level 0, the lowest level. The rule for subscripts stops the formation of either the Classical Liar sentence or the Strengthened Liar Sentence anywhere in the hierarchy. The subscript rule permits forming the Liar-like sentence “I0 am not true1.” That sentence is the closest the Tarski hierarchy can come to having a Liar Sentence, but it is not really the intended Liar Sentence because of the equivocation with the concept of truth, and it is simply false and leads to no paradox.
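The subscript rule can be sketched as a simple syntactic check. The nested-tuple representation below is an invented mini-syntax for illustration only; it is not Tarski's notation:

```python
def well_formed(sentence):
    """Return the set of levels used if the subscript rule is obeyed,
    else None. Bare strings model level-0 object-language sentences."""
    if isinstance(sentence, str):
        return set()
    _tag, level, inner = sentence
    inner_levels = well_formed(inner)
    if inner_levels is None or any(level <= k for k in inner_levels):
        return None                   # a subscript inside is not smaller
    return inner_levels | {level}

# "'Snow is white' is true0" asserted to be true1: obeys the rule.
print(well_formed(('true', 1, ('true', 0, 'snow is white'))))  # {0, 1}
# true0 applied to a sentence already containing true0: ruled out.
print(well_formed(('true', 0, ('true', 0, 'snow is white'))))  # None
```

Notice that the direct self-application in "I0 am not true0" cannot even be written as a finite nesting in this representation; that is, in miniature, how the subscript rule blocks the formation of the Liar Sentence.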

Russell's solution calls “This sentence is in English” ill-formed, but Tarski's solution does not, so that is a virtue of Tarski's way out. Tarski's clever treatment of the Liar Paradox unfortunately has drawbacks: English has a single word “true,” but Tarski is replacing this with an infinite sequence of truth-like predicates, each of which is satisfied by the truths only of the language below it. Intuitively, a more global truth predicate should be expressible in the language it applies to, so Tarski’s theory cannot make formal sense of remarks such as “The Liar Sentence implies it itself is false” although informally this is a true remark. One hopes to be able to talk truly about one’s own semantic theory. Despite these restrictions and despite the unintuitive and awkward hierarchy, Quine defends Tarski's way out of the Liar Paradox as follows. Like Tarski, he prefers to speak of the Antinomy instead of the Paradox.

Revision of a conceptual scheme is not unprecedented. It happens in a small way with each advance in science, and it happens in a big way with the big advances, such as the Copernican revolution and the shift from Newtonian mechanics to Einstein's theory of relativity. We can hope in time even to get used to the biggest such changes and to find the new schemes natural. There was a time when the doctrine that the earth revolves around the sun was called the Copernican paradox, even by the men who accepted it. And perhaps a time will come when truth locutions without implicit subscripts, or like safeguards, will really sound as nonsensical as the antinomies show them to be. (Quine 1976)

 Tarski adds to the defense by stressing that:

The languages (either the formalized languages or—what is more frequently the case—the portions of everyday language) which are used in scientific discourse do not have to be semantically closed. (Tarski, 1944)

(Kripke 1975) criticized Tarski’s way out for its inability to handle contingent versions of the Liar Paradox because Tarski cannot describe the contingency. That is, Tarski's solution does not provide a way to specify the circumstances in which a sentence leads to a paradox and the other circumstances in which that same sentence is paradox-free.

Putnam criticized Tarski's way out for another reason:

The paradoxical aspect of Tarski’s theory, indeed of any hierarchical theory, is that one has to stand outside the whole hierarchy even to formulate the statement that the hierarchy exists. But what is this “outside place”—“informal language”—supposed to be? It cannot be “ordinary language,” because ordinary language, according to Tarski, is semantically closed and hence inconsistent. But neither can it be a regimented language, for no regimented language can make semantic generalizations about itself or about languages on a higher level than itself. (Putnam 1990, 13)

Within the formal languages, we cannot say, “Every language has true sentences,” even though outside the hierarchy this is clearly a true remark about the hierarchy.

c. Kripke’s Hierarchy of Interpretations

Saul Kripke was the first person to emphasize that the reasoning of ordinary speakers often can produce a Liar Paradox. Statement (1) below can do so. Quoting from (Kripke 1975), "Consider the ordinary statement, made by Jones:

(1) Most (i.e., a majority) of Nixon's assertions about Watergate are false.

Clearly, nothing is intrinsically wrong with (1), nor is it ill-formed. Ordinarily the truth value of (1) will be ascertainable through an enumeration of Nixon's Watergate-related assertions, and an assessment of each for truth or falsity. Suppose, however, that Nixon's assertions about Watergate are evenly balanced between the true and the false, except for one problematic case,

(2) Everything Jones says about Watergate is true.

Suppose, in addition, that (1) is Jones's sole assertion about Watergate.... Then it requires little expertise to show that (1) and (2) are both paradoxical: they are true if and only if they are false.

The example of (1) points up an important lesson: it would be fruitless to look for an intrinsic criterion that will enable us to sieve out—as meaningless, or ill-formed—those sentences which lead to paradox." In that last sentence, Kripke attacks the solutions of Russell and Tarski. The additional lesson to learn from Kripke's example of the Contingent Liar involving Nixon's assertions about Watergate is that if a solution to the Liar Paradox is going to say that certain assertions such as this one fail to have a truth value in some circumstances but not in all circumstances, then the solution should tell us what those circumstances are, other than saying the circumstances are those that lead to a paradox.

Kripke’s way out requires a less radical revision in our semantic principles than does the Russell solution or the Tarski-Quine solution. Kripke retains the intuition that there is a semantically coherent and meaningful Liar Sentence, but argues that it is neither true nor false and so falls into a “truth value gap.” Tarski's Undefinability Theorem does not apply to languages having sentences that are neither true nor false.

Kripke trades Russell's and Tarski's infinite complexity of languages for infinite semantic complexity of a single formal language. He rejects Tarski's infinite hierarchy of meta-languages in favor of one formal language having an infinite hierarchy of partial interpretations. Consider a formal language containing a predicate T for truth (that is, for truth-in-an-interpretation, although Kripke allows the interpretation to change throughout the hierarchy). In the base level of the hierarchy, this predicate T is given an interpretation in which it is true of all sentences that do not actually contain the symbol ‘T’. The predicate T is the formal language’s only basic partially-interpreted predicate. Each step up Kripke’s semantic hierarchy is a partial interpretation of the language, and in these interpretations all the basic predicates except one must have their interpretations already fixed in the base level from which the first step up the hierarchy is taken. This one exceptional predicate T is intended to be the truth predicate for the previous lower level.

For example, at the lowest level in the hierarchy we have the (formal equivalent of the) true sentence 7 + 5 = 12. Strictly speaking it is not grammatical in English to say 7 + 5 = 12 is true. More properly we should add quotation marks and say ‘7 + 5 = 12’ is true. In Kripke’s formal language, ‘7 + 5 = 12’ is true at the base level of the hierarchy. Meanwhile, the sentence that says it is true, namely ‘T(‘7+5=12’)’, is not true at that level, although it is true at the next higher level. Unfortunately at this new level, the even more complex sentence ‘T(‘T(‘7+5=12’)’)’ is still not yet true. It will become true at the next higher level. And so goes the hierarchy of interpretations as it attributes truth to more and more sentences involving the concept of truth itself. The extension of T, that is, the class of names of sentences that satisfy T, grows but never contracts as we move up the hierarchy, and it grows by calling more true sentences true. Similarly the anti-extension of T grows but never contracts as more false sentences involving T are correctly said to be false.

Kripke says T eventually becomes a truth predicate for its own level when the interpretation-building reaches the unique lowest fixed point at a countably infinite height in the hierarchy. At a fixed point, no new sentences are declared true or false at the next level, but the language satisfies Tarski’s Convention T, so for this reason many philosophers are sympathetic to Kripke’s claim that T is a truth predicate at that point. At this fixed point, the Liar Sentence still is neither true nor false, and so falls into the truth gap, just as Kripke set out to show. In this way, the Liar Paradox is solved, the formal semantics is coherent, and many of our intuitions about semantics are preserved. Regarding our intuition that is expressed in Convention T, a virtue of Kripke's theory is that, if ‘p’ abbreviates the name of the sentence X, it follows that Tp is true (or false) just in case X is true (or false).
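The monotone growth of the extension and anti-extension up to a fixed point can be illustrated with a small computational sketch. The toy sentence language, the sentence names, and the restriction to a finite set of sentences are simplifying assumptions made here for illustration; Kripke's construction runs over a full formal language, uses the strong Kleene evaluation scheme (adopted below), and continues into the transfinite.

```python
# A toy model of Kripke's least-fixed-point construction, using strong
# Kleene evaluation. Sentences are named, and 'T' applies the partial
# truth predicate to a name, so the Liar is simply a name that refers
# to itself.

SENTENCES = {
    "sum":  ("fact",),                # stands in for '7 + 5 = 12'
    "t1":   ("T", "sum"),             # T('7 + 5 = 12')
    "t2":   ("T", "t1"),              # T(T('7 + 5 = 12'))
    "liar": ("not", ("T", "liar")),   # L: 'L is not true'
}

def value(s, ext, anti):
    """Evaluate a sentence to True, False, or None (truth-value gap)."""
    if s[0] == "fact":
        return True
    if s[0] == "T":
        name = s[1]
        if name in ext:
            return True
        if name in anti:
            return False
        return None                   # T is only partially interpreted
    if s[0] == "not":
        v = value(s[1], ext, anti)
        return None if v is None else not v

def least_fixed_point(sentences):
    ext, anti = set(), set()          # extension / anti-extension of T
    while True:
        new_ext = {n for n, s in sentences.items()
                   if value(s, ext, anti) is True}
        new_anti = {n for n, s in sentences.items()
                    if value(s, ext, anti) is False}
        if (new_ext, new_anti) == (ext, anti):
            return ext, anti          # no new sentences declared true/false
        ext, anti = new_ext, new_anti

ext, anti = least_fixed_point(SENTENCES)
print(sorted(ext))                        # ['sum', 't1', 't2']
print("liar" in ext or "liar" in anti)    # False: the Liar stays in the gap
```

Tracing the loop reproduces the hierarchy described above: 'sum' is declared true at the first step, T('sum') at the second, T(T('sum')) at the third, after which nothing new is declared; the Liar never enters the extension or the anti-extension.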

However, there are difficulties with Kripke's way out. The treatment of the Classical Liar stumbles on the Strengthened Liar and reveals why that paradox deserves its name.  For a discussion of why, see (Kirkham 1992, pp. 293-4).

Some critics of Kripke's theory say that in the fixed-point the Liar Sentence does not actually contain a real, total truth predicate but rather only a clever restriction on the truth predicate, and so Kripke’s Liar Sentence is not really the Liar Sentence after all; therefore we do not have here a solution to the Liar Paradox. Other philosophers would say this is not a fair criticism of Kripke's theory since Tarski's Convention T, or some other intuitive feature of our concept of truth, must be restricted in some way if we are going to have a formal treatment of truth. What can more easily be agreed upon by the critics is that Kripke's candidate for the Liar sentence falls into the truth gap in Kripke's theory at all levels of his hierarchy, so it is not true in his theory. [We are making this judgment that it is not true from within the meta-language in which sentences are properly said to be true or else not true.] However, in the object language of the theory, one cannot truthfully say this; one cannot say the Liar Sentence is not true since the candidate expression for that, namely ~Ts, is not true, but rather falls into the truth gap.

Robert Martin and Peter Woodruff arrived at the same way out as Kripke, though a few months earlier and in less depth.

d. Barwise and Etchemendy

Another way out says the Liar Sentence is meaningful and is true or else false, but one step of the argument in the Liar Paradox is incorrect (such as the inference from the Liar Sentence’s being false to its being true). Arthur Prior, following the informal suggestions of Jean Buridan and C. S. Peirce, takes this way out and concludes that the Liar Sentence is simply false. So do Jon Barwise and John Etchemendy, but they go on to present a detailed, formal treatment of the Paradox that depends crucially upon using propositions rather than sentences, although the details of their treatment will not be sketched here. Their treatment says the Liar Sentence is simply false on one interpretation but simply true on another interpretation, and that the argument of the Paradox improperly exploits this ambiguity. The key ambiguity lies in conflating the Liar Proposition's negating itself with its denying itself. Similarly, in ordinary language we are not careful to distinguish asserting that a sentence is false from denying that it is true.

Three positive features of their solution are that (i) it is able to solve the Strengthened Liar, (ii) its propositions are always true or false, but never both, and (iii) it shows the way out of paradox both for natural language and interpreted formal language. Yet there is a price to pay. No proposition in their system can be about the whole world, and this restriction is there for no independent reason but only because otherwise we would get a paradox.

e. Paraconsistency

A more radical way out of the Paradox is to argue that the Liar Sentence is both true and false. This solution, a version of dialethism, embraces the contradiction, then tries to limit the damage that is ordinarily a consequence of that embrace. This way out changes the classical rules of semantics and allows, among other things, the Liar Sentence to be both true and false, and it changes the syntactic rules of classical logic, revising modus ponens so that it is no longer a theorem that everything follows from a contradiction: (p & ¬p) ⊢ q.

This way out uses a paraconsistent logic. That solution, which was initially promoted mostly by Graham Priest, will not be developed in this article, but it succeeds in avoiding semantic incoherence while offering a formal, detailed treatment of the Paradox. A principal virtue of this treatment is that, unlike with Barwise and Etchemendy's treatment, a sentence can be about the whole world. A principal drawback, though, is that it does not seem to solve the Strengthened Liar Paradox, and it does violence to our intuition that sentences cannot be both true and false in the same sense in the same situation. See the last paragraph of "Paradoxes of Self-Reference" for more discussion of using paraconsistency as a way out of the Liar Paradox.
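The paraconsistent treatment can be made concrete with a sketch of Priest's three-valued logic LP, the standard setting for this way out; the numeric encoding of the values is an arbitrary choice made here for illustration, not anything drawn from the text above.

```python
# A sketch of Priest's logic LP: three values F < B < T, where 'B'
# ("both true and false") is designated, i.e. counts as "true enough"
# for an inference to preserve. Encoding F, B, T as 0, 1, 2 is an
# illustrative assumption.

T, B, F = 2, 1, 0
DESIGNATED = {T, B}

def neg(v):
    return 2 - v             # swaps T and F, leaves B fixed

def conj(a, b):
    return min(a, b)

# The Liar, L = not-L, requires a value that is its own negation;
# only B qualifies, so the Liar comes out both true and false.
liar = B
assert neg(liar) == liar

# Explosion fails: take p = B and q = F. The premise p & not-p is
# designated, yet the conclusion q is not, so (p & ~p) |- q is invalid.
p, q = B, F
premise = conj(p, neg(p))
explosion_valid = not (premise in DESIGNATED and q not in DESIGNATED)
print(explosion_valid)       # False
```

The design point is that limiting the damage of the contradiction comes down to letting the Liar's value B be preserved by inference without thereby designating everything.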

4. Conclusion

Russell, Tarski, Kripke, Priest, Barwise and Etchemendy (among others) deserve credit for providing a philosophical justification for their proposed solutions while also providing a formal treatment in symbolic logic that shows in detail both the character and implications of their proposed solutions. The theories of Russell and of Quine-Tarski do solve the Strengthened Liar, but at the cost of assigning complex “levels” to the relevant sentences, although the Quine-Tarski solution does not take Russell’s radical step of ruling out all self-reference. Kripke’s elegant and careful treatment of the Classical Liar stumbles on the Strengthened Liar and reveals why that paradox deserves its name.  Barwise and Etchemendy’s way out avoids these problems, but it requires accepting the idea that no sentence can be used to say anything about the whole world including the semantics of our language. Priest’s way out requires giving up our intuition that no context-free, unambiguous sentence is both true and false.

The interesting dispute continues over which is the best way out of the Liar Paradox—the best way to preserve the most important intuitions we have about semantics while avoiding semantic incoherence. In this vein, Hilary Putnam draws the following conclusion:

If you want to say something about the liar sentence, in the sense of being able to give final answers to the questions “Is it meaningful or not? And if it is meaningful, is it true or false? Does it express a proposition or not? Does it have a truth-value or not? And which one?” then you will always fail. In closing, let me say that even if Tarski was wrong (as I believe he was) in supposing that ordinary language is a theory and hence can be described as “consistent” or “inconsistent,” and even if Kripke and others have shown that it is possible to construct languages that contain their own truth-predicates, still, the fact remains that the totality of our desires with respect to how a truth-predicate should behave in a semantically closed language, in particular, our desire to be able to say without paradox of an arbitrary sentence in such a language that it is true, or that it is false, or that it is neither true nor false, cannot be adequately satisfied. The very act of interpreting a language that contains a liar sentence creates a hierarchy of interpretations, and the reflection that this generates does not terminate in an answer to the questions “Is the liar sentence meaningful or meaningless, or if it is meaningful, is it true or false?” (Putnam 2000)

See also Logical Paradoxes.

5. References and Further Reading

For further reading on the Liar Paradox that provides more of an introduction to it while not presupposing a strong background in symbolic logic, the author recommends the article below by Mates, plus the first chapter of the Barwise-Etchemendy book, and then chapter 9 of the Kirkham book. The rest of this bibliography is a list of contributions to research on the Liar Paradox, and all members of the list require the reader to have significant familiarity with the techniques of symbolic logic. In the formal, symbolic tradition, other important researchers in the last quarter of the 20th century were Burge, Gupta, Herzberger, McGee, Parsons, Routley, Skyrms, van Fraassen, and Yablo.

  • Barwise, Jon and John Etchemendy. The Liar: An Essay in Truth and Circularity, Oxford University Press, 1987.
  • Beall, J.C. (2001). “Is Yablo’s Paradox Non-Circular?” Analysis 61, no. 3, pp. 176-87.
  • Burge, Tyler. “Semantical Paradox,” Journal of Philosophy, 76 (1979), 169-198.
  • Dowden, Bradley. “Accepting Inconsistencies from the Paradoxes,” Journal of Philosophical Logic, 13 (1984), 125-130.
  • Gupta, Anil. “Truth and Paradox,” Journal of Philosophical Logic, 11 (1982), 1-60. Reprinted in Martin (1984), 175-236.
  • Herzberger, Hans. “Paradoxes of Grounding in Semantics,” Journal of Philosophy, 68 (1970), 145-167.
  • Kirkham, Richard. Theories of Truth: A Critical Introduction, MIT Press, 1992.
  • Kripke, Saul. “Outline of a Theory of Truth,” Journal of Philosophy, 72 (1975), 690-716. Reprinted in (Martin 1984).
  • Martin, Robert. The Paradox of the Liar, Yale University Press, 1970; 2nd ed., Ridgeview Press, 1978.
  • Martin, Robert. Recent Essays on Truth and the Liar Paradox, Oxford University Press, 1984.
  • Martin, Robert and Peter Woodruff. “On Representing ‘True-in-L’ in L,” Philosophia, 5 (1975), 217-221.
  • Mates, Benson. “Two Antinomies,” in Skeptical Essays, The University of Chicago Press, 1981, 15-57.
  • McGee, Vann. Truth, Vagueness, and Paradox: An Essay on the Logic of Truth, Hackett Publishing, 1991.
  • Priest, Graham. “The Logic of Paradox,” Journal of Philosophical Logic, 8 (1979), 219-241; and “Logic of Paradox Revisited,” Journal of Philosophical Logic, 13 (1984), 153-179.
  • Priest, Graham, Richard Routley, and J. Norman (eds.). Paraconsistent Logic: Essays on the Inconsistent, Philosophia-Verlag, 1989.
  • Prior, Arthur. “Epimenides the Cretan,” Journal of Symbolic Logic, 23 (1958), 261-266; and “On a Family of Paradoxes,” Notre Dame Journal of Formal Logic, 2 (1961), 16-32.
  • Putnam, Hilary. Realism with a Human Face, Harvard University Press, 1990.
  • Putnam, Hilary. “Paradox Revisited I: Truth.” In Gila Sher and Richard Tieszen, eds., Between Logic and Intuition: Essays in Honor of Charles Parsons, Cambridge University Press,  (2000), 3-15.
  • Quine, W. V. O. “The Ways of Paradox,” in his The Ways of Paradox and Other Essays, rev. ed., Harvard University Press, 1976.
  • Russell, Bertrand. “Mathematical Logic as Based on the Theory of Types,” American Journal of Mathematics, 30 (1908), 222-262.
  • Russell, Bertrand. Logic and Knowledge: Essays 1901-1950, ed. by Robert C. Marsh, George Allen & Unwin Ltd. (1956).
  • Skyrms, Brian. “Return of the Liar: Three-valued Logic and the Concept of Truth,” American Philosophical Quarterly, 7 (1970), 153-161.
  • Strawson, P. F. “Truth,” in Analysis, 9, (1949).
  • Tarski, Alfred. “The Concept of Truth in Formalized Languages,” in Logic, Semantics, Metamathematics, pp. 152-278, Clarendon Press, 1956.
  • Tarski, Alfred. “The Semantic Conception of Truth and the Foundations of Semantics,” in Philosophy and Phenomenological Research, Vol. 4, No. 3 (1944), 341-376.
  • Van Fraassen, Bas. “Truth and Paradoxical Consequences,” in (Martin 1970).
  • Woodruff, Peter. “Paradox, Truth and Logic Part 1: Paradox and Truth,” Journal of Philosophical Logic, 13 (1984), 213-231.
  • Wittgenstein, Ludwig. Remarks on the Foundations of Mathematics, Basil Blackwell, 3rd edition, 1978.
  • Yablo, Stephen. (1993). “Paradox without Self-Reference.” Analysis 53: 251-52.

Author Information

Bradley Dowden
Email: dowden@csus.edu
California State University, Sacramento
U. S. A.

Bolzano, Bernard: Mathematical Knowledge

Bernard Bolzano: Philosophy of Mathematical Knowledge

BolzanoIn Bernard Bolzano’s theory of mathematical knowledge, properties such as analyticity and logical consequence are defined on the basis of a substitutional procedure that comes with a conception of logical form that prefigured contemporary treatments such as those of Quine and Tarski. Three results are particularly interesting: the elaboration of a calculus of probability, the definition of (narrow and broad) analyticity, and the definition of what it is for a set of propositions to stand in a relation of deducibility (Ableitbarkeit) with another. The main problem with assessing Bolzano's notions of analyticity and deducibility is that, although they offer a genuinely original treatment of certain kinds of semantic regularities, contrary to what one might expect they do not deliver an account of either epistemic or modal necessity. This failure suggests that Bolzano does not have a workable account of either deductive knowledge or demonstration. Yet, Bolzano’s views on deductive knowledge rest on a theory of grounding (Abfolge) and justification whose role in his theory is to provide the basis for a theory of mathematical demonstration and explanation whose historical interest is undeniable.

Table of Contents

  1. His Life and Publications
  2. The Need for a New Logic
  3. Analyticity and Deducibility
  4. Grounding
  5. Objective Proofs
  6. Conclusion
  7. References and Further Reading

1. His Life and Publications

Bernard Placidus Johann Nepomuk Bolzano was born on 5 October 1781 in Prague. He was the son of an Italian art merchant and of a German-speaking Czech mother. His early schooling was unexceptional: private tutors and education at the lyceum. In the second half of the 1790s, he studied philosophy and mathematics at the Charles-Ferdinand University. He began his theology studies in the Fall of 1800 and simultaneously wrote his first mathematical treatise. When he completed his studies in 1804, two university positions were open in Prague, one in mathematics, the other one in the “Sciences of the Catholic Religion.” He obtained both, but chose the second: Bolzano adhered to the Utilitarian principle and believed that one must always act, after considering all possibilities, in accordance with the greater good. He was hastily ordained, obtained his doctoral degree in philosophy and began work in his new university position in 1805. His professional career would be punctuated by sickness—he suffered from respiratory illness—and controversy. Bolzano’s liberal views on public matters and politics would serve him ill in a context dominated by conservatism in Austria. In 1819, he was absurdly accused of “heresy” and subjected to an investigation that would last five years after which he was forced to retire and banned from publication. From then on, he devoted himself entirely to his work.

Bolzano’s Considerations on Some Objects of Elementary Geometry (1804) received virtually no attention at the time it was published, and the few commentators who have appraised his early work concur in saying that its interest is merely historical. (Russ 2004, Sebestik 1992; see also Waldegg 2001). Bolzano’s investigations in geometry did not anticipate modern axiomatic approaches to the discipline--he was attempting to prove Euclid’s parallel postulate--and did not belong to the trend that would culminate with the birth of non-Euclidean geometries, the existence of which Bolzano’s contemporary Johann Carl Friedrich Gauss (1777-1855) claimed to have discovered and whose first samples were found in the works of Nikolai Lobatchevski (1792-1856) and Janos Bolyai (1802-1860), whom Bolzano did not read. (See Sebestik 1992, 33-72 for a discussion of Bolzano’s contribution to geometry; see also Russ 2004, 13-23). As Sebestik explains (1992, 35 note), Bolzano never put into question the results to which he had come in (1804).

By contrast, Bolzano is renowned for his anticipation of significant results in analysis. Three booklets that appeared in 1816-17 have drawn the attention of historians of mathematics, one of which, the Pure Analytic Proof, was reedited in 1894 and 1905. (Rusnock 2000, 56-86; 158-198) At the time of their publication, however, they attracted hardly any notice. Only one review is known (see Schubring 1993, 43-53). According to Grattan-Guinness (1970), Cauchy plagiarized (Bolzano 1817a) in his Cours d’Analyse, but this hypothesis is disputed in (Freudenthal 1971) and (Sebestik 1992, 107ff). This lack of attention might explain why Bolzano chose to resume the philosophical and methodological investigations he had initiated in the Contributions to a Better Founded Exposition of Mathematics (1810) a decade earlier. At the end of the 1830s, after he had worked out the logical basis for his system in the Theory of Science (1837), Bolzano returned once more to mathematics and spent the last years of his life working on the Theory of Quantities. The latter remained unpublished until after his death, and only excerpts appeared in print in the 19th century, most notably the Paradoxes of the Infinite (1851). The Theory of Functions and the Pure Theory of Numbers were edited by the Czech mathematician Karel Rychlik and published in 1930 and 1931 respectively by a commission from the Royal Bohemian Academy of Science. All these works have now been translated into English (See Russ 2004).

2. The Need for a New Logic

Bolzano understood the main obstacle to the development of mathematics in his time to be the lack of proper logical resources. He believed syllogistic (that is, traditional Aristotelian logic) was utterly unfit for the purpose. He saw the task of the speculative part of mathematics, which belongs at once to philosophy, as consisting in providing a new logic, following which a reform of all sciences should take place. As Bolzano conceived of it, philosophy of mathematics is one aspect of a more general concern for logic, methodology, the theory of knowledge, and, in general, the epistemological foundation of deductive sciences, “purely conceptual disciplines” as Bolzano calls them, that unfolds throughout his mathematical work and forms the foremost topic of his philosophy. The latter falls into two phases: the period of the Contributions, which extends throughout the 1810s, and the period of the Theory of Science, which was written in the course of the 1820s and published anonymously in 1837. In the Contributions, Bolzano’s undertaking remained largely programmatic and by no means definitive. By the time he was writing the Theory of Science he had revised most of his views, such as those on the multiple copula, analyticity, and necessity. (See Rusnock 2000, 31-55, for discussion.) Nonetheless, the leitmotiv of Bolzano’s mature epistemology already comes through in 1810, namely his fundamental disagreement with the “Kantian Theory of Construction of Concepts through Intuitions” to which he devoted the Appendix of the Contributions. (See Rusnock 2000, 198-204 for an English translation; see also Russ 2004, 132-137). In this, Bolzano can be seen to have anticipated an important aspect of later criticisms of Kant, Russell’s for instance (1903, §§ 4, 5, 423, 433-4). As Bolzano saw it, an adequate account of demonstration excludes appeal to non-conceptual inferential steps, intuitions or any other proxy for logic.

In the Theory of Science, Bolzano’s epistemology of deductive disciplines is based on two innovations. On the one hand, properties such as analyticity or deducibility (Ableitbarkeit) are defined not for thoughts or sentences but for what Bolzano conceives to be the objective content of the former and the meaning of the latter, which he calls "propositions in themselves" (Sätze an sich) or "propositions." On the other hand, properties such as analyticity and deducibility are “formal” in that they are features of sets of propositions defined by a fixed vocabulary; they come to the fore through the application of a substitution method that consists in arbitrarily “varying” determinate components in a proposition so as to derive different types of semantic regularities.

3. Analyticity and Deducibility

Bolzano’s theory of analyticity is a favored topic in the literature. (Cf. Bar-Hillel 1950; Etchemendy 1988; Künne 2006; Lapointe 2000, 2008; Morscher 2003; Neeman 1970; Proust 1981, 1989; Textor 2000, 2001) This should be no surprise. For one thing, by contrast to the Kantian definition, Bolzano’s allows us to determine not only whether a grammatical construction of the form subject-predicate is analytic, as Kant has it, but whether any construction is analytic or not. This includes hypotheticals, disjunctions, conjunctions, and so forth, but also any proposition that presents a syntactic complexity that is foreign to traditional (that is, Aristotelian) logic. Analyticity is not tied to any “syntactic” conception of “logical form.” It is a relation pertaining to the truth of propositions and not merely to their form or structure. Let ‘Aij…(S)’ stand for "The proposition S is analytic with respect to the variable components i, j…"

Aij…(S) iff:

(i)   i, j, … can be varied so as to yield at least one objectual substitution instance of S

(ii) All substitution instances of S have the same truth-value as S

where a substitution instance is “objectual” if the concept that is designated by the subject has at least one object. On this account, propositions can be analytically true or analytically false.

Although the idea that analyticity should be defined on the basis of a purely semantic criterion is in itself a great anticipation, Bolzano’s conception of analyticity fails in other respects. For one, it does not provide an account of what it means for a proposition to be true by virtue of meaning alone and to be knowable as such. “… is analytic with respect to …” is not a semantic predicate of the type one would expect, but a variable-binding operator. A statement ascribing analyticity to a given propositional form, say "X, who is a man, is mortal," is true, if it is, because every substitution instance of "X, who is a man, is mortal" that also has objectuality is true. Bolzano’s definition of analyticity offers a fairly clear description of substitutional quantification: to say that a propositional form is analytic is to say that all its substitution instances are true. Yet because he deals not primarily with sentences and words but with their meanings, that is, with ideas and propositions in themselves, and because there is at least one idea for every object, there is in principle a “name” for every object. For this reason, although Bolzano’s approach to quantification is substitutional, he is not liable to the reproach that his interpretation of the universal quantifier cannot account for every state of the world. The resources he has at his disposal are in principle as rich as necessary to provide a complete description of the domain the theory is about.
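Bolzano's substitutional test can be sketched computationally: vary the component 'X' in "X, who is a man, is mortal" over a domain and check his two conditions. The toy domain below and the properties assigned to its members are invented for illustration; Bolzano's substituends are ideas, not database entries.

```python
# A sketch of Bolzano's substitution method for analyticity. An instance
# is 'objectual' when the subject idea ('X, who is a <subject>') has an
# object; the form is analytic when at least one instance is objectual
# and all objectual instances agree in truth value.

DOMAIN = {
    "Caius":      {"man", "mortal"},
    "Sempronia":  {"man", "mortal"},
    "Bucephalus": {"horse", "mortal"},   # invented example objects
}

def instance_truth(x, subject, predicate):
    """Truth value of '<x>, who is a <subject>, is a <predicate>',
    or None when the instance is not objectual."""
    if subject not in DOMAIN[x]:
        return None
    return predicate in DOMAIN[x]

def analytic(subject, predicate):
    values = [instance_truth(x, subject, predicate) for x in DOMAIN]
    objectual = [v for v in values if v is not None]
    # Condition (i): some objectual instance exists.
    # Condition (ii): all objectual instances share one truth value.
    return bool(objectual) and len(set(objectual)) == 1

print(analytic("man", "mortal"))   # True: analytically true
print(analytic("man", "horse"))    # True: analytically false (all instances false)
print(analytic("mortal", "man"))   # False: instances differ in truth value
```

The second call illustrates the closing remark of the definition: a form all of whose objectual instances are false is analytically false, and it passes the test just as an analytically true form does.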

Bolzano’s epistemology rests on a theory of logical consequence that is twofold: an account of truth preservation that is epitomized in his notion of “deducibility” (Ableitbarkeit) on the one hand (see Siebel 1996, 2002, 2003; van Benthem 1985, 2003; Etchemendy 1990), and an account of “objective grounding” (Abfolge) on the other (see Tatzel 2002, 2003; see also Thompson 1981; Corcoran 1975). The notion of deducibility presents a semantic account of truth-preservation that is neither trivial nor careless. The same holds for his views on probability. Likewise his attempt at a definition of grounding constitutes the basis of an account of a priori knowledge and mathematical explanations whose interest has been noticed by some authors, and in some cases even vindicated (Mancosu 1999).

As Bolzano presents it, although analyticity is defined for individual propositional forms, deducibility is a property defined for sets of those forms. Let "Dij…(T, T’, T’’, … ; S, S’, S’’, …)" stand for "The set of propositions T, T’, T’’, … is deducible from the set of propositions S, S’, S’’, … with respect to i, j, …." Bolzano defines deducibility in the following terms:

Dij…(T, T’, T’’, … ; S, S’, S’’, …) iff

(i)         i, j, … can be varied so as to yield at least one true substitution instance of S, S’, S’’, … and T, T’, T’’, …

(ii)        whenever S, S’, S’’… is true, T, T’, T’’,… is also true.
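The two clauses can be sketched for a toy setting in which the varied ideas are propositional letters and "substitution" means assigning them truth values; this modeling is an assumption made for illustration, since Bolzano's substituends are ideas rather than truth values.

```python
# A sketch of Bolzano-style deducibility: clause (i) is the
# compatibility condition, clause (ii) is truth preservation.
from itertools import product

def deducible(premises, conclusion, variables):
    """premises and conclusion are predicates of an assignment dict."""
    assignments = [dict(zip(variables, values))
                   for values in product([True, False], repeat=len(variables))]
    # Clause (i): some substitution makes premises and conclusion all true.
    compatible = any(all(p(a) for p in premises) and conclusion(a)
                     for a in assignments)
    # Clause (ii): every substitution making the premises true makes
    # the conclusion true.
    preserving = all(conclusion(a) for a in assignments
                     if all(p(a) for p in premises))
    return compatible and preserving

# q is deducible from p together with 'if p then q':
print(deducible([lambda a: a["p"], lambda a: (not a["p"]) or a["q"]],
                lambda a: a["q"], ["p", "q"]))   # True

# But nothing is deducible from 'p and not-p', since clause (i),
# the compatibility condition, can never be satisfied:
print(deducible([lambda a: a["p"] and not a["p"]],
                lambda a: a["q"], ["p", "q"]))   # False
```

The second call anticipates a feature discussed below: it is the compatibility clause, not truth preservation, that blocks inferences from a contradiction.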

Bolzano’s discussion of deducibility is exhaustive. It extends over thirty-six paragraphs, and he draws a series of theorems from his definition. The most significant theorems are the following:

  • Dij…(T, T’, T’’, …; S, S’, S’’, …) → ¬Dij…(S, S’, S’’, …; T, T’, T’’, …) (asymmetry)
  • (Dij…(T, T’, T’’, …; S, S’, S’’, …) & Dij…(R, R’, R’’, …; T, T’, T’’, …)) → Dij…(R, R’, R’’, …; S, S’, S’’, …) (transitivity)

In addition, assuming that the S, S’, S’’, … share at least one variable idea whose substitution makes them all true at the same time, then:

  • Dij…(S, S’, S’’, …; S, S’, S’’, …) (reflexivity)

As regards reflexivity, the assumption that the S, S’, S’’, … must share at least one such variable follows from the fact that whenever S, S’, S’’, … contain a falsehood S that does not share at least one variable idea i, j, … with the conclusion T, T’, T’’, …, there is no substitution that can make both the premises and the conclusion true at the same time, and the compatibility constraint is not fulfilled.

On Bolzano’s account, fully-fledged cases of deducibility include both formally valid arguments as well as materially valid ones, for instance:

Caius is rational

is deducible with respect to ‘Caius’, ‘man’ and ‘rational’ from

Caius is a man

Men are rational

and

Caius is rational

is deducible with respect to ‘Caius’ from

Caius is a man.

There is a sharp distinction to be drawn between arguments of the former kind and arguments of the latter. Assuming a satisfactory account of logical form, in order to know that the conclusion follows from the premises in arguments of the former kind, one only needs to consider their structure or form; no other kind of knowledge is required. In the latter argument, however, in order to infer from the premise to the conclusion, one must know more than its form. One also needs to understand the signification of ‘man’ and ‘rational’, since in order to know that Caius is rational one needs to know, in addition to the fact that Caius is a man, that all men are rational. There is good evidence that Bolzano was aware of some such distinction between arguments that preserve truth and arguments that do so in virtue of their “form.” Unfortunately, Bolzano’s definition of deducibility does not systematically uphold the distinction. Since deducibility applies across the board to all inferences that preserve truth from premises to conclusion with respect to a given set of ideas, it does not of itself guarantee that an argument is formally valid, and the notion of deducibility turns out to be flawed: it makes it impossible to extend our knowledge in the way we would expect. If we know, for instance, that all instances of modus ponens are logically valid, we can infer from two propositions whose truth we’ve recognized:

If Caius is a man, then he is mortal

Caius is a man

a new proposition:

Caius is mortal

whose truth we might not have previously known. Bolzano’s account of deducibility does not allow one to extend one’s knowledge in this way since in order to know for every substitution instance that truth is preserved from the premises to the conclusion one has to know that the premises are true and that the conclusion is true.

On Bolzano’s account, in order for a conclusion to be deducible from a given set of premises, there must be at least one substitution that makes both the premises and the conclusion true at once. He calls this the “compatibility” (Verträglichkeit) condition, a requirement that is not reflected in classical conceptions of consequence. As a result, Bolzano’s program converges with many contemporary attempts at a definition of non-classical notions of logical consequence. Given the compatibility condition, although a logical truth may follow from any (set of) true premises (with respect to certain components), nothing, as opposed to everything, is deducible from a contradiction. The compatibility condition invalidates the ex contradictione quodlibet or explosion principle. The reason for this is that no substitution for ‘p’ in “‘q’ is deducible from ‘p and non-p’” can fulfil the compatibility constraint; no interpretation of ‘p’ in ‘p and non-p’ can yield a true variant, and hence there are no ideas that can be varied so as to make both the premises and the conclusion true at once. This has at least two remarkable upshots. First, the compatibility constraint invalidates the law of contraposition. Whenever one of S, S’, S’’, … is analytically true, that is, when all of its substitution instances are true, we cannot infer from:

Dij…(T’ T’, T’’, … ; S, S’, S’’, …)

to

Dij…(¬S, ¬S’, ¬S’’, …; ¬T, ¬T’, ¬T’’…)

since ¬S, ¬S’, ¬S’’, … then contain a contradiction, that is, an analytically false proposition. For instance,

Caius is a physician who specializes in the eyes

is deducible from

Every ophthalmologist is an ophthalmologist

Caius is an ophthalmologist

with respect to ‘ophthalmologist’. However,

It is not the case that every ophthalmologist is an ophthalmologist

It is not the case that Caius is an ophthalmologist

are not deducible with respect to the same component from:

It is not the case that Caius is a physician who specializes in the eyes.

Second, the compatibility condition makes Bolzano’s logic nonmonotonic. Whenever the premise added contains contradictory information, the conclusion no longer follows. While compatibility does not allow him to deal with all cases of defeasible inference, it allows Bolzano to account for cases that imply typicality considerations. It is typical of crows that they be black. Hence from the fact that x is a crow we can infer that x is black. On Bolzano’s account, if we add a premise that describes a new case contradicting previous observations, say that this crow is not black, the conclusion no longer follows, since the inference does not fulfil the compatibility condition: no substitution can make both the premises and the conclusion true at the same time.

At many places Bolzano suggests that deducibility is a type of probabilistic inference, namely the limit case in which the probability of a proposition T relative to a set of premises S, S’, S’’, … = 1. Bolzano also calls inferences of this type "perfect inferences." More generally, the value of a probability inference from S, S’, S’’, … to T with respect to a set of variable ideas i, j, … is determined by comparing the number of cases in which the substitution of i, j, … yields true instances of both S, S’, S’’, … and T to the number of cases in which S, S’, S’’, … are true (with respect to i, j, …). Let’s assume that Caius is to draw a ball from a container in which there are 90 black balls and 10 white ones, and that the task is to determine the degree of probability of the conclusion "Caius draws a black ball." On Bolzano’s account, in order to determine the probability of the conclusion one must first establish the number n of admissible substitution instances K1, K2, …, Kn of the premise "Caius draws a ball" with respect to ‘ball.’ The number n of acceptable substitution instances of the premise is in general a function of the following considerations: (i) the probability of each of K1, K2, …, Kn is the same; (ii) only one of K1, K2, …, Kn can be true at once; (iii) taken together, they exhaust all objectual substitution instances of the premise. In this case, since there are 100 balls in the container, there are exactly 100 admissible substitution instances of the premise, namely K1: "Caius draws ball number 1," K2: "Caius draws ball number 2," …, K100: "Caius draws ball number 100." If the number of instances K1, K2, …, Kn is k and the number of cases in which "Caius draws a black ball" is deducible from "Caius draws a ball" is m, then the probability of "Caius draws a black ball" is the fraction m/k = 90/100 = 9/10.
In the case of deducibility, the number of cases in which the substitution yields true variants of both the premises and the conclusion is identical to the number of true admissible variants of the premises, that is, m = k and the probability is 1. If there is no substitution that makes both the premises and the conclusion true at the same time, then the degree of probability of the conclusion is 0, that is, the conclusion is not deducible from the premises.
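The arithmetic of the ball example can be checked directly. The encoding below is a hypothetical rendering, on which the admissible substitution instances of "Caius draws a ball" are simply the 100 individual balls:

```python
from fractions import Fraction

# Hypothetical rendering of Bolzano's ball example: 90 black, 10 white.
balls = ["black"] * 90 + ["white"] * 10

k = len(balls)                               # admissible instances of the premise
m = sum(1 for b in balls if b == "black")    # of these, instances where the conclusion also holds

# Probability of "Caius draws a black ball" is m/k = 90/100 = 9/10.
assert Fraction(m, k) == Fraction(9, 10)

# Deducibility as the limit case: were every ball black, m would equal k
# and the probability of the conclusion would be 1 (a "perfect inference").
all_black = ["black"] * 100
m2 = sum(1 for b in all_black if b == "black")
assert Fraction(m2, len(all_black)) == 1
```

The second assertion corresponds to the limit case discussed above: when every admissible variant of the premise also verifies the conclusion, the probabilistic inference collapses into deducibility.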

4. Grounding

Bolzano did not think that his account of truth preservation exhausted the topic of inference, since it does not account for what is specific to the knowledge we acquire in mathematics. Such knowledge he considered to be necessary and a priori, two qualities that relations defined on the basis of the substitutional method do not have. Bolzano called “grounding” (Abfolge) the relation that defines structures in which propositions relate as grounds to their consequences. As Bolzano conceived of it, my knowing that ‘p’ grounds ‘q’ has explanatory virtue: grounding aims at epitomizing certain intuitions about scientific explanation and seeks to explain, roughly, what, according to Bolzano, the truly scientific mind ought to mean when, in the conduct of a scientific inquiry, she uses the phrase "…because…" in response to the question "why…?" Since, in addition, the propositions that belong to “grounding” orders such as arithmetic and geometry are invariably true and purely conceptual, grasping the relations among such propositions invariably warrants knowledge that does not rest on extra-conceptual resources, a move that allowed Bolzano to debunk the Kantian theory of pure intuition.

Bolzano’s notion of grounding is defined by a set of distinctive features. For one thing, grounding is a unique relation: for every true proposition that is not primitive, there is a unique tree-structure that relates it to the axioms from which it can be deduced. That there is such a unique objective order is an assumption on Bolzano’s part that is in many ways antiquated, but it cannot be ignored. Uniqueness follows from two distinctions Bolzano makes. On the one hand, Bolzano distinguishes between simple and complex propositions: a ground (or consequence) may or may not be complex. A complex ground is composed of a number of different truths that are in turn composed of a number of different primitive concepts. On the other hand, Bolzano distinguishes between the complete ground or consequence of a proposition and the partial ground or consequence thereof. On this basis, he claims that the complete ground of a proposition is never more complex than its complete consequence. That is, the propositions involved in the complete ground of a proposition are not composed of more distinct primitive concepts than its complete consequence. Given that Bolzano thinks that the grounding order is ultimately determined by a finite number of simple concepts, this restriction implies that the regression in the grounding order from a proposition to its ground is finite. Ultimately, the regression leads to true primitive propositions, that is, axioms whose defining characteristic is their absolute simplicity.

Note that the regression to primitive propositions is not affected by the fact that the same proposition may appear at different levels of the hierarchy. Although the grounding order is structured vertically and cannot have infinitely many distinct immediate antecedents, the horizontal structure needs, for its part, to allow for recursion in order to support basic inductive mathematical demonstrations. Provided that the recurring propositions do not appear on the same branch of the tree, Bolzano is in a position to avoid loops that would make it impossible to guarantee that we ever arrive at primitive propositions, or that there be primitive propositions in the first place.

Bolzano draws a distinction between cases in which what we have is the immediate ground for the truth of a proposition and cases in which the ground is mediated (implicitly or explicitly) by other truths. When Bolzano speaks of grounding, what he has in mind is invariably immediate grounding, and he understands the notion of mediate grounding as a derivative notion: it is the transitive closure of the more primitive notion of immediate grounding. p is the mediate consequence of the propositions Ψ1, …, Ψn if and only if there is a chain of immediate consequences starting with Ψ1, …, Ψn and ending with p. p is the immediate consequence of Ψ1, …, Ψn if there is no intermediate logical step between Ψ1, …, Ψn and p.
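Read as a recursive definition, mediate grounding is just the transitive closure of immediate grounding. A minimal sketch, in which the propositions and the `immediate` table are invented for illustration and are not Bolzano’s own examples:

```python
# Hypothetical immediate-grounding table: each key is a proposition, each
# value a list of its (complete) immediate grounds, given as sets of premises.
immediate = {
    "q": [{"p1", "p2"}],   # p1 and p2 together immediately ground q
    "r": [{"q"}],          # q immediately grounds r
}

def mediately_grounds(premises, conclusion, seen=frozenset()):
    """True if `conclusion` is reachable from `premises` by a chain of
    immediate grounding steps, that is, by the transitive closure of
    the immediate relation. The `seen` set blocks loops, mirroring the
    requirement that a proposition not recur on the same branch."""
    if conclusion in seen:
        return False
    for ground in immediate.get(conclusion, []):
        if ground <= premises:                     # immediate case
            return True
        if all(mediately_grounds(premises, g, seen | {conclusion})
               for g in ground):                   # every member of the ground
            return True                            # is itself reachable
    return False

assert mediately_grounds({"p1", "p2"}, "q")   # immediate consequence
assert mediately_grounds({"p1", "p2"}, "r")   # mediate consequence, via q
assert not mediately_grounds({"p1"}, "q")     # incomplete ground does not suffice
```

The `seen` parameter reflects the constraint discussed above: recurring propositions are tolerated across the tree but not on a single branch, which is what guarantees that the regression terminates.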

Grounding is not reflexive: p cannot be its own ground, whether mediate or immediate. The non-reflexive character of grounding can be inferred from its asymmetry, another of Bolzano's assumptions. If grounding were reflexive, then the truth that p could be grounded on itself; but given that if p grounds q it is not the case that q grounds p, this would imply a contradiction, since by substitution p could at once ground itself and not ground itself. Irreflexivity allows Bolzano to deny the traditional tenet according to which some propositions, such as axioms, are grounded in themselves. Bolzano explains that this is a loose way of talking, that those who maintain this idea are unaware of the absurdity of saying that a proposition is its own consequence, and that the main motivation behind this claim is the attempt to maintain, unnecessarily, the idea that every proposition has a ground across the board. According to Bolzano, however, the ground for the truth of a primitive proposition does not lie in itself but in the concepts of which this proposition consists.

One important distinction to be made between deducibility and grounding, as Bolzano conceives of them, rests in the fact that while grounding is meant to support the idea that a priori knowledge is axiomatic, that there are (true) primitive, atomic propositions from which all other propositions in the system follow as consequences, deducibility does not have such an implication. Whether a proposition q is deducible from another proposition p is not contingent on q’s being ultimately derivable from the propositions from which p is derivable. That "Caius is mortal" is deducible from "Caius is a man" can be established independently of the truth that Caius is a finite being. Likewise, the possibility that deducibility be a special case of grounding is unacceptable for Bolzano. Not all cases of deducibility are cases of grounding. For instance,

It is warmer in the summer than in the winter

is deducible from

Thermometers, if they function properly, are higher in the summer than in the winter

but it is not an objective consequence of the latter in Bolzano’s sense. On the contrary, the reason why thermometers are higher in the summer is that it is warmer, so that, in the previous example, the order of grounding is reversed. There are cases in which true propositions that stand in a relation of deducibility also stand in a relation of grounding, what Bolzano calls "formal grounding." It is not difficult to see what the interest of the latter could be. Strictly speaking, in an inference that fits both the notion of grounding and that of deducibility, the conclusion follows from the premises both necessarily (by virtue of its being a relation of grounding) and as a matter of truth preservation (by virtue of its being an instance of deducibility). Formal grounding, however, presents little interest: it is not an additional resource of Bolzano’s logic but a designation for types of inferences that have the specificity of suiting two definitions at once. I can only know that an inference fits the definition of formal grounding if I know that it fits both that of grounding and that of deducibility; once I know that it fits both, to say that it is a case of formal grounding does not teach me much I did not already know.

It could be tempting to think that grounding is a kind of deducibility, namely the case in which the premises are systematically simpler than the conclusion. Bolzano suggests something similar when he claims that grounding might not, in the last instance, be more than an ordering of truths by virtue of which we can deduce, from the smallest number of simple premises, the largest possible number of the remaining truths as conclusions. This would require us, however, to ignore important differences between deducibility and grounding. When I say that "The thermometer is higher in the summer" is deducible from "It is warmer in the summer," I am making a claim about the fact that every time "It is warmer in X" yields a true substitution instance, "The thermometer is higher in X" yields one as well. When I say that "The thermometer is higher in the summer" is grounded in "It is warmer in the summer," I am making a claim about determinate conceptual relations within a given theory. I am saying that, given what it means to be warmer and what it means to be a thermometer, it cannot be the case that it be warm and that the thermometer not be high. Of course the theory can be wrong, but assuming that it is true, the relation is necessary, since it follows from the (true) axioms of the theory. In this respect, a priori knowledge can only be achieved in deductive disciplines when we grasp the necessary relations that subsist among the (true and purely conceptual) propositions they involve. If I know that a theorem follows from an axiom or a set of them, I know so with necessity.

5. Objective Proofs

Bolzano’s peculiar understanding of grounding is liable to a series of problems, both exegetical and theoretical. Nonetheless, the account of mathematical demonstration that it underlies, of what he terms "Begründungen" (objective proofs), is of vast historical interest. Three notions form the basis of Bolzano’s account of mathematical and deductive knowledge in general: grounding (Abfolge), objective justification (objektiver Erkenntnisgrund) and objective proof (Begründung). The structure of the theory is the following: (i) grounding is a relation that subsists between true propositions independently of our epistemic access to them, though we may grasp objective grounding relations; (ii) the possibility of grasping the latter is also the condition for our having objective justifications for our beliefs, as opposed to merely “subjective” ones; and (iii) objective proofs are meant to cause the agent to have objective justifications in this sense. With respect to (ii), Bolzano’s idea is explicitly Aristotelian: Bolzano believes that whenever an agent grasps p and grasps the grounding relation between p and q, she also knows the ground for the existence of q and therefore putatively why q is true, namely because p. If we follow (iii), the role of a (typically) linguistic or schematic representation of (i) is to cause the agent to have (ii). According to Bolzano, objective proofs succeed in providing agents with an objective justification for their relevant beliefs because they make the objective ground of the propositions that form the content of these beliefs epistemically accessible to the agent. As Bolzano sees it, the typical objective proof is devised so as to reliably cause the reader or hearer to have an objective justification for the truth of the proposition.
The objective proof is merely ‘reliable’ since whether I do acquire objective knowledge upon surveying the proof in question depends in part on my background knowledge and in part on my overall ability to process the relevant inferences, and the latter, according to Bolzano’s theory of cognition, is mostly a function of my having been previously acquainted with many inferences of different types. The more accustomed I am to drawing inferences, the more reliably the objective proof is likely to cause in me the relevant objective justification.

According to Bolzano, there are good reasons why we should place strong constraints on mathematical demonstration, and in everyday practice favor the objective proofs that provide us with objective mathematical knowledge. It would be wrong however to assume that on his account mathematical knowledge can only be achieved via objective proofs. Objective proofs are not the only type of demonstration in Bolzano’s theory of knowledge, nor indeed the only bona fide one. Bolzano opposes objective proofs, that is, proofs that provide an objective justification, to what he calls Gewissmachungen (certifications). Certifications, according to Bolzano, are also types of demonstrations (there are many different species thereof) in the sense that they too are meant to cause agents to know a certain truth p on the basis of another one q. When an agent is caused to know that something is true on the basis of a certification, the agent has a subjective, as opposed to an objective, justification for his or her belief. Bolzano’s theory of certification and subjective justification is an indispensable element of his account of empirical knowledge. Certifications are ubiquitous in empirical sciences such as medicine. Medical diagnosis relies on certifications in Bolzano’s sense. Symptoms are typically visible effects, direct or indirect, of diseases that allow us to recognize them. When we rely on symptoms to identify a disease, we thus never know this disease through its objective ground. Likewise, subjective proofs also play an important role in Bolzano’s account of mathematical knowledge. As Bolzano sees it, in order to have an occurrent (and not a merely dispositional) cognitive attitude towards a given propositional content, an agent must somehow be causally affected. This may be brought about in many ways.
Beliefs and ideas arise in our mind most of the time in a more or less sophisticated, chaotic and spontaneous way, on the basis of mental associations and/or causal interactions with the world. The availability of a linguistic object that represents the grounding relation is meant to reliably cause objective knowledge, that is, to bring one’s interlocutor to have occurrent objective knowledge of a certain truth. This may however not be the best way to cause the given belief per se. It might be that in order to cause me to recognize the truth of the intermediate value theorem, my interlocutor needs to resort to a more or less intuitive diagrammatic explanation, which is precisely what objective proofs exclude. Since, as Bolzano conceives of it, the purpose of demonstrations is primarily to cause the interlocutor to have a higher degree of confidence (Zuversicht) in one of his beliefs, and since Bolzano emphasizes the effectiveness of proofs over their providing objective justifications, objective proofs should not be seen as the only canonical or scientifically acceptable means to bring an agent to bestow confidence on a judgment. Besides, Bolzano warns us against the idea that one ought to use only logical or formal demonstrations, which might end up boring the interlocutor to distraction and have a rather adverse epistemic effect. Although Bolzano claims that we ought to use objective proofs as often as possible, he also recognizes that we sometimes have to take shortcuts or simply use heuristic creativity to cause our interlocutor to bestow confidence on the truths of mathematics, especially when the interlocutor has only partial and scattered knowledge of the discipline.

Objective proof, in addition to its epistemic virtue, introduces pragmatic constraints on demonstration that are meant to steer actual practices in deductive science. The idea that mathematical demonstrations ought to reflect the grounding order entails two things. First, it requires that an agent not deny that a proposition has an objective ground, and is thus inferable from more primitive propositions, every time this agent, perhaps owing to her medical condition or limited means of recognition, fails to recognize that the proposition has an objective ground. Second, it ensures that the demonstration procedure is not short-circuited by criteria such as intuition, evidence or insight. The requirement that mathematical demonstrations be objective proofs forbids that the agent’s inability to derive a proposition from more primitive ones be compensated by a non-grounding-related feature. In this connection, Mancosu speaks of the heuristic fruitfulness of Bolzano’s requirement on scientific exposition (Mancosu 1999, 436). Although Bolzano considered that objective proofs should be favored in mathematical demonstration, and despite the fact that he thought that only objective proofs have the advantage of letting us understand why a given proposition is true, he did not think that in everyday practice mathematical demonstrations always ought to be objective proofs. Bolzano thinks that there are situations in which it is legitimate to accept proofs that deliver only evidential knowledge. When it comes to setting out a mathematical theory, the main objective should be to cause the agent to have more confidence in the truth of the proposition to be demonstrated than he would have otherwise, or even merely to incite him to look for an objective justification by himself. Hence, given certain circumstantial epistemic constraints, Bolzano is even willing to concede that certain proofs can be reduced to a brief justification of one’s opinion.
Furthermore, though this would deserve further investigation, it is worth mentioning that Bolzano is not averse to reverting to purely inductive means, for instance, when it comes to mathematical demonstration. This may seem odd, but Bolzano has good reasons to avoid requiring that all our mathematical proofs provide us with objective and explanatory knowledge. For one thing, asking that all mathematical proofs be objective proofs would not be a reasonable requirement and, in particular, it would not be one that is always epistemically realizable. Given the nature of grounding, it would often require us to engage in the production of linguistic objects that have immense proportions. Since they are merely probable, Bolzano does think that evidential proofs need to be supplemented by “decisive” ones. One might want to argue that the latter reduce to objective proofs. If, upon surveying an objective proof, I acquire an objective justification, I cannot doubt the truth of the conclusion, and it is therefore decisively true. But it is hard to imagine that Bolzano would have thought that the linguistic representation of an inference from deducibility would be any less decisive. Consider this inference:

Triangles have two dimensions

is deducible from

Figures have two dimensions

Triangles are figures.

Not only is the inference truth-preserving, but the conclusion is also a conceptual truth. It is composed only of concepts, which, according to Bolzano, means that its negation would imply a contradiction and that it is therefore necessary. In mathematics and other conceptual disciplines, deducibility and grounding both have the epistemic particularity of yielding a belief that can be asserted with confidence. By contrast, according to Bolzano, though an agent need not always be mistaken whenever she asserts a proposition that stands to its premises in a mere relation of probability, she is at least liable to committing an error. Inferences whose premises are only probable can only yield a conclusion that is itself merely probable. As Bolzano sees it, confidence is a property of judgments that are indefeasible. The conclusion (perfectly) deduced from a set of a priori propositions cannot be defeated if only because, if I know its ground, I also know why it is true and necessarily so. Similarly, if p is true and if I know that q is deducible from p (and this holds a fortiori in the case in which p and q are conceptual truths), I have a warrant, namely the fact that I know that truth is preserved from premises to conclusion, and I cannot be mistaken about the truth of q.

6. Conclusion

The importance of Bolzano’s contribution to semantics can hardly be overestimated. The same holds for his contribution to the theoretical basis of mathematical practice. Far from ignoring epistemic and pragmatic constraints, Bolzano discusses them in detail, thus providing a comprehensive basis for a theory of mathematical knowledge that was aimed at supporting work in the discipline. As a mathematician, Bolzano was attuned to philosophical concerns that escaped the attention of most of his contemporaries and many of his successors. His theory is historically and philosophically interesting, and it deserves to be investigated further.

7. References and Further Reading

  • Bar-Hillel, Yehoshua (1950) “Bolzano's Definition of Analytic Propositions” Methodos,  32-55. [Republished in Theoria 16, 1950, pp. 91-117; reprinted in Aspects of language: Essays and Lectures on Philosophy of Language, Linguistic Philosophy and Methodology of Linguistics, Jerusalem, The Magnes Press 1970 pp. 3-28].
  • Benthem, Johan van (2003) “Is There Still Logic in Bolzano's Key?” in Bernard Bolzanos Leistungen in Logik, Mathematik und Physik, Edgar Morscher (ed.) Sankt Augustin, Academia, 11-34.
  • Benthem, Johan van (1985) “The Variety of Consequence, According to Bolzano”, Studia Logica 44/4, 389-403.
  • Benthem, Johan van (1984) Lessons from Bolzano. Stanford, Center for the Study of Language and Information, Stanford University, 1984.
  • Bolzano, Bernard (1969-…) Bernard Bolzano-Gesamtausgabe, dir. E. Winter, J. Berg, F. Kambartel, J. Louzil, B. van Rootselaar, Stuttgart-Bad Cannstatt, Frommann-Holzboog, 2 A, 12.1, introduction by Jan Berg.
  • Bolzano, Bernard (1976) Ausgewählte Schriften, Winter, Eduard (ed.), Berlin, Union Verlag.
  • Bolzano, Bernard (1851) Paradoxien des Unendlichen, (reprint) Wissenschaftliche Buchgesellschaft, 1964. [Dr. Bernard Bolzano’s Paradoxien des Unendlichen herausgegeben aus dem schriftlichen Nachlasse des Verfassers von Dr. Fr. Příhonský, Leipzig, Reclam. (Höfler and Hahn (eds.), Leipzig, Meiner, 1920)]
  • Bolzano, Bernard (1948) Geometrische Arbeiten [Geometrical Works], Spisy Bernarda Bolzana, Prague, Royal Bohemian Academy of Science.
  • Bolzano, Bernard (1837) Wissenschaftslehre, Sulzbach, Seidel.
  • Bolzano, Bernard (1931) Reine Zahlenlehre [Pure Theory of Numbers], Spisy Bernarda Bolzana, Prague, Royal Bohemian Academy of Science.
  • Bolzano, Bernard (1930) Funktionenlehre [Theory of Functions], Spisy Bernarda Bolzana, Prague, Royal Bohemian Academy of Science.
  • Bolzano, Bernard (1817a) Rein analytischer Beweis des Lehrsatzes, dass zwischen je zwey Werthen, die ein entgegengesetztes Resultat gewähren, wenigstens eine reelle Wurzel der Gleichung liege, Prague, Haase. 2nd edition, Leipzig, Engelmann, 1905; facsimile, Berlin, Mayer & Mueller, 1894.
  • Bolzano, Bernard (1817b) Die drey Probleme der Rectification, der Complanation und der Cubirung, ohne Betrachtung des unendlich Kleinen, Leipzig, Kummer.
  • Bolzano, Bernard (1816) Der binomische Lehrsatz und als Folgerung aus ihm der polynomische, und die Reihen, die zur Berechnung der Logarithmen und Exponentialgrössen dienen, Prague, Enders.
  • Bolzano, Bernard (1812) Etwas aus der Logik, Bolzano Gesamtausgabe, Stuttgart, Frommann-Holzboog, vol. 2 B 5, p. 140ff.
  • Bolzano, Bernard (1810) Beyträge zu einer begründeteren Darstellung der  Mathematik; Widtmann, Prague. (Darmstadt, Wissenschaftliche Buchgesellschaft,1974).
  • Coffa, Alberto (1991) The Semantic Tradition from Kant to Carnap, Cambridge, Cambridge University Press.
  • Dubucs, Jacques & Lapointe, Sandra (2006) “On Bolzano’s Alleged Explicativism,” Synthese 150/2, 229–46.
  • Etchemendy, John (2008) "Reflections on Consequence," in (Patterson 2008), 263-299.
  • Etchemendy, John (1990) The Concept of Logical Consequence, Cambridge, Harvard University Press.
  • Etchemendy, John (1988) “Models, Semantics, and Logical Truth”, Linguistics and Philosophy, 11, 91-106.
  • Freudenthal, H. (1971) “Did Cauchy Plagiarize Bolzano?”, Archives for the History of Exact Sciences, 375-92.
  • Grattan-Guinness, Ivor (1970) “Bolzano, Cauchy and the 'New Analysis' of the Early Nineteenth Century,” Archives for the History of Exact Sciences, 6, 372-400.
  • Künne Wolfgang (2006) “Analyticity and logical truth: from Bolzano to Quine”, in (Textor 2006), 184-249.
  • Lapointe, Sandra (2008), Qu’est-ce que l’analyse?, Paris, Vrin.
  • Lapointe, Sandra (2007) “Bolzano's Semantics and His Critique of the Decompositional Conception of Analysis” in The Analytic Turn, Michael Beaney (Ed.), London, Routledge, pp.219–234.
  • Lapointe, Sandra (2000). Analyticité, Universalité et Quantification chez Bolzano. Les Études Philosophiques, 2000/4, 455–470.
  • Morscher, Edgar (2003) “La Définition Bolzanienne de l'Analyticité Logique”, Philosophiques 30/1, 149-169.
  • Neeman, Ursula (1970), “Analytic and Synthetic Propositions in Kant and Bolzano” Ratio 12, 1-25.
  • Patterson, Douglas (ed.) (2008) New Essays on Tarski’s Philosophy, Oxford, Oxford University Press.
  • Příhonský, František (1850) Neuer Anti-Kant: oder Prüfung der Kritik der reinen Vernunft nach den in Bolzanos Wissenschaftslehre niedergelegten Begriffen, Bautzen, Hiecke.
  • Proust, Joëlle (1989) Questions of Form. Logic and the Analytic Proposition from Kant to Carnap. Minneapolis: University of Minnesota Press.
  • Proust, Joëlle (1981) “Bolzano's analytic revisited”, Monist, 214-230.
  • Rusnock, Paul (2000) Bolzano's philosophy and the emergence of modern mathematics, Amsterdam, Rodopi.
  • Russ, Steve (2004) The Mathematical Works of Bernard Bolzano, Oxford, Oxford University Press.
  • Russell, Bertrand (1903) The Principles of Mathematics, Cambridge, Cambridge University Press.
  • Sebestik, Jan (1992) Logique et mathématique chez Bernard Bolzano, Paris, Vrin.
  • Schubring, Gert (1993) “Bernard Bolzano. Not as Unknown to His Contemporaries as Is Commonly Believed?” Historia Mathematica, 20, 43-53.
  • Siebel, Mark (2003) “La notion bolzanienne de déductibilité” Philosophiques, 30/1, 171-189.
  • Siebel, Mark (2002) “Bolzano's concept of consequence” Monist, 85, 580-599.
  • Siebel, Mark (1996) Der Begriff der Ableitbarkeit bei Bolzano, Sankt Augustin, Academia Verlag.
  • Tatzel, Armin (2003) “La théorie bolzanienne du fondement et de la consequence” Philosophiques 30/1, 191-217.
  • Tatzel, Armin (2002) “Bolzano's theory of ground and consequence” Notre Dame Journal of Formal Logic 43, 1-25.
  • Textor, Mark (ed.) (2006) The Austrian Contribution to Analytic Philosophy, New York, Routledge.
  • Textor,  Mark, (2001) "Logically analytic propositions “a posteriori”?” History of Philosophy Quarterly, 18, 91-113.
  • Textor, Mark (2000) "Bolzano et Husserl sur l'analyticité," Les Études Philosophiques 2000/4 435–454.
  • Waldegg, Guillermina, (2001) “Ontological Convictions and Epistemological Obstacles in Bolzano's Elementary Geometry”, Science and Education, 10/4 409-418.

Author Information

Sandra LaPointe
Email: sandra.lapointe@mac.com
Kansas State University
U. S. A.

Whitehead, Alfred North

Alfred North Whitehead (1861–1947)

Alfred North Whitehead was a notable mathematician, logician, educator and philosopher. The staggering complexity of Whitehead’s thought, coupled with the extraordinary literary quality of his writing, has conspired to make Whitehead (in an oft-repeated saying) one of the most-quoted but least-read philosophers in the Western canon. While he is widely recognized for his collaborative work with Bertrand Russell on the Principia Mathematica, he also made highly innovative contributions to philosophy, especially in the area of process metaphysics. Whitehead was an Englishman by birth and a mathematician by formal education. He was highly regarded by his students as a teacher and noted as a conscientious and hard-working administrator. The volume of his mathematical publication was never great, and much of his work has been eclipsed by more contemporary developments in the fields in which he specialized. Yet many of his works continue to stand out as examples of expository clarity without ever sacrificing logical rigor, while his theory of “extensive abstraction” is considered to be foundational in the contemporary field of formal spatial relations known as “mereotopology.”

Whitehead’s decades-long focus on the logical and algebraic issues of space and geometry, which led to his work on extension, became an integral part of an explosion of profoundly original philosophical work that he began publishing even as his career as an academic mathematician was reaching a close. The first wave of these philosophical works included his Enquiry into the Principles of Natural Knowledge, The Concept of Nature, and The Principle of Relativity, published between 1919 and 1922. These books address the philosophies of science and nature, and include an important critique of the problem of measurement raised by Albert Einstein’s general theory of relativity. They also present an alternative theory of space and gravity. Whitehead built his system around an event-based ontology that interpreted time as essentially extensive rather than point-like.

Facing mandatory retirement in England, Whitehead accepted a position at Harvard in 1924, where he continued his philosophical output. His Science and the Modern World offers a careful critique of orthodox scientific materialism and presents his first worked-out version of the related fallacies of “misplaced concreteness” and “simple location.” The first fallacy is the error of treating an abstraction as though it were concretely real. The second is the error of assuming that anything that is real must have a simple spatial location. But the pinnacle of Whitehead’s metaphysical work came with his monumental Process and Reality in 1929 and his Adventures of Ideas in 1933. The first of these books gives a comprehensive and multi-layered categoreal system of internal and external relations that analyzes the logic of becoming and extension within the context of a solution to the problem of the one and the many, while also providing a ground for his philosophy of nature. The second is an outline of a philosophy of history and culture within the framework of his metaphysical scheme.

Table of Contents

  1. Biography
  2. Thought and Writings
    1. Major Thematic Structures
    2. Mathematical Works
    3. Writings on Education
    4. Philosophy of Nature
    5. Metaphysical Works
  3. Influence and Legacy
  4. References and Further Reading
    1. Primary Sources
    2. Secondary Sources

1. Biography

Alfred North Whitehead was born on February 15th, 1861 at Ramsgate in Kent, England, to Alfred and Maria Whitehead. Thought by his parents to be too delicate for the rough and tumble world of the English public school system, young Alfred was initially tutored at home. Ironically, when he was finally placed in public school, Whitehead became both head boy of his house and captain of his school’s rugby team. Whitehead always looked upon his days as a boy as a rather idyllic time. The education he received at home was always congenial to his natural habit of thinking, and he was able to spend long periods of time walking about in English country settings that were rich with history.

While Whitehead always enjoyed the classics, his true strength was with mathematics. Because of both the quality of the college and the unique opportunity to take the entrance examinations early, Alfred tested for Trinity College, Cambridge, in 1879, a year before he would otherwise have been allowed to enter. Whitehead’s focus was in mathematics, as were those of about half the hopefuls taking the competitive exams that year. While not in the very top tier, Whitehead’s exam scores were nevertheless good enough to gain him entrance into Trinity for the school year beginning in 1880, along with a £50 scholarship. While the money was certainly important, the scholarship itself qualified Whitehead for further rewards and considerations, and set him on the path to eventually being elected a Fellow of Trinity.

This happened in 1884, with the completion of his undergraduate work and his high standing in the finals examinations in mathematics for that year. Whitehead’s early career was focused on teaching, and it is known that he taught at Trinity during every term from 1884 to 1910. He traveled to Germany during an off-season at Cambridge (probably 1885), in part to learn more of the work of such German mathematicians as Felix Klein. Whitehead was also an ongoing member of various intellectual groups at Cambridge during this period. But he published nothing of note, and while he was universally praised as a teacher, the youthful Alfred displayed little promise as a researcher.

In 1891, when he was thirty years of age, Whitehead married Evelyn Wade. Evelyn was in every respect the perfect wife and partner for Alfred. While not conventionally intellectual, Evelyn was still an extremely bright woman, fiercely protective of Alfred and his work, and a true home-maker in the finest sense of the term. Although Evelyn herself was never fully accepted into the social structures of Cambridge society, she always ensured that Alfred lived in a comfortable, tastefully appointed home, and saw to it that he had the space and opportunity to entertain fellow scholars and other Cantabrigians in a fashion that always reflected well upon the mathematician.

It is also in this period that Whitehead began work on his first major publication, his Treatise on Universal Algebra. Perhaps with his new status as a family man, Whitehead felt the need to better establish himself as a Cambridge scholar. The book would ultimately be of minimal influence in the mathematical community. Indeed, the mathematical discipline that goes by that name shares only its name with Whitehead’s work, and is otherwise a very different area of inquiry. Still, the book established Whitehead’s reputation as a scholar of note, and was the basis for his 1903 election as a Fellow of the Royal Society.

It was after the publication of this work that Whitehead began the lengthy collaboration with his student, and ultimately Trinity Fellow, Bertrand Russell, on that monumental work that would become the Principia Mathematica. However, the final stages of this collaboration would not occur within the precincts of Cambridge. By 1910, Whitehead had been at Trinity College for thirty years, and he felt his creativity was being stifled. But it was also in this year that Whitehead’s friend and colleague Andrew Forsyth’s long-time affair with a married woman turned into a public indiscretion. It was expected that Forsyth would lose his Cambridge professorship, but the school took the extra step of withdrawing his Trinity Fellowship as well. Publicly in protest of this extravagant action, Whitehead resigned his own professorship (though not his Fellowship) as well. Privately, it was the excuse he needed to shake up his own life.

At the age of 49 and lacking even the promise of a job, Whitehead moved his family to London, where he was unemployed for the academic year of 1910 – 11. It was Evelyn who borrowed or bullied the money from their acquaintances that kept the family afloat during that time. Alfred finally secured a lectureship at University College, but the position offered no chance of growth or advancement for him. Finally in 1914, the Imperial College of Science and Technology in London appointed him as a professor of applied mathematics.

It was here that Whitehead’s initial burst of philosophical creativity occurred. His decades of research into logic and spatial reasoning expressed itself in a series of three profoundly original books on the subjects of science, nature, and Einstein’s theory of relativity. At the same time, Whitehead maintained his teaching load while also assuming an increasing number of significant administrative duties. He was universally praised for his skill in all three of these general activities. However, by 1921 Whitehead was sixty years old and facing mandatory retirement within the English academic system. He would only be permitted to work until his sixty-fifth birthday, and then only with an annual dispensation from Imperial College. So it was that in 1924, Whitehead accepted an appointment as a professor of philosophy at Harvard University.

While Whitehead’s work at Imperial College is impressive, the explosion of works that came during his Harvard years is absolutely astounding. These publications include Science and the Modern World, Process and Reality, and Adventures of Ideas.

Whitehead continued to teach at Harvard until his retirement in 1937. He had been elected to the British Academy in 1931, and awarded the Order of Merit in 1945. He died peacefully on December 30th, 1947. Per the explicit instructions in his will, Evelyn Whitehead burned all of his unpublished papers. This action has been the source of boundless regret for Whitehead scholars, but it was Whitehead’s belief that evaluations of his thought should be based exclusively on his published work.

2. Thought and Writings

a. Major Thematic Structures

The thematic and historical analyses of Whitehead’s work largely coincide. However, these two approaches naturally lend themselves to slightly different emphases, and there are important historical overlaps of the dominating themes of his thought. So it is worthwhile to view these themes ahistorically prior to showing their temporal development.

The first of these thematic structures might reasonably be called “the problem of space.” The confluence of several trends in mathematical research set this problem at the very forefront of Whitehead’s own inquiries. James Clerk Maxwell’s Treatise on Electricity and Magnetism had been published in 1873, and Maxwell himself taught at Cambridge from 1871 until his death in 1879. The topic was a major subject of interest at Cambridge, and Whitehead wrote his Trinity Fellowship dissertation on Maxwell’s theory. During the same period, William Clifford in England, and Felix Klein and Wilhelm Killing in Germany were advancing the study of spaces of constant curvature. Whitehead was well aware of their work, as well as that of Hermann Grassmann, whose ideas would later become of central importance in tensor analysis.

The second major trend of Whitehead’s thought can be usefully abbreviated as “the problem of history,” although a more accurate descriptive phrase would be “the problem of the accretion of value.” Of the two themes, this one can be the more difficult to discern within Whitehead’s corpus, partly because it is often implicit and does not lend itself to formalized analysis. In its more obvious forms, this theme first appears in Whitehead’s writings on education. However, even in his earliest works, Whitehead’s concern with the function of symbolism as an instrument in the growth of knowledge shows a concern for the accretion of value. Nevertheless, it is primarily with his later philosophical work that this topic emerges as a central element and primary focus of his thought.

b. The Early Mathematical Works

Whitehead’s first major publication was his A Treatise on Universal Algebra with Applications (“UA,” 1898). (Whenever appropriate, common abbreviations will be given, along with the year of publication, for Whitehead’s major works.) Though UA was originally intended as a two-volume work, the second volume never appeared, as Whitehead’s thinking on the subject continued to evolve and as the plans for Principia Mathematica eventually came to incorporate many of that volume’s objectives. Despite the “algebra” in the title, the work is primarily on the foundations of geometry and formal spatial relations. UA offers little in the way of original research by Whitehead. Rather, the work is primarily expository in character, drawing together a number of previously divergent and scattered themes of mathematical investigation into the nature of spatial relations and their underlying logic, and presenting them in a systematic form.

While the book helped establish Whitehead’s reputation as a scholar and was the basis of his election as a Fellow of the Royal Society, UA had little direct impact on mathematical research either then or later. Part of the problem lay in the timing of the work and in the character of Whitehead’s method. For while he was very explicit about the need for the rigorous development of symbolic logic, Whitehead’s logic was “algebraic” in character. That is to say, Whitehead’s focus was on relational systems of order and structure-preserving transformations. In contrast, the approaches of Giuseppe Peano and Gottlob Frege, with their emphasis on proof and semantic relations, soon became the focus of mathematical attention. While these techniques were soon to become of central importance for Whitehead’s own work, the centrality of algebraic methods to Whitehead’s thinking is always in evidence, especially in his philosophy of nature and metaphysics. The emphasis on structural relations in these works is a key component to understanding his arguments.

In addition, UA itself was one in a rising chorus of voices that had begun to take the work of Hermann Grassmann seriously. Grassmann algebras would come to play a vital role in tensor analysis and general relativity. Finally, the opening discussion of UA regarding the importance and uses of formal symbolism remains of philosophical interest, both in its own right and as an important element in Whitehead’s later thought.

Other early works by Whitehead include his two short books, the Axioms of Projective Geometry (1906) and the Axioms of Descriptive Geometry (1907). These works take a much more explicitly logical approach to their subject matter, as opposed to the algebraic techniques of Whitehead’s first book. However, it remains the case that these two works are not about presenting cutting edge research so much as they are about the clear and systematic development of existing materials. As suggested by their titles, the approach is axiomatic, with the axioms chosen for their illustrative and intuitive value, rather than their strictly logical parsimony. As such, these books continue to serve as clear and concise introductions to their subject matters.

Even as he was writing the two Axioms books, Whitehead was well into the collaboration with Bertrand Russell that would lead to the three volumes of the Principia Mathematica. Although most of the Principia was written by Russell, the work itself was a truly collaborative endeavor, as is demonstrated by the extant correspondence between the two. The intention of the Principia was to deduce the whole of arithmetic from absolutely fundamental logical principles. But Whitehead’s role in the project, besides working with Russell on the vast array of details in the first three volumes, was to be the principal author of a fourth volume whose focus would be the logical foundations of geometry. Thus, what Whitehead had originally intended to be the second volume of UA had transformed into the fourth volume of the Principia Mathematica, and like that earlier planned volume, the fourth part of Principia Mathematica never appeared. It would not be until Whitehead’s published work on the theory of extension, work that never appeared independently but always as a part of a larger philosophical enterprise, that his research into the foundations of geometry would finally pay off.

c. Writings on Education

By the time the Principia was published, Whitehead had left his teaching position at Trinity, and eventually secured a lectureship at London’s University College. It was in these London years that Whitehead published a number of essays and addresses on the theory of education. But it would be a mistake to suppose that his concern with education began with the more teaching-oriented (as opposed to research-oriented) positions he occupied after departing Cambridge. Whitehead had long been noted as an exceptional lecturer by his students at Cambridge. He also took on less popular teaching duties, such as teaching at Girton and Newnham, the non-degree-conferring women’s colleges associated with Cambridge.

Moreover, the concern for the conveyance of ideas is evident from the earliest of Whitehead’s writings. The very opening pages of UA are devoted to a discussion of the reasons and economies of well-chosen symbols as aids to the advancement of thought. Or again, the intention underlying the two Axioms books was not so much the advancement of research as the communication of achieved developments in mathematics. Whitehead’s book, An Introduction to Mathematics (1911), published in the midst of the effort to get the Principia out, had no research agenda per se. It was entirely devoted to introducing students to the character of mathematical thought, to the methods of abstraction, and the nature of variables and functions, and to offering some sense of the power and generality of these formalisms.

Whitehead’s essays that specifically address education often do so with the explicit desire to revise the teaching of mathematics in England. But they also argue, both explicitly and implicitly, for a balance of liberal education devoted to the opening of the mind, with technical education intended to facilitate the vocational aptitudes of the student. Education for Whitehead was never just the mere memorization of ancient stories and empty abstractions, any more than it was just the technical training of the working class. It always entailed the growth of the student as a fully functioning human being. In this respect, as well as others, Whitehead’s arguments compare favorably with those of John Dewey [[hyperlink]].

Whitehead never systematized his educational thought the way Dewey did, so these ideas must be gleaned from his various essays and looked for as an implicit foundation to such larger works as his Adventures of Ideas (see below). Many of Whitehead’s essays on education were collected together in The Aims of Education, published in 1929, as well as his Essays in Science and Philosophy, published in 1948.

d. The Philosophy of Nature

Whitehead’s interest in the problem of space was, at least from his days as a graduate student at Cambridge, more than just an interest in the purely formal or mathematical aspects of geometry. It is to be recalled that his dissertation was on Maxwell’s theory of electromagnetism, which was a major development in the ideas that led to Einstein’s theories of special and general relativity. The famous Michelson-Morley experiment to measure the so-called “ether drift” was a response to Maxwell’s theory of electromagnetism. Einstein himself offers only a generic nod toward the experiments regarding space and light in his 1905 paper on special relativity. The problem Einstein specifically cites in that paper is the lack of symmetry then to be found in theories of space and the behavior of electromagnetic phenomena. By 1910, when the first volume of the Principia Mathematica was being published, Hermann Minkowski had reorganized the mathematics of Einstein’s special relativity into a four-dimensional non-Euclidean manifold. By 1914, two years before the publication of Einstein’s paper on general relativity, theoretical developments had advanced to the extent that an expedition to the Crimea was planned to observe the predicted bending of stellar light around the sun during an eclipse. This expedition was cancelled with the outbreak of the First World War.

These developments conspired to prevent Whitehead’s planned fourth volume of the Principia from ever appearing. A few papers appeared during the war years, in which a relational theory of space begins to emerge. What is perhaps most notable about these papers is that they are no longer specifically mathematical in nature, but are explicitly philosophical. Finally, in 1919 and 1920, Whitehead’s thought appeared in print with the publication of two books, An Enquiry into the Principles of Natural Knowledge (“PNK,” 1919) and The Concept of Nature (“CN,” 1920).

While PNK is much more formally technical than CN, both books share a common and radical view of nature and science that rejects the identification of nature with the mathematical tools used to characterize its relational structures. Nature for Whitehead is that which is experienced through the senses. For this reason, Whitehead argues that there are no such things as “points” of either time or space. An infinitesimal point is a high abstraction with no experiential reality, while time and space are irreducibly extensional in character.

To account for the effectiveness of mathematical abstractions in their application to natural knowledge, Whitehead introduced his theory of “extensive abstraction.” By using the logical and topological structures of concentric part-whole relations, Whitehead argued that abstract entities such as geometric points could be derived from the concrete, extensive relations of space and time. These abstract entities, in their turn, could be shown to be significant of the nature they had been abstractively derived from. Moreover, since these abstract entities were formally easier to use, their significance of nature could be retained through their various deductive relations, thereby giving evidence for further natural significances by this detour through purely abstract relations.
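The core move of extensive abstraction, deriving a point-like ideal from nested extended regions, can be illustrated with a rough sketch. Note that the interval representation, the function name, and the dyadic widths below are assumptions of this illustration, not Whitehead's own formal construction:

```python
from fractions import Fraction

# Sketch of the method of extensive abstraction: a "point" is approached
# through an abstractive set, a sequence of nested extended regions (here,
# rational intervals) whose widths shrink toward zero. The point is the
# ideal limit of the nesting, never a member of any region.

def abstractive_set(center, n):
    """Return n nested intervals closing in on `center`."""
    return [(center - Fraction(1, 2 ** k), center + Fraction(1, 2 ** k))
            for k in range(1, n + 1)]

regions = abstractive_set(Fraction(1, 3), 8)

# Each region strictly contains the next (the nesting condition)...
assert all(a < c and d < b for (a, b), (c, d) in zip(regions, regions[1:]))

# ...and the widths shrink without bound, so no single region ever *is*
# the point; the point is what the converging family as a whole signifies.
widths = [b - a for (a, b) in regions]
```

Every element of the set remains genuinely extended, which is the philosophical payoff: the abstract point is recovered from, and significant of, concrete extensive relations.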

Whitehead also rejected “objects” as abstractions, and argued that the fundamental realities of both experience and nature are events. Events are themselves irreducibly extended entities, where the temporal / durational extension is primary. “Objects” are the idealized significances that retain a stable meaning through an event or family of events.

It is important to note here that Whitehead is arguing for a kind of empiricism. But, as Victor Lowe has noted, this empiricism is more akin to the ideas of William James than it is to the logical positivism of Whitehead’s day. In other words, Whitehead is arguing for a kind of Jamesian “radical empiricism,” in which sense-data are abstractions, and the basic deliverances of raw experience include such things as relations and complex events.

These ideas were further developed with the publication of Whitehead’s The Principles of Relativity with Applications to Natural Science (“R,” 1922). Here Whitehead proposed an alternative physical theory of space and gravity to Einstein’s general relativity. Whitehead’s theory has commonly been classified as “quasi-linear” in the physics literature, when it should properly be described as “bimetric.” Einstein’s theory collapses the physical and the spatial into a single metric, so that gravity and space are essentially identified. Whitehead pointed out that this then loses the logical relations necessary to make meaningful cosmological measurements. In order to make meaningful measurements of space, we must know the geometry of that space so that the congruence relations of our measurement instruments can be projected through that space while retaining their significance. Since Einstein’s theory loses the distinction between the physical and the geometrical, the only way we can know the geometry of the space we are trying to measure is if we first know the distributions of matter and energy throughout the cosmos that affect that geometry. But we can only know these distributions if we can first make accurate measurements of space. Thus, as Whitehead argued, we are left in the position of first having to know everything before we can know anything.

Whitehead argued that the solution to this problem was to separate the necessary relations of geometry from the contingent relations of physics, so that one’s theory of space and gravity is “bimetric,” or is built from the two metrics of geometry and physics. Unfortunately, Whitehead never used the term “bimetric,” and his theory has often been misinterpreted. Questions of the viability of Whitehead’s specific theory have needlessly distracted both philosophers and physicists from the real issue of the class of theories of space and gravity that Whitehead was arguing for. Numerous viable bimetric alternatives to Einstein’s theory of relativity are currently known in the physics literature. But because Whitehead’s theory has been misclassified and its central arguments poorly understood, the connections between Whitehead’s philosophical arguments and these physical theories have largely gone unnoticed.
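Whitehead’s own theory is worked out in R with its own field equations; purely as an illustration of what “bimetric” means (the displacement, the perturbation, and all numerical values below are invented for the example, not derived from Whitehead’s theory), one can compute a spacetime interval twice, once against a fixed flat geometry and once against a physically perturbed metric:

```python
import numpy as np

# Geometric (background) metric: flat Minkowski space, signature (-,+,+,+).
# In a bimetric theory this is known uniformly, independent of physics.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# A contingent "physical" metric: the background plus a small gravitational
# perturbation (illustrative number only).
h = np.zeros((4, 4))
h[0, 0] = -0.01
g_phys = eta + h

dx = np.array([1.0, 0.2, 0.0, 0.0])  # a small spacetime displacement

ds2_geom = dx @ eta @ dx      # interval against the uniform geometry
ds2_phys = dx @ g_phys @ dx   # interval as affected by matter and energy

# The geometric interval supplies the fixed congruence relations that make
# measurement meaningful; only the physical interval depends on the matter
# distribution, which is what breaks the circle Whitehead identified.
```

Because the background metric is uniform and knowable in advance, measurement does not presuppose a prior knowledge of the cosmic matter distribution.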

e. The Metaphysical Works

The problems Whitehead had engaged with his triad of works on the philosophy of nature and science required a complete re-evaluation of the assumptions of modern science. To this end, Whitehead published Science and the Modern World (“SMW,” 1925). This work had both a critical and a constructive aspect, although the critical themes occupied most of Whitehead’s attention. Central to those critical themes was Whitehead’s challenge to dogmatic scientific materialism developed through an analysis of the historical developments and contingencies of that belief. In addition, he continued with the themes of his earlier triad, arguing that objects in general, and matter in particular, are abstractions. What are most real are events and their mutual involvements in relational structures.

Already in PNK, Whitehead had characterized electromagnetic phenomena by saying that while such phenomena could be related to specific vector quantities at each specific point of space, they express “at all points one definite physical fact” (PNK, 29). Physical facts such as electromagnetic phenomena are single, relational wholes, but they are spread out across the cosmos. In SMW Whitehead called the failure to appreciate this holism and the relational connectedness of reality, “the fallacy of simple location.” According to Whitehead, much of contemporary science, driven as it was by the dogma of materialism, was committed to the fallacy that only such things as could be localized at a mathematically simple “point” of space and time were genuinely real. Relations and connections were, in this dogmatic view, secondary to and parasitic upon such simply located entities. Whitehead saw this as reversing the facts of nature and experience, and devoted considerable space in SMW to criticizing it.

A second and related fallacy of contemporary science was what Whitehead identified in SMW as, “the fallacy of misplaced concreteness.” While misplaced concreteness could include treating entities with a simple location as more real than those of a field of relations, it also went beyond this. Misplaced concreteness included treating “points” of space or time as more real than the extensional relations that are the genuine deliverances of experience. Thus, this fallacy resulted in treating abstractions as though they were concretely real. In Whitehead’s view, all of contemporary physics was infected by this fallacy, and the resultant philosophy of nature had reversed the roles of the concrete and the abstract.

The critical aspects of SMW were ideas that Whitehead had already expressed (in different forms) in his previous publications, only now with more refined clarity and persuasiveness. On the other hand, the constructive arguments in SMW are astonishing in their scope and subtlety, and are the first presentation of his mature metaphysical thinking. For example, the word “prehension,” which Whitehead defines as “uncognitive apprehension” (SMW 69), makes its first systematic appearance in Whitehead’s writings as he refines and develops the kinds and layers of relational connections between people and the surrounding world. As the “uncognitive” in the above is intended to show, these relations are not always or exclusively knowledge based, yet they are a form of “grasping” of aspects of the world. Our connection to the world begins with a “pre-epistemic” prehension of it, from which the process of abstraction is able to distill valid knowledge of the world. But that knowledge is abstract and only significant of the world; it does not stand in any simple one-to-one relation with the world. In particular, this pre-epistemic grasp of the world is the source of our quasi-a priori knowledge of space, which enables us to know of those uniformities that make cosmological measurements, and the general conduct of science, possible.

SMW goes far beyond the purely epistemic program of Whitehead’s philosophy of nature. The final three chapters, entitled “God,” “Religion and Science,” and “Requisites for Social Progress,” clearly announce the explicit emergence of the second major thematic strand of Whitehead’s thought, the “problem of history” or “the accretion of value.” Moreover, these topics are engaged with the same thoroughly relational approach that Whitehead previously used with nature and science.

Despite the foreshadowing of these last chapters of SMW, Whitehead’s next book may well have come as a surprise to his academic colleagues. Whitehead’s brief Religion in the Making (“RM,” 1926) tackles no part of his earlier thematic problem of space, but instead focuses entirely on the second thematic of history and value. Whitehead defines religion as “what the individual does with his own solitariness” (RM 16). Yet it is still Whitehead the algebraist who is constructing this definition. Solitariness is understood as a multi-layered relational modality of the individual in and toward the world. In addition, this relational mode cannot be understood in separation from its history. On this point, Whitehead contrasts religion with arithmetic: an understanding of the latter makes no essential reference to its history, whereas for religion such a reference is vital. Moreover, as Whitehead states, “You use arithmetic, but you are religious” (RM 15).

Whitehead also argues that, “The purpose of God is the attainment of value in the temporal world,” and “Value is inherent in actuality itself” (RM 100). Whitehead’s use of the word “God” in the foregoing invites a wide range of habitual assumptions about his meaning, most, if not all, of which will probably be mistaken. The key element for Whitehead is value. God, like arithmetic, is discussed in terms of something which has a purpose. On the other hand, value is like being religious in that it is inherent. It is something that is rather than something that is used.

Shortly after this work, there appeared another book whose brevity belies its importance, Symbolism: Its Meaning and Effect (“S,” 1927). Whitehead’s explicit interest in symbols was present in his earliest publication. But in conjunction with his theory of prehension, the theory of symbols came to take on an even greater importance for him. Our “uncognitive” sense-perceptions are directly caught up in our symbolic awareness, as is shown by the immediacy with which we move beyond what is directly given to our senses. Whitehead uses the example of a puppy dog that sees a chair as a chair rather than as a patch of color, even though the latter is all that impinges on the dog’s retina. (Whitehead may not have known that dogs have only limited color vision, but this does not significantly affect his example.) Thus, this work further develops Whitehead’s theories of perception and awareness, and does so in a manner that is relatively non-technical. Because of the centrality of the theory of symbols and perception to Whitehead’s later philosophy, this clarity of exposition makes this book a vital stepping stone to what followed.

What followed was Process and Reality (“PR,” 1929). This book is easily one of the most dense and difficult works in the entire Western canon. The book is rife with technical terms of Whitehead’s own invention, necessitated by his struggle to push beyond the inherited limits of the available concepts toward a comprehensive vision of the logical structures of becoming. It is here that we see the problem of space receive its ultimate payoff in Whitehead’s thought. But this payoff comes in the form of a fully relational metaphysical scheme that draws upon his theory of symbols and perception in the most essential manner possible. At the same time, PR plants the seeds for the further engagement of the problem of the accretion of value that is to come in his later work. Because each process of becoming must be considered holistically as an essentially organic unity, Whitehead often refers to his theory as the “philosophy of organism.”

PR invites controversy while defying brief exposition. Many of the relational ideas Whitehead develops are holistic in character, and thus do not lend themselves to the linear presentation of language. Moreover, the language Whitehead needs to build his holistic image of the world is often biological or mentalistic in character, which can be jarring when the topic being discussed is something like an electron. In addition, Whitehead the algebraist was an intrinsically relational thinker, and explicitly characterized the subject / predicate mode of language as a “high abstraction.” Nevertheless, there are some basic ideas which can be quickly set out.

The first of these is that PR is not about time per se. This has been a subject of much confusion. But Whitehead himself points out that physical time as such only comes about with “reflection” of the “divisibility” of his two major relational types into one another (PR 288 – 9). Moreover, throughout PR, Whitehead continues to endorse the theory of nature found in his earlier triad of books on the subject. So the first step in gaining a handle on PR is to recognize that it is better thought of as addressing the logic of becoming, whereas his books from 1919 – 1922 address the “nature” of time.

The basic units of becoming for Whitehead are “actual occasions.” Actual occasions are “drops of experience,” and relate to the world into which they are emerging by “feeling” that relatedness and translating it into the occasion’s concrete reality. When first encountered, this mode of expression is likely to seem peculiar if not downright outrageous. One thing to note here is that Whitehead is not talking about any sort of high-level cognition. When he speaks of “feeling” he means an immediacy of concrete relatedness that is vastly different from any sort of “knowing,” yet which exists on a relational spectrum where cognitive modes can emerge from sufficiently complex collections of occasions that interrelate within a systematic whole. Also, feeling is a far more basic form of relatedness than can be represented by formal algebraic or geometrical schemata. These latter are intrinsically abstract, and to take them as basic would be to commit the fallacy of misplaced concreteness. But feeling is not abstract. Rather, it is the first and most concrete manifestation of an occasion’s relational engagement with reality.

This focus on concrete modes of relatedness is essential because an actual occasion is itself a coming into being of the concrete. The nature of this “concrescence,” using Whitehead’s term, is a matter of the occasion’s creatively internalizing its relatedness to the rest of the world by feeling that world, and in turn uniquely expressing its concreteness through its extensive connectedness with that world. Thus an electron in a field of forces “feels” the electrical charges acting upon it, and translates this “experience” into its own electronic modes of concreteness. Only later do we schematize these relations with the abstract algebraic and geometrical forms of physical science. For the electron, the interaction is irreducibly concrete.

Actual occasions are fundamentally atomic in character, which leads to the next interpretive difficulty. In his previous works, events were essentially extended and continuous. And when Whitehead speaks of an “event” in PR without any other qualifying adjectives, he still means the extensive variety found in his earlier works (PR 73). But PR deals with a different set of problems from that previous triad, and it cannot take such continuity for granted. For one thing, Whitehead treats Zeno’s paradoxes very seriously and argues that one cannot resolve these paradoxes if one starts from the assumption of continuity, because it is then impossible to make sense of anything coming immediately before or immediately after anything else. Between any two points of a continuum such as the real number line there are an infinite number of other points, thus rendering the concept of the “next” point meaningless. But it is precisely this concept of the “next occasion” that Whitehead requires to render intelligible the relational structures of his metaphysics. If there are infinitely many occasions between any two occasions, even ones that are nominally “close” together, then it becomes impossible to say how it is that later occasions feel their predecessors: an unbounded infinity of other occasions intervenes in any such influence, modifying it in ways that cannot be determined. Therefore, Whitehead argued, continuity is not something which is “given”; rather it is something which is achieved. Each occasion makes itself continuous with its past in the manner in which it feels that past and creatively incorporates the past into its own concrescence, its coming into being.
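The density argument behind this can be made concrete: in a continuum there is always a third element strictly between any two, so “the next occasion” is undefinable. A minimal sketch (the helper name and the particular values are mine, chosen only for illustration):

```python
from fractions import Fraction

def between(a, b):
    """Return a point strictly between a and b."""
    return (a + b) / 2

# Two occasions nominally "close" together on a continuum:
a, b = Fraction(0), Fraction(1, 10 ** 9)

# Every candidate for the occasion "next" after `a` is undercut by
# another occasion lying strictly between them.
candidates = []
for _ in range(5):
    b = between(a, b)
    candidates.append(b)

# The candidates march toward `a` without ever reaching a nearest one, so
# continuity alone cannot say which occasion immediately precedes which.
```

Exact rational arithmetic makes the point sharply: however small the gap, `between` always produces a still-closer intermediary, which is precisely why Whitehead takes atomic becoming, not continuity, as primitive.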

Thus, Whitehead argues against the “continuity of becoming” and in favor of the “becoming of continuity” (PR 68 – 9). Occasions become atomically, but once they have become they incorporate themselves into the continuity of the universe by feeling the concreteness of what has come before and making that concreteness a part of the occasion’s own internal makeup. The continuity of space and durations in Whitehead’s earlier triad does not conflict with his metaphysical atomism, because those earlier works were dealing with physical nature in which continuity has already come into being, while PR is dealing with relational structures that are logically and metaphysically prior to nature.

Most authors believe that the sense of “atomic” being used here is similar to, if not synonymous with, “microscopic.” However, there are reasons why one might want to resist such an interpretation. To begin with, it teeters on the edge of the fallacy of simple location to assume that by “atomic” Whitehead means “very small.” An electron, which Whitehead often refers to as an “electronic occasion,” may have a tiny region of most highly focused effects. But the electromagnetic field that spreads out from that electron reaches far beyond that narrow focus. The electron “feels” and is “felt” throughout this field of influence which is not spatially limited. Moreover, Whitehead clearly states that space and time are derivative notions from extension whereas, “To be an actual occasion in the physical world means that the entity in question is a relatum in this scheme of extensive connection” (PR 288 – 9). The quality of being microscopic is something that only emerges after one has a fully developed notion of space, while actual occasions are logically prior to space and a part of the extensive relations from which space itself is derived. Thus it is at least arguably the case that the sense of “atomic” that Whitehead is employing hearkens back more to the original Greek meaning of “irreducible” than to the microscopic sense that pervades physical science. In other words, the “atomic” nature of what is actual is directly connected to its relational holism.

The structure of PR is also worth attention, for each of the five major parts offers a significant perspective on the whole. Part I gives Whitehead’s defense of speculative philosophy and sets out the “categoreal scheme” underlying PR. Part II applies these categories to a variety of historical and thematic topics. Part III gives the theory of prehensions as these manifest themselves with and through the categories, and is often called the “genetic account.” The theory of extension, or the “coordinate account,” constitutes Part IV and represents the ultimate development of Whitehead’s rigorous thought on the nature of space. Part V, the final part, presents both a theory of the dialectic of opposites and the minimalist role of God in Whitehead’s system as the foundation of coherence in the world’s processes of becoming.

Two of the features of part I that stand out are Whitehead’s defense of speculative philosophy, and his proposed resolution of the traditional problem of the One and the Many. “Speculative philosophy” for Whitehead is a phrase he uses interchangeably with “metaphysics.” However, what Whitehead means is a speculative program in the most scientifically honorific sense of the term. Rejecting any form of dogmatism, Whitehead states that his purpose is to “frame a coherent, logical, necessary system of general ideas in terms of which every element of our experience can be interpreted” (PR 3). The second feature, the solution to the problem of the “one and the many,” is often summarized as, “The many become one, and increase by one.” This means that the many occasions of the universe that have already become contribute their atomic reality to the becoming of a new occasion (“the many become one”). However, this occasion, upon fully realizing its own atomic character, now contributes that reality to the previously achieved realities of the other occasions (“and increase by one”).

The atomic becoming of an actual occasion is achieved by that occasion’s “prehensive” relations and its “extensive” relations. An actual occasion’s holistically felt and non-sequentially internalized concrete evaluations of its relationships to the rest of the world are the subject matter of the theory of “prehension,” part III of PR. This is easily one of the most difficult and complex portions of that work. The development that Whitehead is describing is so holistic and anti-sequential that it might appropriately be compared to James Joyce’s Finnegans Wake. An actual occasion “prehends” its world (relationally takes that world in) by feeling the “objective data” of past occasions which the new occasion utilizes in its own concrescence. This data is prehended in an atemporal and nonlinear manner, and is creatively combined into the occasion’s own manifest self-realization. This is to say that the becoming of the occasion is also informed by a densely teleological sense of the occasion’s own ultimate actuality, its “subjective aim” or what Whitehead calls the occasion’s “superject.” Once it has become fully actualized, the occasion as superject becomes an objective datum for those occasions which follow it, and the process begins again.

This same process of concrescence is described in its extensive characters in part IV, where the mereological (formal relations of part and whole) as well as topological (non-metrical relations of neighborhood and connection) characteristics of extension are developed. Unlike the subtle discussion of prehensions, Whitehead’s theory of extension reads very much like a textbook on the logic of spatial relations. Indeed, a great deal of contemporary work in artificial intelligence and spatial reasoning identifies this section of PR as foundational to the field, which often goes by the intimidating title of “mereotopology.”
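The basic mereotopological notions mentioned here (parthood, overlap, connection) can be illustrated with a short Python sketch over a toy domain of regions encoded as sets of points. This encoding is purely hypothetical and for illustration only; it is not Whitehead's own formalism, which is point-free.

```python
# A hypothetical toy model: regions are sets of points (NOT Whitehead's
# own point-free formalism; this is an illustrative encoding only).

def part_of(x, y):
    """x is part of y: every point of x is a point of y."""
    return x <= y

def overlaps(x, y):
    """x and y overlap: they share a common part."""
    return bool(x & y)

def connected(x, y, adjacency):
    """x and y are connected: they overlap, or some point of x is
    adjacent to some point of y (adjacency supplied externally)."""
    return overlaps(x, y) or any((p, q) in adjacency or (q, p) in adjacency
                                 for p in x for q in y)
```

Note that in such systems connection is weaker than overlap: two regions can be connected (touching) without sharing a part, which is one reason mereotopology needs topological as well as mereological primitives.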

The holistic character of prehension and the analytical nature of extension invite the reader to interpret the former as a theory of “internal relations” and the latter as a theory of “external relations.” Put simply, external relations treat the self-identity of a thing as the first, analytically given fact, while internal relations treat it as the final, synthetically developed result. But Whitehead explicitly associates internal relations with extension, and externality with prehension. This seeming paradox can be resolved by noting that, even though prehension is the process of the actual occasion’s “internalizing” the rest of reality as it composes its own self-identity, the achieved result (the superject) is the atomic realization of that occasion in its ultimate externality to the rest of the world. On the other hand, the mereological relations of part and whole from which extension is built are themselves so intrinsically correlative to one another that each only meaningfully expresses its own relational structures to the extent that it completely internalizes the other.

Whitehead was never one to revisit a problem once he felt he had addressed it adequately. With the publication of PR and the final version of his theory of extension, Whitehead never returned to the ‘problem of space’ except on those limited occasions when his later work required that he mention those earlier developments. Those later works were effectively focused upon the ‘problem of history’ to the exclusion of all else. The primary book on this topic is Adventures of Ideas (“AI,” 1933).

AI is a pithy and engaging book whose opening pages entice the reader with clear and evidently non-technical language. But it is a book that needs to be approached with care. Whitehead assumes, without explanation, knowledge on the part of his readers of the metaphysical scheme of PR, and resorts to the terminology of that book whenever the argument requires it. Indeed, AI is the application of Whitehead’s process metaphysics to the “problem of history.” Whitehead surveys numerous cultural forms from a thoroughly relational perspective, analyzing the ways in which these connections contribute both to the rigidities of culture and the possibilities for novelty in various “adventures” in the accumulation of meanings and values. Many of the forces in this adventure of meaning are blind and senseless, thus presenting the challenge of becoming more deliberate in our processes of building and changing them.

In line with this, two other works bear mentioning: The Function of Reason (“FR,” 1929) and Modes of Thought (“MT,” 1938). FR presents an updated version of Aristotle’s three classes of soul (the vegetative, the animate, and the rational); only in Whitehead’s case, the classifications are, as the title states, functional rather than facultative. Thus, for Whitehead, the function of reason is to “promote the art of life,” which is a three-fold function of “(i) to live, (ii) to live well, (iii) to live better” (FR 4, 8). Thus, reason for Whitehead is intrinsically organic in both origin and purpose. But the achievement of a truly reasonable life is a matter that involves more than just the logical organization of propositional knowledge. It is a matter of full and sensitive engagement with the entire lived world. This is the topic of MT, Whitehead’s final major publication. In arguing for a multiplicity of modes of thought, Whitehead offered his final great rebellion against the excessive focus on language that dominated the philosophical thought of his day. In this work, Whitehead also offered his final insight as to the purpose and function of philosophy itself. “The use of philosophy,” Whitehead concluded, “is to maintain an active novelty of fundamental ideas illuminating the social system. It reverses the slow descent of accepted thought towards the inactive commonplace.” In this respect, “philosophy is akin to poetry” (MT 174).

3. Influence and Legacy

Evaluating Whitehead’s influence is a difficult matter. While Whitehead’s influence has never been great, in the opening years of the 21st century it appears to be growing in a broad range of otherwise divergent disciplines. Fulfilling his own vision of the use of philosophy, Whitehead’s ideas are a rich trove of alternative approaches to traditional problems. His thoroughgoing relational and process orientation offers numerous opportunities to reimagine the ways in which the world is connected and how those connections manifest themselves.

The most prominent area of ongoing Whiteheadian influence is within process theology. While Whitehead’s explicit philosophical treatments of God seldom went beyond that of an ideal principle of maximal coherence, many others have developed these ideas further. Writers such as Charles Hartshorne and John Cobb have speculated on, and argued for, a much more robust, ontological conception of God. Nothing in Whitehead’s own writings requires such developments, but neither are they in any way precluded. The God of process theology tends to be far more personal and much more of a co-participant in the creative process of the universe than that which one often finds in orthodox religions.

Within philosophy itself, Whitehead’s influence has been smaller and much more diffuse. Yet those influences are likely to crop up in what seem, on the surface at least, to be improbable places. The literature here is too vast to enumerate, but it includes research from all of the major philosophical schools, including pragmatist, analytic, and continental thought. The topics engaged include ontology, phenomenology, personalism, philosophical anthropology, ethics, political theory, economics, and more.

There are also a variety of ways in which Whitehead’s work continues to influence scientific research. This influence is, again, typically found only in the work of widely scattered individuals. However, one area where this is not the case is Whitehead’s theory of extension. Whitehead’s work on the logical basis of geometry is widely cited as foundational in the study of mereotopology, which in turn is of fundamental importance in the study of spatial reasoning, especially in the context of artificial intelligence.

There is also a growing interest in Whitehead’s work within physics, where it is proving to be a valuable source of ideas to help re-conceive the nature of physical relations. This is particularly true of such bizarre phenomena as quantum entanglement, which seems to violate orthodox notions of mechanistic interaction. There is a renewed interest in Whitehead’s arguments regarding relativity, particularly because of their potential tie-in with other bimetric theories of space and gravity. Other areas of interest include biology, where Whitehead’s holistic relationalism again offers alternative models of explanation.

4. References and Further Reading

Those of Whitehead’s primary texts which have been mentioned in the article are listed below in chronological order. More technical works have been “starred” with an asterisk. Original publication dates are given, as well as more recent printings. Of these more recent printings, those done by Dover Publications have been favored because they retain the pagination of the original imprints. On the other hand, the volume of the secondary literature on Whitehead is truly astounding, and a comprehensive list would go far beyond the limits of this article. So while the secondary works listed below can hardly be viewed as definitive, they do offer a useful starting place. The secondary sources are divided into two groups, those that are relatively more accessible and those that are relatively more technical.

a. Primary Sources

  • *A Treatise on Universal Algebra (Cambridge: Cambridge University Press, 1898.)
  • *The Axioms of Projective Geometry (Cambridge: Cambridge University Press, 1906.)
  • *The Axioms of Descriptive Geometry (Cambridge: Cambridge University Press, 1907. Mineola: Dover Phoenix Editions, 2005.)
    • The two Axioms books are models of expository clarity, yet they are still books on formal mathematics. Hence, they have been reluctantly “starred.”
  • *Principia Mathematica, volumes I – III, with Bertrand Russell (Cambridge: Cambridge University Press, 1910 – 1913.)
  • An Introduction to Mathematics (London: Home University Library of Modern Knowledge, 1911. Oxford: Oxford University Press, 1958.)
  • *An Enquiry into the Principles of Natural Knowledge (Cambridge: Cambridge University Press, 1919.)
  • The Concept of Nature (Cambridge: Cambridge University Press, 1920. Mineola: Dover, May 2004.)
  • *The Principle of Relativity with Applications to Physical Science (Cambridge: Cambridge University Press, 1922. Mineola: Dover Phoenix Editions, 2004.)
  • Science and the Modern World (New York: The Macmillan Company, 1925. New York: The Free Press, 1967.)
  • Religion in the Making (New York: The Macmillan Company, 1926. New York: Fordham University Press, 1996.)
    • This later edition is particularly useful because of the detailed glossary of terms at the end of the text.
  • Symbolism, Its Meaning and Effect (New York: The Macmillan Company, 1927. New York: Fordham University Press, 1985.)
  • The Aims of Education (New York: The Macmillan Company, 1929. New York: The Free Press, 1967.)
  • **Process and Reality (New York: The Macmillan Company 1929. New York: The Free Press, 1978.)
    • Easily one of the most difficult books in the entire Western philosophical canon, this volume earns two asterisks.
  • The Function of Reason (Princeton: Princeton University Press, 1929. Boston: Beacon Press, 1962.)
  • *Adventures of Ideas (New York: The Macmillan Company, 1933. New York: The Free Press, 1985.)
  • Modes of Thought (New York: The Macmillan Company, 1938. New York: The Free Press, 1968.)
  • Essays in Science and Philosophy (New York: Philosophical Library Inc., 1948.)

b. Secondary Sources

(Relatively more accessible secondary texts:)

  • Eastman, Timothy E. and Keeton, Hank (editors): Physics and Whitehead: Quantum, Process, and Experience (Albany: State University of New York Press, January 2004.)
    • This is an important recent survey of some of the ways in which Whitehead’s thought is being employed in contemporary physics.
  • Kraus, Elizabeth M.: The Metaphysics of Experience (New York: Fordham University Press, April 1979.)
    • This book is a particularly useful companion to PR because of the care with which Kraus has flow-charted the relational structures of Whitehead’s argument.
  • Lowe, Victor: Alfred North Whitehead: The Man and his Work, volumes I and II (Baltimore: The Johns Hopkins Press, 1985 & 1990.)
    • These volumes are the definitive biography of Whitehead.
  • Mesle, C. Robert & Cobb, John B.: Process Theology: A Basic Introduction (Atlanta: Chalice Press, September 1994.)
    • This is a solid and very readable survey of contemporary process theology.
  • Schilpp, Paul Arthur, editor: The Philosophy of Alfred North Whitehead, “The Library of Living Philosophers,” (LaSalle: Open Court Publishing Company, 1951.)
    • This book is a collection of essays on Whitehead’s work by his contemporaries.

(Relatively more technical secondary texts:)

  • Casati, Roberto and Varzi, Achille C.: Parts and Places: The Structures of Spatial Representation (Cambridge, MA: The MIT Press, 1999.)
    • This text is a college level introduction to mereotopology, and includes an extensive bibliography on the subject and its history.
  • Ford, Lewis: Emergence of Whitehead's Metaphysics, 1925-1929 (Albany: SUNY Press, 1985.)
    • This book is an examination of the historical development of Whitehead’s metaphysical ideas.
  • Hall, David L.: The Civilization of Experience, A Whiteheadian Theory of Culture (New York: Fordham University Press, 1973.)
    • Hall’s work attempts, among other things, to derive an ethical theory from Whitehead’s metaphysics.
  • Jones, Judith A. Intensity: An Essay in Whiteheadian Ontology (Nashville: Vanderbilt University Press, 1998.)
    • This work is widely considered to be one of the most important pieces of secondary literature on Whitehead.
  • Nobo, Jorge Luis.: Whitehead’s Metaphysics of Extension and Solidarity (Albany: SUNY Press, 1986.)
  • Palter, Robert: Whitehead's Philosophy of Science (Chicago: University of Chicago Press, June 1960.)
    • This work is widely viewed as the definitive text on Whitehead’s theory of science and nature.

Author Information

Gary L. Herstein
Email: gherstein@netzero.net
Southern Illinois University at Carbondale

Logical Consequence, Model-Theoretic Conceptions

Model-Theoretic Conceptions of Logical Consequence

One sentence X is said to be a logical consequence of a set K of sentences if and only if, in virtue of logic alone, it is impossible for all the sentences in K to be true without X being true as well. One well-known specification of this informal characterization is the model-theoretic conception of logical consequence: a sentence X is a logical consequence of a set K of sentences if and only if all models of K are models of X. The model-theoretic characterization is a theoretical definition of logical consequence. It has been argued that this conception of logical consequence is more basic than the characterization in terms of deducibility in a deductive system. The correctness of the model-theoretic characterization of logical consequence, and the adequacy of the notion of a logical constant it utilizes, are matters of contemporary debate.

Table of Contents

  1. Introduction
  2. Linguistic Preliminaries: the Language M
    1. Syntax of M
    2. Semantics for M
  3. What is a Logic?
  4. Model-Theoretic Consequence
    1. Truth in a structure
    2. Satisfaction revisited
    3. A formalized definition of truth for Language M
    4. Model-theoretic consequence defined
  5. The Status of the Model-Theoretic Characterization of Logical Consequence
    1. The model-theoretic characterization is a theoretical definition of logical consequence
    2. The common concept of logical consequence
    3. What is a logical constant?
  6. Conclusion
  7. References and Further Reading

1. Introduction

One sentence X is said to be a logical consequence of a set K of sentences, if and only if, in virtue of logic alone, it is impossible for all the sentences in K to be true without X being true as well. One well-known specification of this informal characterization, due to Tarski (1936), is: X is a logical consequence of K if and only if there is no possible interpretation of the non-logical terminology of the language L according to which all the sentences in K are true and X is false. A possible interpretation of the non-logical terminology of L according to which sentences are true or false is a reading of the non-logical terms according to which the sentences receive a truth-value (that is, are either true or false) in a situation that is not ruled out by the semantic properties of the logical constants. The philosophical locus of the technical development of 'possible interpretation' in terms of models is Tarski (1936). A model for a language L is the theoretical development of a possible interpretation of non-logical terminology of L according to which the sentences of L receive a truth-value. The characterization of logical consequence in terms of models is called the Tarskian or model-theoretic characterization of logical consequence. It may be stated as follows.

X is a logical consequence of K if and only if all models of K are models of X.

See the entry, Logical Consequence, Philosophical Considerations, for discussion of Tarski's development of the model-theoretic characterization of logical consequence in light of the ordinary conception.

We begin by giving an interpreted language M. Next, logical consequence is defined model-theoretically. Finally, the status of this characterization is discussed, and criticisms of it are entertained.

2. Linguistic Preliminaries: the Language M

Here we define a simple language M, a language about the McKeon family, by first sketching what strings qualify as well-formed formulas (wffs) in M. Next we define sentences from formulas, and then give an account of truth in M, that is we describe the conditions in which M-sentences are true.

a. Syntax of M

Building blocks of formulas

Terms

Individual names—'beth', 'kelly', 'matt', 'paige', 'shannon', 'evan', and 'w1', 'w2', 'w3', etc.

Variables—'x', 'y', 'z', 'x1', 'y1', 'z1', 'x2', 'y2', 'z2', etc.

Predicates

1-place predicates—'Female', 'Male'

2-place predicates—'Parent', 'Brother', 'Sister', 'Married', 'OlderThan', 'Admires', '='.

Blueprints of well-formed formulas (wffs)

Atomic formulas: An atomic wff is any of the above n-place predicates followed by n terms which are enclosed in parentheses and separated by commas.

Formulas: The general notion of a well-formed formula (wff) is defined recursively as follows:

(1) All atomic wffs are wffs.
(2) If α is a wff, so is '~α'.
(3) If α and β are wffs, so is '(α & β)'.
(4) If α and β are wffs, so is '(α v β)'.
(5) If α and β are wffs, so is '(α → β)'.
(6) If Ψ is a wff and v is a variable, then '∃vΨ' is a wff.
(7) If Ψ is a wff and v is a variable, then '∀vΨ' is a wff.
Finally, no string of symbols is a well-formed formula of M unless the string can be derived from (1)-(7).

The signs '~', '&', 'v', and '→' are called sentential connectives. The signs '∀' and '∃' are called quantifiers.

It will prove convenient to have available in M an infinite number of individual names as well as variables. The strings 'Parent(beth, paige)' and 'Male(x)' are examples of atomic wffs. We allow the identity symbol in an atomic formula to occur in between two terms, e.g., instead of '=(evan, evan)' we allow '(evan = evan)'. The symbols '~', '&', 'v', and '→' correspond to the English words 'not', 'and', 'or' and 'if...then', respectively. '∃' is our symbol for an existential quantifier and '∀' represents the universal quantifier. '∃vΨ' and '∀vΨ' correspond to for some v, Ψ, and for all v, Ψ, respectively. For every quantifier, its scope is the smallest part of the wff in which it is contained that is itself a wff. An occurrence of a variable v is a bound occurrence iff it is in the scope of some quantifier of the form '∃v' or the form '∀v', and is free otherwise. For example, the occurrence of 'x' is free in 'Male(x)' and in '∃y Married(y, x)'. The occurrences of 'y' in the second formula are bound because they are in the scope of the existential quantifier. A wff with at least one free variable is an open wff, and a closed formula is one with no free variables. A sentence is a closed wff. For example, 'Female(kelly)' and '∃y∃x Married(y, x)' are sentences but 'OlderThan(kelly, y)' and '(∃x Male(x) & Female(z))' are not. So, not all of the wffs of M are sentences. As noted below, this will affect our definition of truth for M.
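The recursive definition of wffs, and the distinction between free and bound variable occurrences, can be sketched in Python. The tuple encoding below is a hypothetical device for illustration (it is not part of M itself), but the clauses of `free_vars` mirror formation rules (1)-(7):

```python
# Hypothetical encoding of M-formulas as nested tuples:
#   ('atom', pred, term, ...), ('not', a), ('and', a, b), ('or', a, b),
#   ('if', a, b), ('exists', v, a), ('all', v, a)

VARIABLES = {'x', 'y', 'z', 'x1', 'y1', 'z1'}  # a finite sample of M's variables

def free_vars(wff):
    """Return the set of variables occurring free in a wff."""
    op = wff[0]
    if op == 'atom':
        return {t for t in wff[2:] if t in VARIABLES}
    if op == 'not':
        return free_vars(wff[1])
    if op in ('and', 'or', 'if'):
        return free_vars(wff[1]) | free_vars(wff[2])
    if op in ('exists', 'all'):
        # a quantifier binds its variable throughout its scope
        return free_vars(wff[2]) - {wff[1]}
    raise ValueError(f"unknown operator: {op}")

def is_sentence(wff):
    """A sentence is a closed wff: one with no free variables."""
    return not free_vars(wff)
```

For instance, '∃y Married(y, x)' becomes `('exists', 'y', ('atom', 'Married', 'y', 'x'))`, for which `free_vars` returns `{'x'}`: 'y' is bound by the quantifier while 'x' remains free, so the formula is not a sentence.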

b. Semantics for M

We now provide a semantics for M. This is done in two steps. First, we specify a domain of discourse, that is, the chunk of the world that our language M is about, and interpret M's predicates and names in terms of the elements composing the domain. Then we state the conditions under which each type of M-sentence is true. To each of the above syntactic rules (1-7) there corresponds a semantic rule that stipulates the conditions in which the sentence constructed using the syntactic rule is true. The principle of bivalence is assumed and so 'not true' and 'false' are used interchangeably. In effect, the interpretation of M determines a truth-value (true, false) for each and every sentence of M.

Domain D—The McKeons: Matt, Beth, Shannon, Kelly, Paige, and Evan.

Here are the referents and extensions of the names and predicates of M.

Terms: 'matt' refers to Matt, 'beth' refers to Beth, 'shannon' refers to Shannon, etc.

Predicates. The meaning of a predicate is identified with its extension, that is the set (possibly empty) of elements from the domain D the predicate is true of. The extension of a one-place predicate is a set of elements from D, the extension of a two-place predicate is a set of ordered pairs of elements from D.

The extension of 'Male' is {Matt, Evan}.

The extension of 'Female' is {Beth, Shannon, Kelly, Paige}.

The extension of 'Parent' is {<Matt, Shannon>, <Matt, Kelly>, <Matt, Paige>, <Matt, Evan>, <Beth, Shannon>, <Beth, Kelly>, <Beth, Paige>, <Beth, Evan>}.

The extension of 'Married' is {<Matt, Beth>, <Beth, Matt>}.

The extension of 'Sister' is {<Shannon, Kelly>, <Kelly, Shannon>, <Shannon, Paige>, <Paige, Shannon>, <Kelly, Paige>, <Paige, Kelly>, <Kelly, Evan>, <Paige, Evan>, <Shannon, Evan>}.

The extension of 'Brother' is {<Evan, Shannon>, <Evan, Kelly>, <Evan, Paige>}.

The extension of 'OlderThan' is {<Beth, Matt>, <Beth, Shannon>, <Beth, Kelly>, <Beth, Paige>, <Beth, Evan>, <Matt, Shannon>, <Matt, Kelly>, <Matt, Paige>, <Matt, Evan>, <Shannon, Kelly>, <Shannon, Paige>, <Shannon, Evan>, <Kelly, Paige>, <Kelly, Evan>, <Paige, Evan>}.

The extension of 'Admires' is {<Matt, Beth>, <Shannon, Matt>, <Shannon, Beth>, <Kelly, Beth>, <Kelly, Matt>, <Kelly, Shannon>, <Paige, Beth>, <Paige, Matt>, <Paige, Shannon>, <Paige, Kelly>, <Evan, Beth>, <Evan, Matt>, <Evan, Shannon>, <Evan, Kelly>, <Evan, Paige>}.

The extension of '=' is {<Matt, Matt>, <Beth, Beth>, <Shannon, Shannon>, <Kelly, Kelly>, <Paige, Paige>, <Evan, Evan>}.

(I) An atomic sentence with a one-place predicate is true iff the referent of the term is a member of the extension of the predicate, and an atomic sentence with a two-place predicate is true iff the ordered pair formed from the referents of the terms in order is a member of the extension of the predicate.

The atomic sentence 'Female(kelly)' is true because, as indicated above, the referent of 'kelly' is in the extension of the property designated by 'Female'. The atomic sentence 'Married(shannon, kelly)' is false because the ordered pair <Shannon, Kelly> is not in the extension of the relation designated by 'Married'.
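Rule (I) can be sketched directly in code: treating the referents and extensions above as data, an atomic sentence is true just in case the tuple of referents, taken in order, lies in the predicate's extension. The fragment below encodes only part of the model, for illustration; one-place extensions are stored as 1-tuples for uniformity:

```python
# A partial encoding of the model of M given above (illustrative only).
REF = {'matt': 'Matt', 'beth': 'Beth', 'shannon': 'Shannon',
       'kelly': 'Kelly', 'paige': 'Paige', 'evan': 'Evan'}
EXT = {
    'Male': {('Matt',), ('Evan',)},
    'Female': {('Beth',), ('Shannon',), ('Kelly',), ('Paige',)},
    'Married': {('Matt', 'Beth'), ('Beth', 'Matt')},
}

def atomic_true(pred, *names):
    """Rule (I): an atomic sentence is true iff the tuple of the
    referents of its terms, in order, is in the predicate's extension."""
    return tuple(REF[n] for n in names) in EXT[pred]
```

So `atomic_true('Female', 'kelly')` comes out true, and `atomic_true('Married', 'shannon', 'kelly')` false, matching the two examples in the text.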

Let α and β be any M-sentences.

(II) '~α' is true iff α is false.
(III) '(α & β)' is true when both α and β are true; otherwise '(α & β)' is false.
(IV) '(α v β)' is true when at least one of α and β is true; otherwise '(α v β)' is false.
(V) '(α → β)' is true if and only if (iff) α is false or β is true. So, '(α → β)' is false just in case α is true and β is false.

The meanings for '~' and '&' roughly correspond to the meanings of 'not' and 'and' as ordinarily used. We call '~α' and '(α & β)' negation and conjunction formulas, respectively. The formula '(α v β)' is called a disjunction and the meaning of 'v' corresponds to inclusive or. There are a variety of conditionals in English (e.g., causal, counterfactual, logical), each type having a distinct meaning. The conditional defined by (V) above is called the material conditional. One way of following (V) is to see that the truth conditions for '(α → β)' are the same as for '~(α & ~β)'.

By (II) '~Married(shannon, kelly)' is true because, as noted above, 'Married(shannon, kelly)' is false. (II) also tells us that '~Female(kelly)' is false since 'Female(kelly)' is true. According to (III), '(~Married(shannon, kelly) & Female(kelly))' is true because '~Married(shannon, kelly)' is true and 'Female(kelly)' is true. And '(Male(shannon) & Female(shannon))' is false because 'Male(shannon)' is false. (IV) confirms that '(Female(kelly) v Married(evan, evan))' is true because, even though 'Married(evan, evan)' is false, 'Female(kelly)' is true. From (V) we know that the sentence '(~(beth = beth) → Male(shannon))' is true because '~(beth = beth)' is false. If α is false then '(α → β)' is true regardless of whether or not β is true. The sentence '(Female(beth) → Male(shannon))' is false because 'Female(beth)' is true and 'Male(shannon)' is false.
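Rules (II)-(V) amount to the familiar truth tables, and the claim that '(α → β)' has the same truth conditions as '~(α & ~β)' can be checked mechanically. A minimal Python sketch:

```python
def neg(a):       # (II): '~α' is true iff α is false
    return not a

def conj(a, b):   # (III): '(α & β)' is true iff both conjuncts are true
    return a and b

def disj(a, b):   # (IV): '(α v β)' is true iff at least one disjunct is true
    return a or b

def cond(a, b):   # (V): the material conditional '(α → β)'
    return (not a) or b

# (V) has the same truth conditions as '~(α & ~β)', checked over all four rows:
assert all(cond(a, b) == neg(conj(a, neg(b)))
           for a in (True, False) for b in (True, False))
```

In particular `cond(False, b)` is true for either value of `b`, reflecting the point in the text that a material conditional with a false antecedent is true regardless of its consequent.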

Before describing the truth conditions for quantified sentences we need to say something about the notion of satisfaction. We've defined truth only for the formulas of M that are sentences. So, the notions of truth and falsity are not applicable to non-sentences such as 'Male(x)' and '((x = x) → Female(x))' in which 'x' occurs free. However, objects may satisfy wffs that are non-sentences. We introduce the notion of satisfaction with some examples. An object satisfies 'Male(x)' just in case that object is male. Matt satisfies 'Male(x)', Beth does not. This is the case because replacing 'x' in 'Male(x)' with 'matt' yields a truth while replacing the variable with 'beth' yields a falsehood. An object satisfies '((x = x) → Female(x))' if and only if it is either not identical with itself or is a female. Beth satisfies this wff (we get a truth when 'beth' is substituted for the variable in all of its occurrences), Matt does not (putting 'matt' in for 'x' wherever it occurs results in a falsehood). As a first approximation, we say that an object with a name, say 'a', satisfies a wff 'Ψv' in which at most v occurs free if and only if the sentence that results by replacing v in all of its occurrences with 'a' is true. 'Male(x)' is neither true nor false because it is not a sentence, but it is either satisfied or not by a given object. Now we define the truth conditions for quantifications, utilizing the notion of satisfaction. The notion of satisfaction will be revisited below when we formalize the semantics for M and give the model-theoretic characterization of logical consequence.

Let Ψ be any formula of M in which at most v occurs free.

(VI) '∃vΨ' is true just in case there is at least one individual in the domain of quantification (e.g. at least one McKeon) that satisfies Ψ.
(VII) '∀vΨ' is true just in case every individual in the domain of quantification (e.g. every McKeon) satisfies Ψ.

Here are some examples. '∃x(Male(x) & Married(x, beth))' is true because Matt satisfies '(Male(x) & Married(x, beth))'; replacing 'x' wherever it appears in the wff with 'matt' results in a true sentence. The sentence '∃xOlderThan(x, x)' is false because no McKeon satisfies 'OlderThan(x, x)', that is replacing 'x' in 'OlderThan(x, x)' with the name of a McKeon always yields a falsehood.

The universal quantification '∀x( OlderThan(x, paige) → Male(x))' is false for there is a McKeon who doesn't satisfy '(OlderThan(x, paige) → Male(x))'. For example, Shannon does not satisfy '(OlderThan(x, paige) → Male(x))' because Shannon satisfies 'OlderThan(x, paige)' but not 'Male(x)'. The sentence '∀x(x = x)' is true because all McKeons satisfy 'x = x'; replacing 'x' with the name of any McKeon results in a true sentence.

Note that in the explanation of satisfaction we suppose that an object satisfies a wff only if the object is named. But we don't want to presuppose that all objects in the domain of discourse are named. For the purposes of an example, suppose that the McKeons adopt a baby boy, but haven't named him yet. Then, '∃x Brother(x, evan)' is true because the adopted child satisfies 'Brother(x, evan)', even though we can't replace 'x' with the child's name to get a truth. Getting around this is easy enough. We have added a list of names, 'w1', 'w2', 'w3', etc., to M, and we may say that any unnamed object satisfies 'Ψv' iff the replacement of v with a previously unused wi assigned as a name of this object results in a true sentence. In the above scenario, '∃xBrother(x, evan)' is true because, ultimately, treating 'w1' as a temporary name of the child, 'Brother(w1, evan)' is true. Of course, the meanings of the predicates would have to be amended in order to reflect the addition of a new person to the domain of McKeons.
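The substitution idea can be sketched in a few lines of Python. This is our own illustration, not the article's formalism; the set of male McKeons follows the article's examples:

```python
# Sketch of satisfaction-by-substitution (our own illustration).
# An open wff like 'Male(x)' has no truth value, but an object
# satisfies it when substituting the object's name for 'x' yields
# a true sentence.
MALES = {"matt", "evan"}  # names whose bearers are male McKeons

def true_sentence(name):
    # truth of the closed sentence obtained by the substitution
    return name in MALES

def satisfies_male(name):
    # the bearer of `name` satisfies 'Male(x)' iff the result of
    # replacing 'x' with the name is a true sentence
    return true_sentence(name)

# An unnamed object is handled via a temporary name such as 'w1':
# extend MALES (and the other interpretations) and test as before.
```

Matt satisfies 'Male(x)' (`satisfies_male("matt")` is `True`) while Beth does not, matching the examples above.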

3. What is a Logic?

We have characterized an interpreted formal language M by defining what qualifies as a sentence of M and by specifying the conditions under which any M-sentence is true. The received view of logical consequence entails that the logical consequence relation in M turns on the nature of the logical constants in the relevant M-sentences. We shall regard just the sentential connectives, the quantifiers of M, and the identity predicate as logical constants (the language M is a first-order language). For discussion of the notion of a logical constant see Section 5c below.

At the start of this article, it is said that a sentence X is a logical consequence of a set K of sentences, if and only if, in virtue of logic alone, it is impossible for all the sentences in K to be true without X being true as well. A model-theoretic conception of logical consequence in language M clarifies this intuitive characterization of logical consequence by appealing to the semantic properties of the logical constants, represented in the above truth clauses (I)-(VII). In contrast, a deductive-theoretic conception clarifies logical consequence in M, conceived of in terms of deducibility, by appealing to the inferential properties of logical constants portrayed as intuitively valid principles of inference, that is, principles justifying steps in deductions. See Logical Consequence, Deductive-Theoretic Conceptions for a deductive-theoretic characterization of logical consequence in terms of a deductive system, and for a discussion of the relationship between the logical consequence relation and the model-theoretic and deductive-theoretic conceptions of it.

Following Shapiro (1991, p. 3), we define a logic to be a formal language L plus either a model-theoretic or a deductive-theoretic account of logical consequence. A language with both characterizations is a full logic just in case the two characterizations coincide. The logic for M developed below may be viewed as a classical logic or a first-order theory.

4. Model-Theoretic Consequence

The technical machinery to follow is designed to clarify how it is that sentences receive truth-values owing to interpretations of them. We begin by introducing the notion of a structure. Then we revisit the notion of satisfaction in order to make it more precise, and link structures and satisfaction to model-theoretic consequence. We offer a modernized version of the model-theoretic characterization of logical consequence sketched by Tarski and so deviate from the details of Tarski's presentation in his (1936).

a. Truth in a structure

Relative to our language M, a structure U is an ordered pair <D, I>.

(1) D, a non-empty set of elements, is the domain of discourse. Two things to highlight here. First, the domain D of a structure for M may be any set of entities, e.g. the dogs living in Connecticut, the toothbrushes on Earth, the natural numbers, the twelve apostles, etc. Second, we require that D not be the empty set.
(2) I is a function that assigns to each individual constant of M an element of D, and to each n-place predicate of M a subset of Dn (that is, a set of n-tuples taken from D). In essence, I interprets the individual constants and predicates of M, linking them to elements and sets of n-tuples of elements of D. For individual constants c and predicates P, the element IU(c) is the element of D designated by c under IU, and IU(P) is the set of entities assigned by IU as the extension of P.

By 'structure' we mean an L-structure for some first-order language L. The intended structure for a language L is the coarse-grained representation of the piece of the world that we intend L to be about. The intended domain D and its subsets represent the chunk of the world L is being used to talk about and quantify over. The intended interpretation of L's constants and predicates assigns the actual denotations to L's constants and the actual extensions to the predicates. The above semantics for our language M may be viewed, in part, as an informal portrayal of the intended structure of M, which we refer to as UM. That is, we take M to be a tool for talking about the McKeon family with respect to gender, who is older than whom, who admires whom, etc. To make things formally prim and proper we should represent the interpretation of constants as IUM(matt) = Matt, IUM(beth) = Beth, and so on. And the interpretation of predicates can look like IUM(Male) = {Matt, Evan}, IUM(Female) = {Beth, Shannon, Kelly, Paige}, and so on. We assume that this has been done.
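The pair <D, I> can be rendered directly as Python data. The sketch below is our own encoding and records only the extensions the article states ('Male' and 'Female'):

```python
# The intended structure U_M = <D, I> as a pair of Python objects:
# D is the domain; I maps individual constants to elements of D and
# predicates to sets of n-tuples drawn from D.
D = {"Matt", "Beth", "Shannon", "Kelly", "Paige", "Evan"}
I = {
    "matt": "Matt", "beth": "Beth", "shannon": "Shannon",
    "kelly": "Kelly", "paige": "Paige", "evan": "Evan",
    "Male": {("Matt",), ("Evan",)},
    "Female": {("Beth",), ("Shannon",), ("Kelly",), ("Paige",)},
}
U_M = (D, I)

# An atomic sentence P(c) is true in U_M iff <I(c)> is in I(P):
assert (I["matt"],) in I["Male"]
assert (I["beth"],) not in I["Male"]
```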

A structure U for a language L (that is, an L-structure) represents one way that a language can be used to talk about a state of affairs. Crudely, the domain D and the subsets recovered from D constitute a rudimentary representation of a state of affairs, and the interpretation of L's predicates and individual constants makes the language L about the relevant state of affairs. Since a language can be assigned different structures, it can be used to talk about different states of affairs. The class of L-structures represents all the states of affairs that the language L can be used to talk about. For example, consider the following M-structure U'.

D = the set of natural numbers

IU'(beth) = 2
IU'(matt) = 3
IU'(shannon) = 5
IU'(kelly) = 7
IU'(paige) = 11
IU'(evan) = 10
IU'(Male) = {d ∈ D | d is prime}
IU'(Female) = {d ∈ D | d is even}
IU'(Parent) = ∅
IU'(Married) = {<d, d'> ∈ D2 | d + 1 = d'}
IU'(Sister) = ∅
IU'(Brother) = {<d, d'> ∈ D2 | d < d'}
IU'(OlderThan) = {<d, d'> ∈ D2 | d > d'}
IU'(Admires) = ∅
IU'(=) = {<d, d'> ∈ D2 | d = d'}


In specifying the domain D and the values of the interpretation function defined on M's predicates we make use of brace notation, instead of the earlier list notation, to pick out sets. For example, we write

{d ∈ D | d is even}

to say "the set of all elements d of D such that d is even." And

{<d, d'> ∈ D2 | d > d'}

reads: "The set of ordered pairs of elements d, d' of D such that d > d'." Consider: the sentence

OlderThan(beth, matt)

is true in the intended structure UM for <IUM(beth), IUM(matt)> is in IUM(OlderThan). But the sentence is false in U' for <IU'(beth), IU'(matt)> is not in IU'(OlderThan) (because 2 is not greater than 3). The sentence

(Female(beth) & Male(beth))

is not true in UM but is true in U' for IU'(beth) is in IU'(Female) and in IU'(Male) (because 2 is an even prime). In order to avoid confusion it is worth highlighting that when we say that the sentence '(Female(beth) & Male(beth))' is true in one structure and false in another we are saying that one and the same wff with no free variables is true in one state of affairs on an interpretation and false in another state of affairs on another interpretation.
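The alternative structure U' can be encoded the same way. Since the article's domain (all natural numbers) is infinite, the sketch below truncates it to an initial segment so the example runs; the interpretation function otherwise follows the article's U':

```python
# A finite sketch of U': the article's domain is all naturals; we
# truncate to {0, ..., 19} so the set comprehensions terminate.
def is_prime(n):
    return n > 1 and all(n % k for k in range(2, n))

D = set(range(20))
I = {
    "beth": 2, "matt": 3, "shannon": 5,
    "kelly": 7, "paige": 11, "evan": 10,
    "Male": {(d,) for d in D if is_prime(d)},
    "Female": {(d,) for d in D if d % 2 == 0},
    "Married": {(d, e) for d in D for e in D if d + 1 == e},
    "OlderThan": {(d, e) for d in D for e in D if d > e},
}

# 'OlderThan(beth, matt)' is false in U': 2 is not greater than 3.
assert (I["beth"], I["matt"]) not in I["OlderThan"]
# '(Female(beth) & Male(beth))' is true in U': 2 is an even prime.
assert (I["beth"],) in I["Female"] and (I["beth"],) in I["Male"]
```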

b. Satisfaction revisited

Note the general strategy of giving the semantics of the sentential connectives: the truth of a compound sentence formed with any of them is determined by its component well-formed formulas (wffs), which are themselves (simpler) sentences. However, this strategy needs to be altered when it comes to quantificational sentences. For quantificational sentences are built out of open wffs and, as noted above, these component wffs do not admit of truth and falsity. Therefore, we can't think of the truth of, say,

∃x(Female(x) & OlderThan(x, paige))

in terms of the truth of '(Female(x) & OlderThan(x, paige))' for some McKeon x. What we need is a truth-relevant property of open formulas that we may appeal to in explaining the truth-value of the compound quantifications formed from them. Tarski is credited with the solution, first hinted at in the following.

The possibility suggests itself, however, of introducing a more general concept which is applicable to any sentential function [open or closed wff], can be recursively defined, and, when applied to sentences, leads us directly to the concept of truth. These requirements are met by the notion of satisfaction of a given sentential function by given objects. (Tarski 1933, p. 189)

The needed property is satisfaction. The truth of the above existential quantification will depend on there being an object that satisfies both 'Female(x)' and 'OlderThan(x, paige)'. Earlier we introduced the concept of satisfaction by describing the conditions in which one element satisfies an open formula with one free variable. Now we want to develop a picture of what it means for objects to satisfy a wff with n free variables for any n ≥ 0. We begin by introducing the notion of a variable assignment.

A variable assignment is a function g from a set of variables (its domain) to a set of objects (its range). We shall say that the variable assignment g is suitable for a well-formed formula (wff) Ψ of M if every free variable in Ψ is in the domain of g. In order for a variable assignment to satisfy a wff it must be suitable for the formula. For a variable assignment g that is suitable for Ψ, g satisfies Ψ in U iff the object(s) g assigns to the free variable(s) in Ψ satisfy Ψ. Unlike the earlier first-step characterization of satisfaction, there is no appeal to names for the entities assigned to the variables. This has the advantage of not requiring that new names be added to a language that does not have names for everything in the domain. In specifying a variable assignment g, we write α/v, β/v', χ/v'', ... to indicate that g(v) = α, g(v' ) = β, g(v'' ) = χ, etc. We understand

U ⊨ Ψ[g]

to mean that g satisfies Ψ in U.

UM ⊨ OlderThan(x, y)[Shannon/x, Paige/y]

This is true: the variable assignment g, identified with [Shannon/x, Paige/y], satisfies 'OlderThan(x, y)' because Shannon is older than Paige.

UM ⊨ Admires(x, y)[Beth/x, Matt/y]

This is false for this variable assignment does not satisfy the wff: Beth does not admire Matt. However, the following is true because Matt admires Beth.

UM ⊨ Admires(x, y)[Matt/x, Beth/y]

For any wff Ψ, a suitable variable assignment g and structure U together ensure that the terms in Ψ designate elements in D. The structure U ensures that individual constants have referents, and the assignment g ensures that any free variables in Ψ get denotations. For any individual constant c, c[g] is the element IU(c). For each variable v, and assignment g whose domain contains v, v[g] is the element g(v). In effect, the variable assignment treats the variable v as a temporary name. We define t[g] as 'the element designated by t relative to the assignment g'.

c. A formalized definition of truth for Language M

We now give a definition of truth for the language M via the detour through satisfaction. The goal is to define for each formula α of M and each assignment g to the free variables, if any, of α in U what must obtain in order for U ⊨ α[g].

(I) Where R is an n-place predicate and t1, ..., tn are terms, U ⊨ R(t1, ..., tn)[g] if and only if (iff) the n-tuple <t1[g], ..., tn[g]> is in IU(R).
(II) U ⊨ ~α[g] iff it is not true that U ⊨ α[g].
(III) U ⊨ (α & β)[g] iff U ⊨ α[g] and U ⊨ β[g].
(IV) U ⊨ (α v β)[g] iff U ⊨ α[g] or U ⊨ β[g].
(V) U ⊨ (α → β)[g] iff either it is not true that U ⊨ α[g] or U ⊨ β[g].

Before going on to the (VI) and (VII) clauses for quantificational sentences, it is worthwhile to introduce the notion of a variable assignment that comes from another. Consider

∃y(Female(x) & OlderThan(x, y)).

We want to say that a variable assignment g satisfies this wff if and only if there is a variable assignment g' differing from g at most with regard to the object it assigns to the variable y such that g' satisfies '(Female(x) & OlderThan(x, y))'. We say that a variable assignment g' comes from an assignment g when the domain of g' is the domain of g together with a variable v, and g' assigns the same values as g with the possible exception of the element g' assigns to v. In general, we represent an extension g' of an assignment g as follows.

[g, d/v]

This picks out a variable assignment g' which differs at most from g in that v is in its domain and g'(v) = d, for some element d of the domain D. So, it is true that

UM ⊨∃y(Female(x) & OlderThan(x, y)) [Beth/x]

since

UM ⊨ (Female(x) & OlderThan(x, y)) [Beth/x, Paige/y].

What this says is that the variable assignment that comes from the assignment of Beth to 'x' by adding the assignment of Paige to 'y' satisfies '(Female(x) & OlderThan(x, y))' in UM. This is true for Beth is a female who is older than Paige. Now we give the satisfaction clauses for quantificational sentences. Let Ψ be any formula of M.

(VI) U ⊨∃vΨ[g] iff for at least one element d of D, U ⊨ Ψ[g, d/v].
(VII) U ⊨ ∀vΨ[g] iff for all elements d of D, U ⊨ Ψ[g, d/v].

If α is a sentence, then it has no free variables and we write U ⊨ α[g], which says that the empty variable assignment satisfies α in U. The empty variable assignment g does not assign objects to any variables. In short: the definition of truth for language M is

A sentence α is true in U if and only if U ⊨ α[g], that is the empty variable assignment satisfies α in U.

The truth definition specifies the conditions in which a formula of M is true in a structure by explaining how the semantic properties of any formula of M are determined by its construction from semantically primitive expressions (e.g., predicates, individual constants, and variables) whose semantic properties are specified directly. If every member of a set of sentences is true in a structure U we say that U is a model of the set. We now work through some examples. The reader will be aided by referring when needed to the clauses (I)-(VII).
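Clauses (I)-(VII) can also be read as a recursive evaluation procedure over finite structures. The following is a minimal sketch; the tuple encoding of formulas ('not', 'and', 'exists', etc.) and all function names are our own, not the article's notation:

```python
# Structures are pairs (D, I); assignments g are dicts from
# variable names to elements of D.
def denote(t, I, g):
    # t[g]: a variable gets its value from g, a constant from I
    return g[t] if t in g else I[t]

def sat(U, phi, g):
    D, I = U
    op = phi[0]
    if op == "not":                              # clause (II)
        return not sat(U, phi[1], g)
    if op == "and":                              # clause (III)
        return sat(U, phi[1], g) and sat(U, phi[2], g)
    if op == "or":                               # clause (IV)
        return sat(U, phi[1], g) or sat(U, phi[2], g)
    if op == "imp":                              # clause (V)
        return (not sat(U, phi[1], g)) or sat(U, phi[2], g)
    if op == "exists":                           # clause (VI): [g, d/v]
        return any(sat(U, phi[2], {**g, phi[1]: d}) for d in D)
    if op == "forall":                           # clause (VII): [g, d/v]
        return all(sat(U, phi[2], {**g, phi[1]: d}) for d in D)
    # clause (I): atomic R(t1, ..., tn)
    return tuple(denote(t, I, g) for t in phi[1:]) in I[op]

def true_in(U, sentence):
    # truth: satisfaction by the empty variable assignment
    return sat(U, sentence, {})

# A fragment of U_M using only extensions the article states:
U = ({"Matt", "Beth", "Evan"},
     {"matt": "Matt", "beth": "Beth",
      "Male": {("Matt",), ("Evan",)}, "Female": {("Beth",)}})
assert true_in(U, ("exists", "x", ("Male", "x")))
assert not true_in(U, ("forall", "x", ("Male", "x")))
```

The `{**g, phi[1]: d}` update is exactly the extended assignment [g, d/v] introduced above.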

It is true that UM ⊨ ~Married(kelly, kelly)[g], that is, by (II) it is not true that UM ⊨ Married(kelly, kelly)[g], because <kelly[g], kelly[g]> is not in IUM(Married). Hence, by (IV)

UM ⊨ (Married(shannon, kelly) v ~Married(kelly, kelly))[g].

Our truth definition should confirm that

∃x∃y Admires(x, y)

is true in UM. Note that by (VI) UM ⊨ ∃yAdmires(x, y)[g, Paige/x] since UM ⊨ Admires(x, y)[g, Paige/x, Kelly/y]. Hence, by (VI)

UM ⊨∃x∃y Admires(x, y)[g] .

The sentence '∀x∃y(OlderThan(y, x) → Admires(x, y))' is true in UM. By (VII) we know that

UM ⊨ ∀x∃y(OlderThan(y, x) → Admires(x, y))[g]

if and only if

for all elements d of D, UM ⊨ ∃y(OlderThan(y, x) → Admires(x, y))[g, d/x].

This is true. For each element d, there is some element d' such that the extended assignment [g, d/x, d'/y] satisfies '(OlderThan(y, x) → Admires(x, y))' in UM. For instance, d' = d will do: since no McKeon is older than herself, 'OlderThan(y, x)' is not satisfied by [g, d/x, d/y], and so by clause (V) the conditional is satisfied.

d. Model-theoretic consequence defined

For any set K of M-sentences and M-sentence X, we write

K ⊨ X

to mean that every M-structure that is a model of K is also a model of X, that is, X is a model-theoretic consequence of K.

(1) OlderThan(paige, matt)
(2) ∀x(Male(x) → OlderThan(paige, x))

Note that both (1) and (2) are false in the intended structure UM. We show that (2) is not a model-theoretic consequence of (1) by describing a structure which is a model of (1) but not (2). The above structure U' will do the trick. By (I) it is true that U' ⊨ OlderThan(paige, matt)[g] because <paige[g], matt[g]> is in IU'(OlderThan) (because 11 is greater than 3). But, by (VII), it is not the case that

U' ⊨ ∀x(Male(x) → OlderThan(paige, x))[g]

since the variable assignment [g, 13/x] doesn't satisfy '(Male(x) → OlderThan(paige, x))' in U' according to (V), for U' ⊨ Male(x)[g, 13/x] but not U' ⊨ OlderThan(paige, x)[g, 13/x]. So, (2) is not a model-theoretic consequence of (1). Consider the following sentences.

(3) (Admires(evan, paige) → Admires(paige, kelly))
(4) (Admires(paige, kelly) → Admires(kelly, beth))
(5) (Admires(evan, paige) → Admires(kelly, beth))

(5) is a model-theoretic consequence of (3) and (4). For assume otherwise. That is assume, that there is a structure U'' such that

(i) U'' ⊨ (Admires(evan, paige) → Admires(paige, kelly))[g]

and

(ii) U'' ⊨ (Admires(paige, kelly) → Admires(kelly, beth))[g]

but not

(iii) U'' ⊨ (Admires(evan, paige) → Admires(kelly, beth))[g].

By (V), from the assumption that (iii) is false, it follows that U'' ⊨ Admires(evan, paige)[g] and not U'' ⊨ Admires(kelly, beth)[g]. Given the former, in order for (i) to hold according to (V) it must be the case that U'' ⊨ Admires(paige, kelly))[g]. But then it is true that U'' ⊨ Admires(paige, kelly))[g] and false that U'' ⊨ Admires(kelly, beth)[g], which, again appealing to (V), contradicts our assumption (ii). Hence, there is no such U'', and so (5) is a model-theoretic consequence of (3) and (4).
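The argument just given instantiates hypothetical syllogism, and since (3)-(5) are built from three atomic sentences, clause (V) lets us confirm the pattern by brute force. A sketch in our own encoding:

```python
# Brute-force check of the hypothetical-syllogism pattern behind
# (3), (4) |= (5): run through every truth-value combination for
# the three atomic 'Admires' sentences.
from itertools import product

def imp(a, b):
    # clause (V): a conditional holds iff its antecedent fails or
    # its consequent holds
    return (not a) or b

for p, q, r in product([False, True], repeat=3):
    # whenever (3) and (4) come out true, so does (5)
    if imp(p, q) and imp(q, r):
        assert imp(p, r)
```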

Here are some more examples of the model-theoretic consequence relation in action.

(6) ∃xMale(x)
(7) ∃xBrother(x, shannon)
(8) ∃x(Male(x) & Brother(x, shannon))

(8) is not a model-theoretic consequence of (6) and (7). Consider the following structure U'''.

D = {1, 2, 3}

For all M-individual constants c, IU'''(c) = 1.

IU'''(Male) = {2}, IU'''(Brother) = {<3, 1>}. For all other M-predicates P, IU'''(P) = ∅.

Appealing to the satisfaction clauses (I), (III), and (VI), it is fairly straightforward to see that the structure U''' is a model of (6) and (7) but not of (8). For example, U''' is not a model of (8) for there is no element d of D and assignment [d/x] such that

U''' ⊨ (Male(x) & Brother(x, shannon))[g, d/x].
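The structure U''' is small enough to check directly in code; this sketch mirrors the article's specification:

```python
# U''' from the article: D = {1, 2, 3}, I(Male) = {<2>},
# I(Brother) = {<3, 1>}, and every individual constant denotes 1.
D = {1, 2, 3}
Male = {(2,)}
Brother = {(3, 1)}
shannon = 1  # I(shannon) = 1

six = any((d,) in Male for d in D)               # (6) some d is male
seven = any((d, shannon) in Brother for d in D)  # (7) some d is a brother of shannon
eight = any((d,) in Male and (d, shannon) in Brother
            for d in D)                          # (8) some d is both

# U''' is a model of (6) and (7) but not of (8): no single element
# is both in Male and a brother of shannon.
assert six and seven and not eight
```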

Consider the following two sentences

(9) Female(shannon)
(10) ∃x Female(x)

(10) is a model-theoretic consequence of (9). For an arbitrary M-structure U, if U ⊨ Female(shannon)[g], then by satisfaction clause (I), shannon[g] is in IU(Female), and so there is at least one element of D, shannon[g], in IU(Female). Consequently, by (VI), U ⊨∃x Female(x)[g].

For a sentence X of M, we write

⊨ X

to mean that X is a model-theoretic consequence of the empty set of sentences. This means that every M-structure is a model of X. Such sentences represent logical truths; it is not logically possible for them to be false. For example,

⊨ (∀x Male(x) → ∃x Male(x))

is true. Here's one explanation why. Let U be an arbitrary M-structure. We now show that

U ⊨ (∀x Male(x) → ∃x Male(x))[g].

If U ⊨ ∀x Male(x)[g] holds, then by (VII) for every element d of the domain D, U ⊨ Male(x)[g, d/x]. But we know that D is non-empty, by the requirements on structures (see the beginning of Section 4a), and so D has at least one element d. Hence for at least one element d of D, U ⊨ Male(x)[g, d/x], that is, by (VI), U ⊨ ∃x Male(x)[g]. So, if U ⊨ ∀x Male(x)[g] then U ⊨ ∃x Male(x)[g], and, therefore, according to (V),

U ⊨ (∀x Male(x) → ∃x Male(x))[g].

Since U is arbitrary, this establishes

⊨ (∀x Male(x) → ∃x Male(x)).
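For small finite domains the claim can also be verified exhaustively: every non-empty domain and every possible extension for 'Male' makes the conditional true. A brute-force sketch (our own, with domains of up to four elements):

```python
# Check the pattern of (forall x Male(x) -> exists x Male(x)) across
# all structures with domain {0, ..., n-1} for n = 1..4 and every
# possible extension for 'Male'.
from itertools import chain, combinations

def subsets(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

holds_everywhere = True
for n in range(1, 5):                    # domains are non-empty by stipulation
    D = set(range(n))
    for ext in subsets(D):
        male = set(ext)
        forall = all(d in male for d in D)   # clause (VII)
        exists = any(d in male for d in D)   # clause (VI)
        if forall and not exists:            # would falsify clause (V)
            holds_everywhere = False
assert holds_everywhere
```

The check succeeds precisely because D is non-empty; with an empty domain `forall` would hold vacuously while `exists` failed.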

If we treat '=' as a logical constant and require that for all M-structures U, IU(=) = {<d, d'> ∈ D2 | d = d'}, then M-sentences asserting that identity is reflexive, symmetrical, and transitive are true in every M-structure, that is, the following hold.

⊨ ∀x(x = x)
⊨ ∀x∀y((x = y) → (y = x))
⊨ ∀x∀y∀z(((x = y) & (y = z)) → (x = z))

Structures which assign {<d, d'> ∈ D2 | d = d'} to the identity symbol are sometimes called normal models. Letting 'Ψ(v)' be any wff in which just variable v occurs free,

∀x∀y((x = y) → (Ψ(x) → Ψ(y)))

is an instance of the principle that identicals are indiscernible—if x = y then whatever holds of x holds of y—and it is true in every M-structure U that is a normal model. Treating '=' as a logical constant (which is standard) requires that we restrict the class of M-structures appealed to in the above model-theoretic definition of logical consequence to those that are normal models.
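In a normal model the three identity principles need no separate stipulation; they fall out of fixing the extension of '='. A small sketch:

```python
# In a normal model the extension of '=' is {<d, d> | d in D}, so
# reflexivity, symmetry, and transitivity hold automatically.
D = {1, 2, 3}
Id = {(d, e) for d in D for e in D if d == e}

assert all((d, d) in Id for d in D)                 # forall x (x = x)
assert all((e, d) in Id for (d, e) in Id)           # symmetry
assert all((d, f) in Id                             # transitivity
           for (d, e) in Id for (e2, f) in Id if e == e2)
```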

5. The Status of the Model-Theoretic Characterization of Logical Consequence

Logical consequence in language M has been defined in terms of the model-theoretic consequence relation. What is the status of this definition? We answered this question in part in Logical Consequence, Deductive-Theoretic Conceptions: Section 5a by highlighting Tarski's argument for holding that the model-theoretic conception of logical consequence is more basic than any deductive-system account of it. Tarski points to the fact that there are languages for which valid principles of inference can't be represented in a deductive system, but the logical consequence relation they determine can be represented model-theoretically. In what follows, we identify the type of definition the model-theoretic characterization of logical consequence is, and then discuss its adequacy.

a. The model-theoretic characterization is a theoretical definition of logical consequence

In order to determine the success of the model-theoretic characterization, we need to know what type of definition it is. Clearly it is not intended as a lexical definition. As Tarski's opening passage in his (1936) makes clear, a theory of logical consequence need not yield a report of what 'logical consequence' means. On the other hand, it is clear that Tarski doesn't see himself as offering just a stipulative definition. Tarski is not merely stating how he proposes to use 'logical consequence' and 'logical truth' (but see Tarski 1986) any more than Newton was just proposing how to use certain words when he defined force in terms of mass and acceleration. Newton was invoking a fundamental conceptual relationship in order to improve our understanding of the physical world. Similarly, Tarski's definition of 'logical consequence' in terms of model-theoretic consequence is supposed to help us formulate a theory of logical consequence that deepens our understanding of what Tarski calls the common concept of logical consequence. Tarski thinks that the logical consequence relation is commonly regarded as necessary, formal, and a priori. As Tarski (1936, p. 409) says, "The concept of logical consequence is one of those whose introduction into a field of strict formal investigation was not a matter of arbitrary decision on the part of this or that investigator; in defining this concept efforts were made to adhere to the common usage of the language of everyday life."

Let's follow this approach in Tarski's (1936) and treat the model-theoretic definition as a theoretical definition of 'logical consequence'. The questions raised are whether the Tarskian model-theoretic definition of logical consequence leads to a good theory and whether it improves our understanding of logical consequence. In order to sketch a framework for thinking about these questions, we review the key moves in the Tarskian analysis. In what follows, K is an arbitrary set of sentences from a language L, and X is any sentence from L. First, Tarski observes what he takes to be the commonly regarded features of logical consequence (necessity, formality, and apriority) and makes the following claim.

(1) X is a logical consequence of K if and only if (a) it is not possible for all the sentences in K to be true and X false, (b) this is due to the forms of the sentences, and (c) this is known a priori.

Tarski's deep insight was to see the criteria (a)-(c) in terms of the technical notion of truth in a structure. The key step in his analysis is to embody the above criteria (a)-(c) in terms of the notion of a possible interpretation of the non-logical terminology in sentences. Substituting for (a)-(c) in (1) we get

(2) X is a logical consequence of K if and only if there is no possible interpretation of the non-logical terminology of the language according to which all the sentences in K are true and X is false.

The third step of the Tarskian analysis of logical consequence is to use the technical notion of truth in a structure or model to capture the idea of a possible interpretation. That is, we understand there is no possible interpretation of the non-logical terminology of the language according to which all of the sentences in K are true and X is false in terms of: Every model of K is a model of X, that is, K ⊨ X.

To elaborate, as reflected in (2), the analysis turns on a selection of terms as logical constants. This is represented model-theoretically by allowing the interpretation of the non-logical terminology to change from one structure to another, and by making the interpretation of the logical constants invariant across the class of structures. Then, relative to a set of terms treated as logical, the Tarskian, model-theoretic analysis is committed to

(3) X is a logical consequence of K if and only if K ⊨ X.

and

(4) X is a logical truth, that is, it is logically impossible for X to be false, if and only if ⊨ X.

As a theoretical definition, we expect the ⊨-relation to reflect the essential features of the common concept of logical consequence. By Tarski's lights, the ⊨-consequence relation should be necessary, formal, and a priori. Note that model theory by itself does not provide the means for drawing a boundary between the logical and the non-logical. Indeed, its use presupposes that a list of logical terms is in hand. For example, taking 'Sister' and 'Female' to be logical constants, the consequence relation from (A) 'Sister(kelly, paige)' to (B) 'Female(kelly)' is necessary, formal, and a priori. So perhaps (B) should be a logical consequence of (A). The fact that (B) is not a model-theoretic consequence of (A) is due to the fact that the interpretation of the two predicates can vary from one structure to another. To remedy this we could make the interpretation of the two predicates invariant so that '∀x(∃y Sister(x, y) → Female(x))' is true in all structures, and, therefore, if (A) is true in a structure, (B) is too. The point here is that the use of models to capture the logical consequence relation requires a prior choice of what terms to treat as logical. This is, in turn, reflected in the identification of the terms whose interpretation is constant from one structure to another.

So in assessing the success of the Tarskian model-theoretic definition of logical consequence for a language L, two issues arise. First, does the model-theoretic consequence relation reflect the salient features of the common concept of logical consequence? Second, is the boundary in L between logical and non-logical terms correctly drawn? In other words: what in L qualifies as a logical constant? Both questions are motivated by the adequacy criteria for theoretical definitions of logical consequence. They are central questions in the philosophy of logic and their significance is at least partly due to the prevalent use of model theory in logic to represent logical consequence in a variety of languages. In what follows, I sketch some responses to the two questions that draw on contemporary work in philosophy of logic. I begin with the first question.

b. Does the model-theoretic consequence relation reflect the salient features of the common concept of logical consequence?

The ⊨-consequence relation is formal. Also, a brief inspection of the above justifications that K ⊨ X obtains for given K and X reveals that the ⊨-consequence relation is a priori. Does the ⊨-consequence relation capture the modal element in the common concept of logical consequence? There are critics who argue that the model-theoretic account lacks the conceptual resources to rule out the possibility of there being logically possible situations in which sentences in K are true and X is false but no structure U such that U ⊨ K and not U ⊨ X. Kneale (1961) is an early critic, and Etchemendy (1988, 1999) offers a sustained and multi-faceted attack. We follow Etchemendy. Consider the following three sentences.

(1) (Female(shannon) & ~Married(shannon, matt))
(2) (~Female(matt) & Married(beth, matt))
(3) ~Female(beth)

(3) is neither a logical nor a model-theoretic consequence of (1) and (2). However, in order for a structure to make (1) and (2) true but not (3) its domain must have at least three elements. If the world contained, say, just two things, then there would be no such structure and (3) would be a model-theoretic consequence of (1) and (2). But in this scenario, (3) would not be a logical consequence of (1) and (2) because it would still be logically possible for the world to be larger and in such a possible situation (1) and (2) can be interpreted true and (3) false. The problem raised for the model-theoretic account of logical consequence is that we do not think that the class of logically possible situations varies under different assumptions as to the cardinality of the world's elements. But the class of structures surely does since they are composed of worldly elements. This is a tricky criticism. Let's look at it from a slightly different vantage point.

We might think that the extension of the logical consequence relation for an interpreted language such as our language M about the McKeons is necessary. For example, it can't be the case that, for some K and X, X isn't a logical consequence of K but could have been. So, on the supposition that the world contains less, the extension of the logical consequence relation should not expand. However, the extension of the model-theoretic consequence relation does expand. For example, (3) is not, in fact, a model-theoretic consequence of (1) and (2), but it would be if there were just two things. This is evidence that the model-theoretic characterization has failed to capture the modal notion inherent in the common concept of logical consequence.

In defense of Tarski (see Ray 1999 and Sher 1991 for defenses of the Tarskian analysis against Etchemendy), one might question the force of the criticism because it rests on the supposition that it is possible for there to be just finitely many things. How could there be just two things? Indeed, if we countenance an infinite totality of necessary existents such as abstract objects (e.g., pure sets), then the class of structures will be fixed relative to an infinite collection of necessary existents, and the above criticism that turns on it being possible that there are just n things for finite n doesn't go through (for discussion see McGee 1999). One could reply that while it is metaphysically impossible for there to be merely finitely many things it is nevertheless logically possible and this is relevant to the modal notion in the concept of logical consequence. This reply requires the existence of primitive, basic intuitions regarding the logical possibility of there being just finitely many things. However, intuitions about possible cardinalities of worldly individuals—not informed by mathematics and science—tend to run stale. Consequently, it is hard to debate this reply: one either has the needed logical intuitions, or not.

What is clear is that our knowledge of what is a model-theoretic consequence of what in a given L depends on our knowledge of the class of L-structures. Since such structures are furniture of the world, our knowledge of the model-theoretic consequence relation is grounded on knowledge of substantive facts about the world. Even if such knowledge is a priori, it is far from obvious that our a priori knowledge of the logical consequence relation is so substantive. One might argue that knowledge of what follows from what shouldn't turn on worldly matters of fact, even if they are necessary and a priori (see the discussion of the locked room metaphor in Logical Consequence, Philosophical Considerations: Section 2.2.1). If correct, this is a strike against the model-theoretic definition. However, this standard logical positivist line has been recently challenged by those who see logic penetrated and permeated by metaphysics (e.g., Putnam 1971, Almog 1989, Sher 1991, Williamson 1999). We illustrate the insight behind the challenge with a simple example. Consider the following two sentences.

(4) ∃x(Female(x) & Sister(x, evan))
(5) ∃x Female(x)

(5) is a logical consequence of (4), that is, there is no domain for the quantifiers and no interpretation of the predicates and the individual constant in that domain which makes (4) true and not (5). Why? Because on any interpretation of the non-logical terminology, (4) is true just in case the intersection of the set of objects that satisfy Female(x) and the set of objects that satisfy Sister(x, evan) is non-empty. If this obtains, then the set of objects that satisfy Female(x) is non-empty and this makes (5) true. The basic metaphysical truth underlying the reasoning here is that for any two sets, if their intersection is non-empty, then neither set is the empty set. This necessary and a priori truth about the world, in particular about its set-theoretic part, is an essential reason why (5) follows from (4). This approach, reflected in the model-theoretic consequence relation (see Sher 1996), can lead to an intriguing view of the formality of logical consequence reminiscent of the pre-Wittgensteinian views of Russell and Frege. Following the above, the consequence relation from (4) to (5) is formal because the metaphysical truth on which it turns describes a formal (structural) feature of the world. In other words: it is not possible for (4) to be true and (5) false because

For any extensions of P, P', if an object α satisfies '(P(v) & P'(v, n))', then α satisfies 'P(v)'.

According to this vision of the formality of logical consequence, the consequence relation between (4) and (5) is formal because the displayed principle expresses a formal feature of reality. Russell writes that "Logic, I should maintain, must no more admit a unicorn than zoology can; for logic is concerned with the real world just as truly as zoology, though with its more abstract and general features" (Russell 1919, p. 169). If we take the abstract and general features of the world to be its formal features, then Russell's remark captures the view of logic that emerges from anchoring the necessity, formality, and apriority of logical consequence in the formal features of the world. The question arises as to what counts as a formal feature of the world. If we say that all set-theoretic truths depict formal features of the world, including claims about how many sets there are, then this would seem to justify making

∃x∃y~(x = y)

(that is, there are at least two individuals) a logical truth since it is necessary, a priori, and a formal truth. To reflect model-theoretically that such sentences, which consist just of logical terminology, are logical truths we would require that the domain of a structure simply be the collection of the world's individuals. See Sher (1991) for an elaboration and defense of this view of the formality of logical truth and consequence. See Shapiro (1993) for further discussion and criticism of the project of grounding our logical knowledge on primitive intuitions of logical possibility instead of on our knowledge of metaphysical truths.
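Returning to (4) and (5), the set-theoretic reasoning behind the consequence can be checked mechanically. The following sketch is only an illustration, not part of the article's formal apparatus: it uses a hypothetical three-element domain and brute-forces every interpretation of 'Female', 'Sister', and 'evan' over that domain, confirming that none makes (4) true and (5) false.

```python
from itertools import combinations

def powerset(s):
    """All subsets of s, as a list of sets."""
    s = list(s)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

domain = {0, 1, 2}  # a small sample domain (an assumption for illustration)
pairs = [(a, b) for a in domain for b in domain]

counterexamples = 0
for female in powerset(domain):        # every interpretation of 'Female'
    for sister in powerset(pairs):     # every interpretation of 'Sister'
        for evan in domain:            # every interpretation of 'evan'
            p4 = any(x in female and (x, evan) in sister for x in domain)
            p5 = any(x in female for x in domain)
            if p4 and not p5:
                counterexamples += 1

print(counterexamples)  # 0: no interpretation verifies (4) and falsifies (5)
```

Any witness to (4) lies in the intersection of the two satisfier sets, which can be non-empty only if the 'Female' set is; the loop merely confirms this on a finite domain.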

Part of the difficulty in reaching a consensus with respect to whether or not the model-theoretic consequence relation reflects the salient features of the common concept of logical consequence is that philosophers and logicians differ over what the features of the common concept are. Some offer accounts of the logical consequence relation according to which it is not a priori (e.g., see Koslow 1999, Sher 1991 and see Hanson 1997 for criticism of Sher) or deny that it even need be strongly necessary (Smiley 1995, 2000, section 6). Here we illustrate with a quick example.

Given that we know that a McKeon only admires those who are older (that is, we know that (a) ∀x∀y(Admires(x, y) → OlderThan(y, x))), wouldn't we take (7) to be a logical consequence of (6)?

(6) Admires(paige, kelly)
(7) OlderThan(kelly, paige)

A Tarskian response is that (7) is not a consequence of (6) alone, but of (6) plus (a). So in thinking that (7) follows from (6), one assumes (a). A counter suggestion is to say that (7) is a logical consequence of (6) for if (6) is true, then necessarily-relative-to-the-truth-of-(a) (7) is true. The modal notion here is a weakened sense of necessity: necessity relative to the truth of a collection of sentences, which in this case is composed of (a). Since (a) is not a priori, neither is the consequence relation between (6) and (7). The motive here seems to be that this conception of modality is inherent in the notion of logical consequence that drives deductive inference in science, law, and other fields outside of the logic classroom. This supposes that a theory of logical consequence must not only account for the features of the intuitive concept of logical consequence but also reflect the intuitively correct deductive inferences. After all, the logical consequence relation is the foundation of deductive inference: it is not correct to deductively infer B from A unless B is a logical consequence of A. Referring to our example, in a conversation where (a) is a truth that is understood and accepted by the conversants, the inference from (6) to (7) seems legitimate. Hence, this should be supported by an accompanying concept of logical consequence. This idea of construing the common concept of logical consequence in part by the lights of basic intuitions about correct inferences is reflected in the Relevance logician's objection to the Tarskian account. The Relevance logician claims that X is not a logical consequence of K unless K is relevant to X. For example, consider the following pairs of sentences.

First pair:
(1) (Female(evan) & ~Female(evan))
(2) Admires(kelly, shannon)

Second pair:
(1) Admires(kelly, paige)
(2) (Female(evan) v ~Female(evan))

In the first pair, (1) is logically false, and in the second, (2) is a logical truth. Hence it isn't possible for (1) to be true and (2) false. Since this seems to be formally determined and a priori, for each pair (2) is a logical consequence of (1) according to Tarski. Against this Anderson and Belnap write, "the fancy that relevance is irrelevant to validity [that is logical consequence] strikes us as ludicrous, and we therefore make an attempt to explicate the notion of relevance of A to B" (Anderson and Belnap 1975, pp. 17-18). The typical support for the relevance conception of logical consequence draws on intuitions regarding correct inference, e.g. it is counterintuitive to think that it is correct to infer (2) from (1) in either pair for what does being a female have to do with who one admires? Would you think it correct to infer, say, that Admires(kelly, shannon) on the basis of (Female(evan) & ~Female(evan))? For further discussion of the different types of relevance logic and more on the relevant philosophical issues see Haack (1978, pp. 198-203) and Read (1995, pp. 54-63). The bibliography in Haack (1996, pp. 264-265) is helpful. For further discussion on relevance logic, see Logical Consequence, Deductive-Theoretic Conceptions: Section 5.2.1.

Our question is, does the model-theoretic consequence relation reflect the essential features of the common concept of logical consequence? Our discussion illustrates at least two things. First, it isn't obvious that the model-theoretic definition of logical consequence reflects the Tarskian portrayal of the common concept. One option, not discussed above, is to deny that the model-theoretic definition is a theoretical definition and argue for its utility simply on the basis that it is extensionally equivalent with the common concept (see Shapiro 1998). Our discussion also illustrates that Tarski's identification of the essential features of logical consequence is disputed. One reaction, not discussed above, is to question the presupposition of the debate and take a more pluralist approach to the common concept of logical consequence. On this line, it is not so much that the common concept of logical consequence is vague as it is ambiguous. At minimum, to say that a sentence X is a logical consequence of a set K of sentences is to say that X is true in every circumstance (that is logically possible situation) in which the sentences in K are true. "Different disambiguations of this notion arise from taking different extensions of the term 'circumstance' " (Restall 2002, p. 427). If we disambiguate the relevant notion of 'circumstance' by the lights of Tarski, 'Admires(kelly, paige)' is a logical consequence of '(Female(evan) & ~Female(evan))'. If we follow the Relevance logician, then not. There is no fact of the matter about whether or not the first sentence is a logical consequence of the second independent of such a disambiguation.

c. What is a logical constant?

We turn to the second, related issue of what qualifies as a logical constant. Tarski (1936, 418-419) writes,

No objective grounds are known to me which permit us to draw a sharp boundary between [logical and non-logical terms]. It seems possible to include among logical terms some which are usually regarded by logicians as extra-logical without running into consequences which stand in sharp contrast to ordinary usage.

And at the end of his (1936), he tells us that the fluctuation in the common usage of the concept of consequence would be accurately reflected in a relative concept of logical consequence, that is, a relative concept "which must, on each occasion, be related to a definite, although in greater or less degree arbitrary, division of terms into logical and extra-logical" (p. 420). Unlike the relativity described in the previous paragraph, which speaks to the features of the concept of logical consequence, the relativity contemplated by Tarski concerns the selection of logical constants. Tarski's observations of the common concept do not yield a sharp boundary between logical and non-logical terms. It seems that the sentential connectives and the quantifiers of our language M about the McKeons qualify as logical if any terms of M do. We've also followed many logicians and included the identity predicate as logical. (See Quine 1986 for considerations against treating '=' as a logical constant.) But why not include other predicates such as 'OlderThan'?

(1) OlderThan(kelly, paige)
(2) ~OlderThan(paige, kelly)
(3) ~OlderThan(kelly, kelly)

Then the consequence relation from (1) to (2) is necessary, formal, and a priori and the truth of (3) is necessary, formal and also a priori. If treating 'OlderThan' as a logical constant does not do violence to our intuitions about the features of the common concept of logical consequence and truth, then it is hard to see why we should forbid such a treatment. By the lights of the relative concept of logical consequence, there is no fact of the matter about whether (2) is a logical consequence of (1) since it is relative to the selection of 'OlderThan' as a logical constant. On the other hand, Tarski hints that even by the lights of the relative concept there is something wrong in thinking that B follows from A and B only relative to taking 'and' as a logical constant. Rather, B follows from A and B we might say absolutely since 'and' should be on everybody's list of logical constants. But why do 'and' and the other sentential connectives, along with the identity predicate and the quantifiers have more of a claim to logical constancy than, say, 'OlderThan'? Tarski (1936) offers no criteria of logical constancy that help answer this question.

On the contemporary scene, there are three general approaches to the issue of what qualifies as a logical constant. One approach is to argue for an inherent property (or properties) of logical constancy that some expressions have and others lack. For example, topic neutrality is one feature traditionally thought to essentially characterize logical constants. The sentential connectives, the identity predicate, and the quantifiers seem topic neutral: they seem applicable to discourse on any topic. The predicates other than identity such as 'OlderThan' do not appear to be topic neutral, at least as standardly interpreted, e.g., 'OlderThan' has no application in the domain of natural numbers. One way of making the concept of topic neutrality precise is to follow Tarski's suggestion in his (1986) that the logical notions expressed in a language L are those notions that are invariant under all one-one transformations of the domain of discourse onto itself. A one-one transformation of the domain of discourse onto itself is a one-one function whose domain and range coincide with the domain of discourse. And a one-one function is a function that always assigns different values to different objects in its domain (that is, for all x and y in the domain of f, if f(x) = f(y), then x = y).

Consider 'OlderThan'. By Tarski's lights, the notion expressed by the predicate is its extension, that is the set of ordered pairs <d, d'> such that d is older than d'. Recall that the extension is:

{<Beth, Matt>, <Beth, Shannon>, <Beth, Kelly>, <Beth, Paige>, <Beth, Evan>, <Matt, Shannon>, <Matt, Kelly>, <Matt, Paige>, <Matt, Evan>, <Shannon, Kelly>, <Shannon, Paige>, <Shannon, Evan>, <Kelly, Paige>, <Kelly, Evan>, <Paige, Evan>}.

If 'OlderThan' is a logical constant, its extension (the notion it expresses) should be invariant under every one-one transformation of the domain of discourse (that is, the set of McKeons) onto itself. A set is invariant under a one-one transformation f when the set is carried onto itself by the transformation. For example, the extension of 'Female' is invariant under f when for every d, d is a female if and only if f(d) is. 'OlderThan' is invariant under f when <d, d'> is in the extension of 'OlderThan' if and only if <f(d), f(d')> is. Clearly, the extensions of the 'Female' predicate and the 'OlderThan' relation are not invariant under every one-one transformation. For example, Beth is older than Matt, but f(Beth) is not older than f(Matt) when f(Beth) = Evan and f(Matt) = Paige. Compare the identity relation: it is invariant under every one-one transformation of the domain of McKeons because, for any one-one f, d = d' if and only if f(d) = f(d'). The invariance condition makes precise the concept of topic neutrality. Any expression whose extension is altered by a one-one transformation must discriminate among elements of the domain, making the relevant notions topic-specific. The invariance condition can be extended in a straightforward way to the quantifiers and sentential connectives (see McCarthy 1981 and McGee 1997). Here I illustrate with the existential quantifier. Let Ψ be a well-formed formula with 'x' as its only free variable. '∃x Ψ' has a truth-value in the intended structure UM for our language M about the McKeons. Let f be an arbitrary one-one transformation of the domain D of McKeons onto itself. The function f determines an interpretation I' for Ψ in the range D' of f. The existential quantifier satisfies the invariance requirement for UM: UM ⊨ ∃x Ψ if and only if U ⊨ ∃x Ψ for every U derived by a one-one transformation f of the domain D of UM (we say that the U's are isomorphic with UM).
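The invariance test can be run mechanically on a small sample. The sketch below is only an illustration: it restricts attention to three McKeons, using the older-than facts from the extension listed above, and checks whether a relation's extension is carried onto itself by every one-one transformation of the domain.

```python
from itertools import permutations

domain = ['Beth', 'Matt', 'Shannon']  # a three-member sample of the domain
older = {('Beth', 'Matt'), ('Beth', 'Shannon'), ('Matt', 'Shannon')}
identity = {(d, d) for d in domain}

def invariant(rel):
    """True if rel is carried onto itself by every one-one
    transformation (permutation) of the domain onto itself."""
    for perm in permutations(domain):
        f = dict(zip(domain, perm))
        if {(f[a], f[b]) for (a, b) in rel} != rel:
            return False
    return True

print(invariant(identity))  # True: identity passes the invariance test
print(invariant(older))     # False: e.g. swapping Beth and Matt alters it
```

The identity relation passes because any one-one f maps each pair <d, d> to <f(d), f(d)>, while 'OlderThan' fails as soon as a transformation reverses the age order of two McKeons.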

For example, consider the following existential quantification.

∃x Female(x)

This is true in the intended structure for our language M about the McKeons (that is, UM ⊨∃x Female(x)[g]) ultimately because the set of elements that satisfy 'Female(x)' on some variable assignment that extends g is non-empty (recall that Beth, Shannon, Kelly, and Paige are females). The cardinality of the set of McKeons that satisfy an M-formula is invariant under every one-one transformation of the domain of McKeons onto itself. Hence, for every U isomorphic with UM, the set of elements from DU that satisfy 'Female(x)' on some variable assignment that extends g is non-empty and so

U ⊨∃x Female(x)[g].

Speaking to the other part of the invariance requirement given at the end of the previous paragraph, clearly for every U isomorphic with UM, if U ⊨∃x Female(x)[g], then UM ⊨∃x Female(x)[g] (since UM is isomorphic with itself). Crudely, the topic neutrality of the existential quantifier is confirmed by the fact that it is invariant under all one-one transformations of the domain of discourse onto itself.

Key here is that the cardinality of the subset of the domain D that satisfies an L-formula under an interpretation is invariant under every one-one transformation of D onto itself. For example, if at least two elements from D satisfy a formula on an interpretation of it, then at least two elements from D' satisfy the formula under the I' induced by f. This makes not only 'All' and 'Some' topic neutral, but also any cardinality quantifier such as 'Most', 'Finitely many', 'Few', 'At least two', etc. The view suggested in Tarski (1986, p. 149) is that the logic of a language L is the science of all notions expressible in L which are invariant under one-one transformations of L's domain of discourse. For further discussion, defense of, and extensions of the Tarskian invariance requirement on logical constancy, in addition to McCarthy (1981) and McGee (1997), see Sher (1989, 1991).
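The cardinality point can be illustrated in the same style. This sketch (an illustration over the six McKeons named in the extension above, with the female set as stipulated earlier in the article) confirms that every one-one transformation of the domain preserves the cardinality of a satisfier set:

```python
from itertools import permutations

domain = ['Beth', 'Matt', 'Shannon', 'Kelly', 'Paige', 'Evan']
female = {'Beth', 'Shannon', 'Kelly', 'Paige'}  # satisfiers of 'Female(x)'

# Every one-one transformation of the domain onto itself maps the
# satisfier set to an image of the same size, so cardinality
# quantifiers ('Some', 'At least two', ...) pass the invariance test.
preserved = all(
    len({dict(zip(domain, perm))[d] for d in female}) == len(female)
    for perm in permutations(domain)
)
print(preserved)  # True
```

Because a one-one function never collapses two elements into one, the image of any satisfier set has exactly the cardinality of the original, which is what makes cardinality quantifiers topic neutral on this criterion.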

A second approach to what qualifies as a logical constant is not to make topic neutrality a necessary condition for logical constancy. This undercuts at least some of the significance of the invariance requirement. Instead of thinking that there is an inherent property of logical constancy, we allow the choice of logical constants to depend, at least in part, on the needs at hand, as long as the resulting consequence relation reflects the essential features of the intuitive, pre-theoretic concept of logical consequence. I take this view to be very close to the one that we are left with by default in Tarski (1936). The approach is suggested in Prior (1976) and developed in related but different ways in Hanson (1996) and Warmbrod (1999). It amounts to regarding logic in a strict sense and loose sense. Logic in the strict sense is the science of what follows from what relative to topic neutral expressions, and logic in the loose sense is the study of what follows from what relative to both topic neutral expressions and those topic centered expressions of interest that yield a consequence relation possessing the salient features of the common concept.

Finally, a third approach to the issue of what makes an expression a logical constant is simply to reject the view of logical consequence as a formal consequence relation, thereby nullifying the need to distinguish logical terminology in the first place (see Etchemendy 1983 and Bencivenga 1999). We just say, for example, that X is a logical consequence of a set K of sentences if the supposition that all of the K are true and X false violates the meaning of component terminology. Hence, 'Female(kelly)' is a logical consequence of 'Sister(kelly, paige)' simply because the supposition otherwise violates the meaning of the predicates. Whether or not 'Female' and 'Sister' are logical terms doesn't come into play.

6. Conclusion

Using the first-order language M as the context for our inquiry, we have discussed the model-theoretic conception of the conditions that must be met in order for a sentence to be a logical consequence of others. This theoretical characterization is motivated by a distinct development of the common concept of logical consequence. The issue of the nature of logical consequence, which intersects with other areas of philosophy, is still a matter of debate. Any full coverage of the topic would involve study of the logical consequence relation between sentences from other types of languages such as modal languages (containing necessity and possibility operators) (see Hughes and Cresswell 1996) and second-order languages (containing variables that range over properties) (see Shapiro 1991). See also the entries, Logical Consequence, Philosophical Considerations, and Logical Consequence, Deductive-Theoretic Conceptions, in the encyclopedia.

7. References and Further Reading

  • Almog, J. (1989): "Logic and the World", pp. 43-65 in Themes From Kaplan, ed. J. Almog, J. Perry, and H. Wettstein. New York: Oxford University Press.
  • Anderson, A.R., and N. Belnap (1975): Entailment: The Logic of Relevance and Necessity. Princeton: Princeton University Press.
  • Bencivenga, E. (1999): "What is Logic About?", pp. 5-19 in Varzi (1999).
  • Etchemendy, J. (1983): "The Doctrine of Logic as Form", Linguistics and Philosophy 6, pp. 319-334.
  • Etchemendy, J. (1988): "Tarski on truth and logical consequence", Journal of Symbolic Logic 53, pp. 51-79.
  • Etchemendy, J. (1999): The Concept of Logical Consequence. Stanford: CSLI Publications.
  • Haack, S. (1978): Philosophy of Logics. Cambridge: Cambridge University Press.
  • Haack, S. (1996): Deviant Logic, Fuzzy Logic. Chicago: The University of Chicago Press.
  • Hanson, W. (1997): "The Concept of Logical Consequence", The Philosophical Review 106, pp. 365-409.
  • Hughes, G. E. and M.J Cresswell (1996): A New Introduction to Modal Logic. London: Routledge.
  • Kneale, W. (1961): "Universality and Necessity", British Journal for the Philosophy of Science 12, pp. 89-102.
  • Kneale, W. and M. Kneale (1986): The Development of Logic. Oxford: Clarendon Press.
  • Koslow, A. (1999): "The Implicational Nature of Logic: A Structuralist Account", pp. 111-155 in Varzi (1999).
  • McCarthy, T. (1981): "The Idea of a Logical Constant", Journal of Philosophy 78, pp. 499-523.
  • McCarthy, T. (1998): "Logical Constants", pp. 599-603 in Routledge Encyclopedia of Philosophy, vol. 5, ed. E. Craig. London: Routledge.
  • McGee, V. (1999): "Two Problems with Tarski's Theory of Consequence", Proceedings of the Aristotelian Society 92, pp. 273-292.
  • Priest, G. (1995): "Etchemendy and Logical Consequence", Canadian Journal of Philosophy 25, pp. 283-292.
  • Prior, A. (1976): "What is Logic?", pp. 122-129 in Papers in Logic and Ethics ed. P.T. Geach and A. Kenny. Amherst: University of Massachusetts Press.
  • Putnam, H. (1971): Philosophy of Logic. New York: Harper & Row.
  • Quine, W.V. (1986): Philosophy of Logic, 2nd ed. Cambridge: Harvard University Press.
  • Ray, G. (1996): "Logical Consequence: A Defense of Tarski", Journal of Philosophical Logic 25, pp. 617-677.
  • Read, S. (1995): Thinking About Logic. Oxford: Oxford University Press.
  • Restall, G. (2002): "Carnap's Tolerance, Meaning, And Logical Pluralism", Journal of Philosophy 99, pp. 426-443.
  • Russell, B. (1919): Introduction to Mathematical Philosophy. London: Routledge, 1993 printing.
  • Shapiro, S. (1991): Foundations without Foundationalism: A Case For Second-order Logic. Oxford: Clarendon Press.
  • Shapiro, S. (1993): "Modality and Ontology", Mind 102, pp. 455-481.
  • Shapiro, S. (1998): "Logical Consequence: Models and Modality", pp. 131-156 in The Philosophy of Mathematics Today, ed. Matthias Schirn. Oxford: Clarendon Press.
  • Sher, G. (1989): "A Conception of Tarskian Logic", Pacific Philosophical Quarterly 70, pp. 341-368.
  • Sher, G. (1991): The Bounds of Logic: A Generalized Viewpoint. Cambridge, Mass: MIT Press.
  • Sher, G. (1996): "Did Tarski Commit 'Tarski's Fallacy'?" Journal of Symbolic Logic 61, pp. 653-686.
  • Sher, G. (1999): "Is Logic a Theory of the Obvious?", pp.207-238 in Varzi (1999).
  • Smiley, T. (1995): "A Tale of Two Tortoises", Mind 104, pp. 725-36.
  • Smiley, T. (1998): "Consequence, Conceptions of", pp. 599-603 in Routledge Encyclopedia of Philosophy, vol. 2, ed. E. Craig. London: Routledge.
  • Tarski, A. (1933): "Pojęcie prawdy w językach nauk dedukcyjnych", translated as "On the Concept of Truth in Formalized Languages", pp. 152-278 in Tarski (1983).
  • Tarski, A. (1936): "On the Concept of Logical Consequence", pp. 409-420 in Tarski (1983).
  • Tarski, A. (1983): Logic, Semantics, Metamathematics 2nd ed. Indianapolis: Hackett Publishing.
  • Tarski, A. (1986): "What are Logical Notions?" History and Philosophy of Logic 7, pp. 143-154.
  • Varzi, A., ed. (1999): European Review of Philosophy, vol. 4, The Nature of Logic. Stanford: CSLI Publications.
  • Warmbrod, K. (1999): "Logical Constants", Mind 108, pp. 503-538.

Author Information

Matthew McKeon
Email: mckeonm@msu.edu
Michigan State University
U. S. A.

Propositional Logic

Propositional Logic

Propositional logic, also known as sentential logic and statement logic, is the branch of logic that studies ways of joining and/or modifying entire propositions, statements or sentences to form more complicated propositions, statements or sentences, as well as the logical relationships and properties that are derived from these methods of combining or altering statements. In propositional logic, the simplest statements are considered as indivisible units, and hence, propositional logic does not study those logical properties and relations that depend upon parts of statements that are not themselves statements on their own, such as the subject and predicate of a statement. The most thoroughly researched branch of propositional logic is classical truth-functional propositional logic, which studies logical operators and connectives that are used to produce complex statements whose truth-value depends entirely on the truth-values of the simpler statements making them up, and in which it is assumed that every statement is either true or false and not both. However, there are other forms of propositional logic in which other truth-values are considered, or in which there is consideration of connectives that are used to produce statements whose truth-values depend not simply on the truth-values of the parts, but additional things such as their necessity, possibility or relatedness to one another.

Table of Contents

  1. Introduction
  2. History
  3. The Language of Propositional Logic
    1. Syntax and Formation Rules of PL
    2. Truth Functions and Truth Tables
    3. Definability of the Operators and the Languages PL' and PL''
  4. Tautologies, Logical Equivalence and Validity
  5. Deduction: Rules of Inference and Replacement
    1. Natural Deduction
    2. Rules of Inference
    3. Rules of Replacement
    4. Direct Deductions
    5. Conditional and Indirect Proofs
  6. Axiomatic Systems and the Propositional Calculus
  7. Meta-Theoretic Results for the Propositional Calculus
  8. Other Forms of Propositional Logic
  9. References and Further Reading

1. Introduction

A statement can be defined as a declarative sentence, or part of a sentence, that is capable of having a truth-value, such as being true or false. So, for example, the following are statements:

  • George W. Bush is the 43rd President of the United States.
  • Paris is the capital of France.
  • Everyone born on Monday has purple hair.

Sometimes, a statement can contain one or more other statements as parts. Consider for example, the following statement:

  • Either Ganymede is a moon of Jupiter or Ganymede is a moon of Saturn.

While the above compound sentence is itself a statement, because it is true, the two parts, "Ganymede is a moon of Jupiter" and "Ganymede is a moon of Saturn", are themselves statements, because the first is true and the second is false.

The term proposition is sometimes used synonymously with statement. However, it is sometimes used to name something abstract that two different statements with the same meaning are both said to "express". In this usage, the English sentence, "It is raining", and the French sentence "Il pleut", would be considered to express the same proposition; similarly, the two English sentences, "Callisto orbits Jupiter" and "Jupiter is orbited by Callisto" would also be considered to express the same proposition. However, the nature or existence of propositions as abstract meanings is still a matter of philosophical controversy, and for the purposes of this article, the terms "statement" and "proposition" are used interchangeably.

Propositional logic, also known as sentential logic, is that branch of logic that studies ways of combining or altering statements or propositions to form more complicated statements or propositions. Joining two simpler propositions with the word "and" is one common way of combining statements. When two statements are joined together with "and", the complex statement formed by them is true if and only if both the component statements are true. Because of this, an argument of the following form is logically valid:

Paris is the capital of France and Paris has a population of over two million.
Therefore, Paris has a population of over two million.
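In truth-table terms, the validity of this form ('P and Q; therefore Q') amounts to there being no assignment of truth-values on which the premise is true and the conclusion false, which a short brute-force check confirms (a sketch for illustration, not part of the article):

```python
from itertools import product

# Search for a counterexample row: premise 'P and Q' true, conclusion 'Q' false.
invalid_rows = [(p, q) for p, q in product([True, False], repeat=2)
                if (p and q) and not q]
print(invalid_rows)  # []: no counterexample row, so the form is valid
```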

Propositional logic largely involves studying logical connectives such as the words "and" and "or" and the rules determining the truth-values of the propositions they are used to join, as well as what these rules mean for the validity of arguments, and such logical relationships between statements as being consistent or inconsistent with one another, as well as logical properties of propositions, such as being tautologically true, being contingent, and being self-contradictory. (These notions are defined below.)

Propositional logic also studies ways of modifying statements, such as the addition of the word "not" that is used to change an affirmative statement into a negative statement. Here, the fundamental logical principle involved is that if a given affirmative statement is true, the negation of that statement is false, and if a given affirmative statement is false, the negation of that statement is true.

What is distinctive about propositional logic as opposed to other (typically more complicated) branches of logic is that propositional logic does not deal with logical relationships and properties that involve the parts of a statement smaller than the simple statements making it up. Therefore, propositional logic does not study those logical characteristics of the propositions below in virtue of which they constitute a valid argument:

  1. George W. Bush is a president of the United States.
  2. George W. Bush is a son of a president of the United States.
  3. Therefore, there is someone who is both a president of the United States and a son of a president of the United States.

The recognition that the above argument is valid requires one to recognize that the subject in the first premise is the same as the subject in the second premise. However, in propositional logic, simple statements are considered as indivisible wholes, and those logical relationships and properties that involve parts of statements such as their subjects and predicates are not taken into consideration.

Propositional logic can be thought of as primarily the study of logical operators. A logical operator is any word or phrase used either to modify one statement to make a different statement, or join multiple statements together to form a more complicated statement. In English, words such as "and", "or", "not", "if ... then...", "because", and "necessarily", are all operators.

A logical operator is said to be truth-functional if the truth-values (the truth or falsity, etc.) of the statements it is used to construct always depend entirely on the truth or falsity of the statements from which they are constructed. The English words "and", "or" and "not" are (at least arguably) truth-functional, because a compound statement joined together with the word "and" is true if both the statements so joined are true, and false if either or both are false, a compound statement joined together with the word "or" is true if at least one of the joined statements is true, and false if both joined statements are false, and the negation of a statement is true if and only if the statement negated is false.
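The truth-functional behavior just described can be tabulated directly. This sketch (an illustration using Python's Boolean operators to stand in for the connectives) prints the full truth table for "and", "or", and "not":

```python
from itertools import product

# Each row: A, B, (A and B), (A or B), (not A)
rows = [(a, b, a and b, a or b, not a)
        for a, b in product([True, False], repeat=2)]
for a, b, conj, disj, neg in rows:
    print(f"{a!s:5}  {b!s:5}  and: {conj!s:5}  or: {disj!s:5}  not-A: {neg}")
```

Since each column after the first two is computed solely from the truth-values in those two, the table itself exhibits what it means for these operators to be truth-functional.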

Some logical operators are not truth-functional. One example of an operator in English that is not truth-functional is the word "necessarily". Whether a statement formed using this operator is true or false does not depend entirely on the truth or falsity of the statement to which the operator is applied. For example, both of the following statements are true:

  • 2 + 2 = 4.
  • Someone is reading an article in a philosophy encyclopedia.

However, let us now consider the corresponding statements modified with the operator "necessarily":

  • Necessarily, 2 + 2 = 4.
  • Necessarily, someone is reading an article in a philosophy encyclopedia.

Here, the first example is true but the second example is false. Hence, the truth or falsity of a statement using the operator "necessarily" does not depend entirely on the truth or falsity of the statement modified.

Truth-functional propositional logic is that branch of propositional logic that limits itself to the study of truth-functional operators. Classical (or "bivalent") truth-functional propositional logic is that branch of truth-functional propositional logic that assumes that there are only two possible truth-values a statement (whether simple or complex) can have: (1) truth, and (2) falsity, and that every statement is either true or false but not both.

Classical truth-functional propositional logic is by far the most widely studied branch of propositional logic, and for this reason, most of the remainder of this article focuses exclusively on this area of logic. In addition to classical truth-functional propositional logic, there are other branches of propositional logic that study logical operators, such as "necessarily", that are not truth-functional. There are also "non-classical" propositional logics in which such possibilities as (i) a proposition's having a truth-value other than truth or falsity, (ii) a proposition's having an indeterminate truth-value or lacking a truth-value altogether, and sometimes even (iii) a proposition's being both true and false, are considered. (For more information on these alternative forms of propositional logic, consult Section VIII below.)

2. History

The serious study of logic as an independent discipline began with the work of Aristotle (384-322 BCE). Generally, however, Aristotle's sophisticated writings on logic dealt with the logic of categories and quantifiers such as "all", and "some", which are not treated in propositional logic. However, in his metaphysical writings, Aristotle espoused two principles of great importance in propositional logic, which have since come to be called the Law of Excluded Middle and the Law of Contradiction. Interpreted in propositional logic, the first is the principle that every statement is either true or false; the second is the principle that no statement is both true and false. These are, of course, cornerstones of classical propositional logic. There is some evidence that Aristotle, or at least his successor at the Lyceum, Theophrastus (d. 287 BCE), did recognize a need for the development of a doctrine of "complex" or "hypothetical" propositions, i.e., those involving conjunctions (statements joined by "and"), disjunctions (statements joined by "or") and conditionals (statements joined by "if... then..."), but their investigations into this branch of logic seem to have been very minor.

More serious attempts to study statement operators such as "and", "or" and "if... then..." were conducted by the Stoic philosophers in the late 3rd century BCE. Since most of their original works -- if indeed, many writings were even produced -- are lost, we cannot make many definite claims about exactly who first made investigations into what areas of propositional logic, but we do know from the writings of Sextus Empiricus that Diodorus Cronus and his pupil Philo had engaged in a protracted debate about whether the truth of a conditional statement depends entirely on it not being the case that its antecedent (if-clause) is true while its consequent (then-clause) is false, or whether it requires some sort of stronger connection between the antecedent and consequent -- a debate that continues to have relevance for modern discussion of conditionals. The Stoic philosopher Chrysippus (roughly 280-205 BCE) perhaps did the most in advancing Stoic propositional logic, by marking out a number of different ways of forming complex premises for arguments, and for each, listing valid inference schemata. Chrysippus suggested that the following inference schemata are to be considered the most basic:

  1. If the first, then the second; but the first; therefore the second.
  2. If the first, then the second; but not the second; therefore, not the first.
  3. Not both the first and the second; but the first; therefore, not the second.
  4. Either the first or the second [and not both]; but the first; therefore, not the second.
  5. Either the first or the second; but not the second; therefore the first.

Inference rules such as the above correspond very closely to the basic principles in a contemporary system of natural deduction for propositional logic. For example, the first two rules correspond to the rules of modus ponens and modus tollens, respectively. These basic inference schemata were expanded upon by less basic inference schemata by Chrysippus himself and other Stoics, and are preserved in the work of Diogenes Laertius, Sextus Empiricus and later, in the work of Cicero.

Advances on the work of the Stoics were undertaken in small steps in the centuries that followed. This work was done by, for example, the second century logician Galen (roughly 129-210 CE), the sixth century philosopher Boethius (roughly 480-525 CE) and later by medieval thinkers such as Peter Abelard (1079-1142) and William of Ockham (1288-1347), and others. Much of their work involved producing better formalizations of the principles of Aristotle or Chrysippus, introducing improved terminology and furthering the discussion of the relationships between operators. Abelard, for example, seems to have been the first to differentiate clearly exclusive from inclusive disjunction (discussed below), and to suggest that inclusive disjunction is the more important notion for the development of a relatively simple logic of disjunctions.

The next major step forward in the development of propositional logic came only much later with the advent of symbolic logic in the work of logicians such as Augustus DeMorgan (1806-1871) and, especially, George Boole (1815-1864) in the mid-19th century. Boole was primarily interested in developing a mathematical-style "algebra" to replace Aristotelian syllogistic logic, primarily by employing the numeral "1" for the universal class, the numeral "0" for the empty class, the multiplication notation "xy" for the intersection of classes x and y, the addition notation "x + y" for the union of classes x and y, etc., so that statements of syllogistic logic could be treated in quasi-mathematical fashion as equations; e.g., "No x is y" could be written as "xy = 0". However, Boole noticed that if an equation such as "x = 1" is read as "x is true", and "x = 0" is read as "x is false", the rules given for his logic of classes can be transformed into logic for propositions, with "x + y = 1" reinterpreted as saying that either x or y is true, and "xy = 1" reinterpreted as meaning that x and y are both true. Boole's work sparked rapid interest in logic among mathematicians and later, "Boolean algebras" were used to form the basis of the truth-functional propositional logics utilized in computer design and programming.

In the late 19th century, Gottlob Frege (1848-1925) presented logic as a branch of systematic inquiry more fundamental than mathematics or algebra, and presented the first modern axiomatic calculus for logic in his 1879 work Begriffsschrift. While it covered more than propositional logic, from Frege's axiomatization it is possible to distill the first complete axiomatization of classical truth-functional propositional logic. Frege was also the first to systematically argue that all truth-functional connectives could be defined in terms of negation and the material conditional.

In the early 20th century, Bertrand Russell gave a different complete axiomatization of propositional logic, considered on its own, in his 1906 paper "The Theory of Implication", and later, along with A. N. Whitehead, produced another axiomatization using disjunction and negation as primitives in the 1910 work Principia Mathematica. Proof of the possibility of defining all truth-functional operators in terms of a single binary operator was first published by American logician H. M. Sheffer in 1913, though C. S. Peirce (1839-1914) seems to have discovered this decades earlier. In 1917, French logician Jean Nicod discovered that an axiomatization for propositional logic using the Sheffer stroke involving only a single axiom schema and single inference rule was possible.
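Sheffer's result can be verified mechanically. The following Python sketch (all function names are ours) defines the Sheffer stroke ("not both") and checks exhaustively that negation, conjunction and disjunction can each be expressed in terms of it alone:

```python
# The Sheffer stroke ("not both"): false only when both inputs are true.
def stroke(p, q):
    return not (p and q)

# Candidate definitions of the familiar operators in terms of the stroke:
def neg(p):
    return stroke(p, p)                        # not-p  as  p | p

def conj(p, q):
    return stroke(stroke(p, q), stroke(p, q))  # p & q  as  (p|q) | (p|q)

def disj(p, q):
    return stroke(stroke(p, p), stroke(q, q))  # p v q  as  (p|p) | (q|q)

# Exhaustive check over all truth-value assignments:
for p in (True, False):
    assert neg(p) == (not p)
    for q in (True, False):
        assert conj(p, q) == (p and q)
        assert disj(p, q) == (p or q)
print("negation, conjunction and disjunction are all definable from the stroke")
```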

While the notion of a "truth table" often utilized in the discussion of truth-functional connectives, discussed below, seems to have been at least implicit in the work of Peirce, W. S. Jevons (1835-1882), Lewis Carroll (1832-1898), John Venn (1834-1923), and Allan Marquand (1853-1924), and truth tables appear explicitly in writings by Eugen Müller as early as 1909, their use gained rapid popularity in the early 1920s, perhaps due to the combined influence of the work of Emil Post, whose 1921 work makes liberal use of them, and Ludwig Wittgenstein's 1921 Tractatus Logico-Philosophicus, in which truth tables and truth-functionality are prominently featured.

Systematic inquiry into axiomatic systems for propositional logic and related metatheory was conducted in the 1920s, 1930s and 1940s by such thinkers as David Hilbert, Paul Bernays, Alfred Tarski, Jan Łukasiewicz, Kurt Gödel, Alonzo Church and others. It is during this period that most of the important metatheoretic results, such as those discussed in Section VII, were discovered.

Complete natural deduction systems for classical truth-functional propositional logic were developed and popularized in the work of Gerhard Gentzen in the mid-1930s, and subsequently introduced into influential textbooks such as that of F. B. Fitch (1952) and Irving Copi (1953).

Modal propositional logics are the most widely studied form of non-truth-functional propositional logic. While interest in modal logic dates back to Aristotle, by contemporary standards, the first systematic inquiry into this modal propositional logic can be found in the work of C. I. Lewis in 1912 and 1913. Among other well-known forms of non-truth-functional propositional logic, deontic logic began with the work of Ernst Mally in 1926, and epistemic logic was first treated systematically by Jaakko Hintikka in the early 1960s. The modern study of three-valued propositional logic began in the work of Jan Łukasiewicz in 1917, and other forms of non-classical propositional logic soon followed suit. Relevance propositional logic is relatively more recent; dating from the mid-1970s in the work of A. R. Anderson and N. D. Belnap. Paraconsistent logic, while having its roots in the work of Łukasiewicz and others, has blossomed into an independent area of research only recently, mainly due to work undertaken by N. C. A. da Costa, Graham Priest and others in the 1970s and 1980s.

3. The Language of Propositional Logic

The basic rules and principles of classical truth-functional propositional logic are, among contemporary logicians, almost entirely agreed upon, and capable of being stated in a definitive way. This is most easily done if we utilize a simplified logical language that deals only with simple statements considered as indivisible units as well as complex statements joined together by means of truth-functional connectives. We first consider a language called PL for "Propositional Logic". Later we shall consider two even simpler languages, PL' and PL''.

a. Syntax and Formation Rules of PL

In any ordinary language, a statement would never consist of a single word, but would always at the very least consist of a noun or pronoun along with a verb. However, because propositional logic does not consider smaller parts of statements, and treats simple statements as indivisible wholes, the language PL uses uppercase letters 'A', 'B', 'C', etc., in place of complete statements. The logical signs '&', 'v', '→', '↔', and '¬' are used in place of the truth-functional operators, "and", "or", "if... then...", "if and only if", and "not", respectively. So, consider again the following example argument, mentioned in Section I.

Paris is the capital of France and Paris has a population of over two million.
Therefore, Paris has a population of over two million.

If we use the letter 'C' as our translation of the statement "Paris is the capital of France" in PL, and the letter 'P' as our translation of the statement "Paris has a population of over two million", and use a horizontal line to separate the premise(s) of an argument from the conclusion, the above argument could be symbolized in language PL as follows:

C & P
P

In addition to statement letters like 'C' and 'P' and the operators, the only other signs that sometimes appear in the language PL are parentheses which are used in forming even more complex statements. Consider the English compound sentence, "Paris is the most important city in France if and only if Paris is the capital of France and Paris has a population of over two million." If we use the letter 'I' in language PL to mean that Paris is the most important city in France, this sentence would be translated into PL as follows:

I ↔ (C & P)

The parentheses are used to group together the statements 'C' and 'P' and differentiate the above statement from the one that would be written as follows:

(I ↔ C) & P

This latter statement asserts that Paris is the most important city in France if and only if it is the capital of France, and (separate from this), Paris has a population of over two million. The difference between the two is subtle, but important logically.

It is important to describe the syntax and make-up of statements in the language PL in a precise manner, and give some definitions that will be used later on. Before doing this, it is worthwhile to make a distinction between the language in which we will be discussing PL, namely, English, from PL itself. Whenever one language is used to discuss another, the language in which the discussion takes place is called the metalanguage, and language under discussion is called the object language. In this context, the object language is the language PL, and the metalanguage is English, or to be more precise, English supplemented with certain special devices that are used to talk about language PL. It is possible in English to talk about words and sentences in other languages, and when we do, we place the words or sentences we wish to talk about in quotation marks. Therefore, using ordinary English, I can say that "parler" is a French verb, and "I & C" is a statement of PL. The following expression is part of PL, not English:

(I ↔ C) & P

However, the following expression is a part of English; in particular, it is the English name of a PL sentence:

"(I ↔ C) & P"

This point may seem rather trivial, but it is easy to become confused if one is not careful.

In our metalanguage, we shall also be using certain variables that are used to stand for arbitrary expressions built from the basic symbols of PL. In what follows, the Greek letters 'α', 'β', and so on, are used for any object language (PL) expression of a certain designated form. For example, later on, we shall say that, if α is a statement of PL, then so is '¬α'. Notice that 'α' itself is not a symbol that appears in PL; it is a symbol used in English to speak about symbols of PL. We will also be making use of so-called "Quine corners", written '⌜' and '⌝', which are a special metalinguistic device used to speak about object language expressions constructed in a certain way. Suppose α is the statement "(I ↔ C)" and β is the statement "(P & C)"; then ⌜α v β⌝ is the complex statement "(I ↔ C) v (P & C)".

Let us now proceed to giving certain definitions used in the metalanguage when speaking of the language PL.

Definition: A statement letter of PL is defined as any uppercase letter written with or without a numerical subscript.

Note: According to this definition, 'A', 'B', 'B2', 'C3', and 'P14' are examples of statement letters. The numerical subscripts are used just in case we need to deal with more than 26 simple statements: in that case, we can use 'P1' to mean something different than 'P2', and so forth.

Definition: A connective or operator of PL is any of the signs '¬', '&', 'v', '→', and '↔'.

Definition: A well-formed formula (hereafter abbreviated as wff) of PL is defined recursively as follows:

  1. Any statement letter is a well-formed formula.
  2. If α is a well-formed formula, then so is '¬α'.
  3. If α and β are well-formed formulas, then so is '(α & β)'.
  4. If α and β are well-formed formulas, then so is '(α v β)'.
  5. If α and β are well-formed formulas, then so is '(α → β)'.
  6. If α and β are well-formed formulas, then so is '(α ↔ β)'.
  7. Nothing that cannot be constructed by successive steps of (1)-(6) is a well-formed formula.

Note: According to part (1) of this definition, the statement letters 'C', 'P' and 'M' are wffs. Because 'C' and 'P' are wffs, by part (3), "(C & P)" is a wff. Because it is a wff, and 'M' is also a wff, by part (6), "(M ↔ (C & P))" is a wff. It is conventional to regard the outermost parentheses on a wff as optional, so that "M ↔ (C & P)" is treated as an abbreviated form of "(M ↔ (C & P))". However, whenever a shorter wff is used in constructing a more complicated wff, the parentheses on the shorter wff are necessary.

The notion of a well-formed formula should be understood as corresponding to the notion of a grammatically correct or properly constructed statement of language PL. This definition tells us, for example, that "¬(Q v ¬R)" is grammatical for PL because it is a well-formed formula, whereas the string of symbols, ")¬Q¬v(↔P&", while consisting entirely of symbols used in PL, is not grammatical because it is not well-formed.
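The recursive definition above lends itself to a mechanical check of well-formedness. Here is a minimal Python sketch (the function names are ours; spaces are ignored, and the definition is followed strictly, so the outermost parentheses of a binary compound are required rather than optional):

```python
# A strict well-formedness checker for PL, following the recursive
# definition of a wff given above. Statement letters are uppercase
# letters with an optional numerical subscript (clause 1).

def is_wff(s):
    s = s.replace(" ", "")
    ok, rest = _parse(s)
    return ok and rest == ""

def _parse(s):
    """Try to read one wff from the front of s; return (success, remainder)."""
    if not s:
        return False, s
    if s[0].isupper():                       # clause 1: a statement letter
        i = 1
        while i < len(s) and s[i].isdigit():
            i += 1
        return True, s[i:]
    if s[0] == "¬":                          # clause 2: ¬α
        return _parse(s[1:])
    if s[0] == "(":                          # clauses 3-6: (α op β)
        ok, rest = _parse(s[1:])
        if not ok or not rest or rest[0] not in "&v→↔":
            return False, s
        ok, rest = _parse(rest[1:])
        if not ok or not rest or rest[0] != ")":
            return False, s
        return True, rest[1:]
    return False, s                          # clause 7: nothing else qualifies

print(is_wff("(M ↔ (C & P))"))   # True
print(is_wff(")¬Q¬v(↔P&"))       # False
```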

b. Truth Functions and Truth Tables

So far we have in effect described the grammar of language PL. When setting up a language fully, however, it is necessary not only to establish rules of grammar, but also describe the meanings of the symbols used in the language. We have already suggested that uppercase letters are used as complete simple statements. Because truth-functional propositional logic does not analyze the parts of simple statements, and only considers those ways of combining them to form more complicated statements that make the truth or falsity of the whole dependent entirely on the truth or falsity of the parts, in effect, it does not matter what meaning we assign to the individual statement letters like 'P', 'Q' and 'R', etc., provided that each is taken as either true or false (and not both).

However, more must be said about the meaning or semantics of the logical operators '&', 'v', '→', '↔', and '¬'. As mentioned above, these are used in place of the English words, 'and', 'or', 'if... then...', 'if and only if', and 'not', respectively. However, the correspondence is really only rough, because the operators of PL are considered to be entirely truth-functional, whereas their English counterparts are not always used truth-functionally. Consider, for example, the following statements:

  1. If Bob Dole is president of the United States in 2004, then the president of the United States in 2004 is a member of the Republican party.
  2. If Al Gore is president of the United States in 2004, then the president of the United States in 2004 is a member of the Republican party.

For those familiar with American politics, it is tempting to regard the English sentence (1) as true, but to regard (2) as false, since Dole is a Republican but Gore is not. But notice that in both cases, the simple statement in the "if" part of the "if... then..." statement is false, and the simple statement in the "then" part of the statement is true. This shows that the English operator "if... then..." is not fully truth-functional. However, all the operators of language PL are entirely truth-functional, so the sign '→', though similar in many ways to the English "if... then..." is not in all ways the same. More is said about this operator below.

Since our study is limited to the ways in which the truth-values of complex statements depend on the truth-values of the parts, for each operator, the only aspect of its meaning relevant in this context is its associated truth-function. The truth-function for an operator can be represented as a table, each line of which expresses a possible combination of truth-values for the simpler statements to which the operator applies, along with the resulting truth-value for the complex statement formed using the operator.

The signs '&', 'v', '→', '↔', and '¬', correspond, respectively, to the truth-functions of conjunction, disjunction, material implication, material equivalence, and negation. We shall consider these individually.

Conjunction: The conjunction of two statements α and β, written in PL as '(α & β)', is true if both α and β are true, and is false if either α is false or β is false or both are false. In effect, the meaning of the operator '&' can be displayed according to the following chart, which shows the truth-value of the conjunction depending on the four possibilities of the truth-values of the parts:

α  β  |  (α & β)
T  T  |     T
T  F  |     F
F  T  |     F
F  F  |     F

Conjunction using the operator '&' is language PL's rough equivalent of joining statements together with 'and' in English. In a statement of the form '(α & β)', the two statements joined together, α and β, are called the conjuncts, and the whole statement is called a conjunction.

Instead of the sign '&', some other logical works use the signs '∧' or '•' for conjunction.

Disjunction: The disjunction of two statements α and β, written in PL as '(α v β)', is true if either α is true or β is true, or both α and β are true, and is false only if both α and β are false. A chart similar to that given above for conjunction, modified to show the meaning of the disjunction sign 'v' instead, would be drawn as follows:

α  β  |  (α v β)
T  T  |     T
T  F  |     T
F  T  |     T
F  F  |     F

This is language PL's rough equivalent of joining statements together with the word 'or' in English. However, it should be noted that the sign 'v' is used for disjunction in the inclusive sense. Sometimes when the word 'or' is used to join together two English statements, we only regard the whole as true if one side or the other is true, but not both, as when the statement "Either we can buy the toy robot, or we can buy the toy truck; you must choose!" is spoken by a parent to a child who wants both toys. This is called the exclusive sense of 'or'. However, in PL, the sign 'v' is used inclusively, and is more analogous to the English word 'or' as it appears in a statement such as (e.g., said about someone who has just received a perfect score on the SAT), "either she studied hard, or she is extremely bright", which does not mean to rule out the possibility that she both studied hard and is bright. In a statement of the form '(α v β)', the two statements joined together, α and β, are called the disjuncts, and the whole statement is called a disjunction.

Material Implication: This truth-function is represented in language PL with the sign '→'. A statement of the form '(α → β)', is false if α is true and β is false, and is true if either α is false or β is true (or both). This truth-function generates the following chart:

α  β  |  (α → β)
T  T  |     T
T  F  |     F
F  T  |     T
F  F  |     T

Because the truth of a statement of the form '(α → β)' rules out the possibility of α being true and β being false, there is some similarity between the operator '→' and the English phrase, "if... then...", which is also used to rule out the possibility of one statement being true and another false; however, '→' is used entirely truth-functionally, and so, for reasons discussed earlier, it is not entirely analogous with "if... then..." in English. If α is false, then '(α → β)' is regarded as true, whether or not there is any connection between the falsity of α and the truth-value of β. In a statement of the form, '(α → β)', we call α the antecedent, and we call β the consequent, and the whole statement '(α → β)' is sometimes also called a (material) conditional.

The sign '⊃' is sometimes used instead of '→' for material implication.

Material Equivalence: This truth-function is represented in language PL with the sign '↔'. A statement of the form '(α ↔ β)' is regarded as true if α and β are either both true or both false, and is regarded as false if they have different truth-values. Hence, we have the following chart:

α  β  |  (α ↔ β)
T  T  |     T
T  F  |     F
F  T  |     F
F  F  |     T

Since the truth of a statement of the form '(α ↔ β)' requires α and β to have the same truth-value, this operator is often likened to the English phrase "...if and only if...". Again, however, they are not in all ways alike, because '↔' is used entirely truth-functionally. Regardless of what α and β are, and what relation (if any) they have to one another, if both are false, '(α ↔ β)' is considered to be true. However, we would not normally regard the statement "Al Gore is the President of the United States in 2004 if and only if Bob Dole is the President of the United States in 2004" as true simply because both simpler statements happen to be false. A statement of the form '(α ↔ β)' is also sometimes referred to as a (material) biconditional.

The sign '≡' is sometimes used instead of '↔' for material equivalence.

Negation: The negation of statement α, simply written '¬α' in language PL, is regarded as true if α is false, and false if α is true. Unlike the other operators we have considered, negation is applied to a single statement. The corresponding chart can therefore be drawn more simply as follows:

α  |  ¬α
T  |  F
F  |  T

The negation sign '¬' bears obvious similarities to the word 'not' used in English, as well as similar phrases used to change a statement from affirmative to negative or vice-versa. In logical languages, the signs '~' or '–' are sometimes used in place of '¬'.

The five charts together provide the rules needed to determine the truth-value of a given wff in language PL when given the truth-values of the independent statement letters making it up. These rules are very easy to apply in the case of a very simple wff such as "(P & Q)". Suppose that 'P' is true, and 'Q' is false; according to the second row of the chart given for the operator, '&', we can see that this statement is false.

However, the charts also provide the rules necessary for determining the truth-value of more complicated statements. We have just seen that "(P & Q)" is false if 'P' is true and 'Q' is false. Consider a more complicated statement that contains this statement as a part, e.g., "((P & Q) → ¬R)", and suppose once again that 'P' is true, and 'Q' is false, and further suppose that 'R' is also false. To determine the truth-value of this complicated statement, we begin by determining the truth-value of the internal parts. The statement "(P & Q)", as we have seen, is false. The other substatement, "¬R", is true, because 'R' is false, and '¬' reverses the truth-value of that to which it is applied. Now we can determine the truth-value of the whole wff, "((P & Q) → ¬R)", by consulting the chart given above for '→'. Here, the wff "(P & Q)" is our α, and "¬R" is our β, and since their truth-values are F and T, respectively, we consult the third row of the chart, and we see that the complex statement "((P & Q) → ¬R)" is true.
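The step-by-step evaluation just described can be mirrored in a short Python sketch (the helper name `implies` is ours, standing in for the truth-function of '→'):

```python
# Evaluating "((P & Q) → ¬R)" from the inside out, under the truth-value
# assignment discussed above: 'P' true, 'Q' false, 'R' false.

def implies(a, b):
    # Material implication as a truth-function: false only when the
    # antecedent is true and the consequent is false.
    return (not a) or b

P, Q, R = True, False, False

inner = P and Q                 # (P & Q) is false, by the chart for '&'
not_r = not R                   # ¬R is true, since 'R' is false
whole = implies(inner, not_r)   # third row of the chart for '→': true

print(inner, not_r, whole)      # False True True
```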

We have so far been considering the case in which 'P' is true and 'Q' and 'R' are both false. There are, however, a number of other possibilities with regard to the possible truth-values of the statement letters, 'P', 'Q' and 'R'. There are eight possibilities altogether, as shown by the following list:

P  Q  R
T  T  T
T  T  F
T  F  T
T  F  F
F  T  T
F  T  F
F  F  T
F  F  F

Strictly speaking, each of the eight possibilities above represents a different truth-value assignment, which can be defined as a possible assignment of truth-values T or F to the different statement letters making up a wff or series of wffs. If a wff has n distinct statement letters making it up, the number of possible truth-value assignments is 2^n. With the wff, "((P & Q) → ¬R)", there are three statement letters, 'P', 'Q' and 'R', and so there are 2^3 = 8 truth-value assignments.

It then becomes possible to draw a chart showing how the truth-value of a given wff would be resolved for each possible truth-value assignment. We begin with a chart showing all the possible truth-value assignments for the wff, such as the one given above. Next, we write out the wff itself on the top right of our chart, with spaces between the signs. Then, for each truth-value assignment, we repeat the appropriate truth-value, 'T', or 'F', underneath the statement letters as they appear in the wff. Then, as the truth-values of those wffs that are parts of the complete wff are determined, we write their truth-values underneath the logical sign that is used to form them. The final column filled in shows the truth-value of the entire statement for each truth-value assignment. Given the importance of this column, it is usually highlighted in some way; here, it is the column underneath the main operator of the wff.

P  Q  R  |  ((P  &  Q)  →  ¬  R)
T  T  T  |    T  T  T   F  F  T
T  T  F  |    T  T  T   T  T  F
T  F  T  |    T  F  F   T  F  T
T  F  F  |    T  F  F   T  T  F
F  T  T  |    F  F  T   T  F  T
F  T  F  |    F  F  T   T  T  F
F  F  T  |    F  F  F   T  F  T
F  F  F  |    F  F  F   T  T  F

Charts such as the one given above are called truth tables. In classical truth-functional propositional logic, a truth table constructed for a given wff in effect reveals everything logically important about that wff. The above chart tells us that the wff "((P & Q) → ¬R)" can only be false if 'P', 'Q' and 'R' are all true, and is true otherwise.
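The same truth table can be generated mechanically. The following Python sketch (names are ours) enumerates all 2^3 = 8 truth-value assignments and confirms the observation just made:

```python
# Enumerating all 2^3 = 8 truth-value assignments for 'P', 'Q' and 'R'
# and evaluating "((P & Q) → ¬R)" on each, as in the truth table above.
from itertools import product

def implies(a, b):
    return (not a) or b        # the truth-function for '→'

table = []
for P, Q, R in product((True, False), repeat=3):
    table.append(((P, Q, R), implies(P and Q, not R)))

# The wff is false on exactly one assignment: P, Q and R all true.
false_rows = [row for row, value in table if not value]
print(false_rows)              # [(True, True, True)]
```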

c. Definability of the Operators and the Languages PL' and PL''

The language PL, as we have seen, contains operators that are roughly analogous to the English operators 'and', 'or', 'if... then...', 'if and only if', and 'not'. Each of these, as we have also seen, can be thought of as representing a certain truth-function. It might be objected, however, that there are other methods of combining statements together in which the truth-value of the statement depends wholly on the truth-values of the parts, or in other words, that there are truth-functions besides conjunction, (inclusive) disjunction, material implication, material equivalence and negation. For example, we noted earlier that the sign 'v' is used analogously to 'or' in the inclusive sense, which means that language PL has no simple sign for 'or' in the exclusive sense. It might be thought, however, that the language PL is incomplete without the addition of an additional symbol, say '⊻', such that '(α ⊻ β)' would be regarded as true if α is true and β is false, or α is false and β is true, but would be regarded as false if either both α and β are true or both α and β are false.

However, a possible response to this objection would be to note that while language PL does not include a simple sign for this exclusive sense of disjunction, it is possible, using the symbols that are included in PL, to construct a statement that is true in exactly the same circumstances. Consider, e.g., a statement of the form '¬(α ↔ β)'. It is easily shown, using a truth table, that any wff of this form would have the same truth-value as a would-be statement using the operator '⊻'. See the following chart:

α  β  |  ¬  (α  ↔  β)
T  T  |  F   T  T  T
T  F  |  T   T  F  F
F  T  |  T   F  F  T
F  F  |  F   F  T  F

Here we see that a wff of the form '¬(α ↔ β)' is true if either α or β is true but not both. This shows that PL is not lacking in any way by not containing a sign '⊻'. All the work that one would wish to do with this sign can be done using the signs '↔' and '¬'. Indeed, one might claim that the sign '⊻' can be defined in terms of the signs '↔' and '¬', and then use the form '(α ⊻ β)' as an abbreviation of a wff of the form '¬(α ↔ β)', without actually expanding the primitive vocabulary of language PL.
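This definability claim can also be checked exhaustively in a short Python sketch (the function names are ours):

```python
# Exhaustive check that '¬(α ↔ β)' behaves as exclusive disjunction:
# true when exactly one of the two statements is true.

def iff(a, b):
    return a == b              # material equivalence as a truth-function

def xor_defined(a, b):
    return not iff(a, b)       # the form ¬(α ↔ β)

for a in (True, False):
    for b in (True, False):
        assert xor_defined(a, b) == (a != b)
print("exclusive disjunction is definable from '↔' and '¬'")
```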

The signs '&', 'v', '→', '↔' and '¬' were chosen as the operators to include in PL because they correspond (roughly) to the sorts of truth-functional operators that are most often used in ordinary discourse and reasoning. However, given the preceding discussion, it is natural to ask whether or not some operators on this list can be defined in terms of the others. It turns out that they can. In fact, if for some reason we wished our logical language to have a more limited vocabulary, it is possible to get by using only the signs '¬' and '→', and to define all other possible truth-functions in terms of them. Consider, e.g., the following truth table for statements of the form '¬(α → ¬β)':

α  β  |  ¬  (α  →  ¬  β)
T  T  |  T   T  F  F  T
T  F  |  F   T  T  T  F
F  T  |  F   F  T  F  T
F  F  |  F   F  T  T  F

We can see from the above that a wff of the form '¬(α → ¬β)' always has the same truth-value as the corresponding statement of the form '(α & β)'. This shows that the sign '&' can in effect be defined using the signs '¬' and '→'.
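The agreement between '¬(α → ¬β)' and '(α & β)' can likewise be checked exhaustively in Python (the function names are ours):

```python
# Exhaustive check that '¬(α → ¬β)' agrees with '(α & β)' on all four
# truth-value assignments, so '&' is definable from '¬' and '→'.

def implies(a, b):
    return (not a) or b        # the truth-function for '→'

def conj_defined(a, b):
    return not implies(a, not b)   # the form ¬(α → ¬β)

for a in (True, False):
    for b in (True, False):
        assert conj_defined(a, b) == (a and b)
print("'&' is definable from '¬' and '→'")
```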

Next, consider the truth table for statements of the form '(¬α → β)':

α  β  |  (¬  α  →  β)
T
T
F