According to the deductive-theoretic conception of logical consequence, a sentence X is a logical consequence of a set K of sentences if and only if X is a deductive consequence of K, that is, X is deducible or provable from K. Deductive consequence is clarified in terms of the notion of proof in a correct deductive system. Since, arguably, logical consequence conceived deductive-theoretically is not a compact relation while deducibility in a deductive system is, there are languages for which deductive consequence cannot be defined in terms of deducibility in a correct deductive system. However, it remains true that if a sentence is deducible in a correct deductive system from other sentences, then the sentence is a deductive consequence of them. A deductive system is correct only if its rules of inference correspond to intuitively valid principles of inference. So whether or not a natural deductive system is correct brings into play rival theories of valid principles of inference, such as classical, relevance, intuitionistic, and free logics.
According to the deductive-theoretic conception of logical consequence, a sentence X is a logical consequence of a set K of sentences if and only if X is a deductive consequence of K, that is, X is deducible from K. X is deducible from K just in case there is an actual or possible deduction of X from K. In such a case, we say that X may be correctly inferred from K or that it would be correct to conclude X from K. A deduction is associated with a pair <K, X>; the set K of sentences is the basis of the deduction, and X is the conclusion. A deduction from K to X is a finite sequence S of sentences ending with X such that each sentence in S (that is, each intermediate conclusion) is derived from one or more sentences in K or from previous sentences in S in accordance with a correct principle of inference. The notion of a deduction is clarified by appealing to a deductive system. A deductive system D is a collection of rules that govern which sequences of sentences, associated with a given pair <K, X>, are allowed and which are not. Such a sequence is called a proof in D (or, equivalently, a deduction in D) of X from K. The rules must be such that whether or not a given sequence associated with <K, X> qualifies as a proof in D of X from K is decidable purely by inspection and calculation. That is, the rules provide a purely mechanical procedure for deciding whether a given object is a proof in D of X from K. We write
K ⊢_{D} X
to mean
X is deducible in deductive system D from K.
See the entry Logical Consequence, Philosophical Considerations for discussion of the interplay among the concepts of logical consequence, deductive consequence, and deductive systems. We say that a deductive system D is correct when for any K and X, proofs in D of X from K correspond to intuitively valid deductions. For a given language the deductive consequence relation is defined in terms of a correct deductive system D only if it is true that
X is a deductive consequence of K if and only if X is deducible in D from K.
Sundholm (1983) offers a thorough survey of three main types of deductive systems. In this article, a natural deductive system is presented that originates in the work of the mathematician Gerhard Gentzen (1934) and the logician Frederic Fitch (1952). We will refer to this deductive system as N (for 'natural deduction'). For an in-depth introductory presentation of a natural deductive system very similar to N, see Barwise and Etchemendy (2001). N is a collection of inference rules. A proof of X from K that appeals exclusively to the inference rules of N is a formal deduction or formal proof. We shall take a formal proof to be associated with a pair <K, X>, where K is a set of sentences from a first-order language M, which will be introduced below, and X is an M-sentence. The set K of sentences is the basis of the deduction, and X is the conclusion. We say that a formal deduction from K to X is a finite sequence S of sentences ending with X such that each sentence in S is either an assumption, deduced from one or more sentences in K, or deduced from previous sentences in S in accordance with one of N's inference rules.
Formal proofs are not only epistemologically significant for securing knowledge; the derivations making up formal proofs may also serve as models of the informal deductive reasoning performed using sentences from language M. Indeed, a primary value of a formal proof is that it can serve as a model of ordinary deductive reasoning that explains the force of such reasoning by representing the principles of inference required to get to X from K.
Gentzen, one of the first logicians to present a natural deductive system, makes clear that a primary motive for the construction of his system is to reflect as accurately as possible the actual logical reasoning involved in mathematical proofs. He writes,
My starting point was this: The formalization of logical deduction especially as it has been developed by Frege, Russell, and Hilbert, is rather far removed from the forms of deduction used in practice in mathematical proofs...In contrast, I intended first to set up a formal system which comes as close as possible to actual reasoning. The result was a 'calculus of natural deduction'. (Gentzen 1934, p. 68)
Natural deductive systems are distinguished from other deductive systems by their usefulness in modeling ordinary, informal deductive inferential practices. Paraphrasing Gentzen, we may say that if one is interested in seeing logical connections between sentences in the most natural way possible, then a natural deductive system is a good choice for defining the deductive consequence relation.
The remainder of the article proceeds as follows. First, an interpreted language M is given. Next, we present the deductive system N and define the deductive consequence relation in M in terms of N. After discussing the philosophical significance of the deductive consequence relation so defined, we consider some standard criticisms of the correctness of deductive system N.
Here we define a simple language M, a language about the McKeon family, by first sketching what strings qualify as well-formed formulas (wffs) in M. Next we define sentences from formulas, and then give an account of truth in M, that is, we describe the conditions in which M-sentences are true.
Building blocks of formulas
Terms
Individual names—'beth', 'kelly', 'matt', 'paige', 'shannon', 'evan', and 'w_{1}', 'w_{2}', 'w_{3}', etc.
Variables—'x', 'y', 'z', 'x_{1}', 'y_{1}', 'z_{1}', 'x_{2}', 'y_{2}', 'z_{2}', etc.
Predicates
1-place predicates—'Female', 'Male'
2-place predicates—'Parent', 'Brother', 'Sister', 'Married', 'OlderThan', 'Admires', '='.
Blueprints of wellformed formulas (wffs)
Atomic formulas: An atomic wff is any of the above n-place predicates followed by n terms which are enclosed in parentheses and separated by commas.
Formulas: The general notion of a wellformed formula (wff) is defined recursively as follows:
(1) All atomic wffs are wffs.
(2) If α is a wff, so is ~α.
(3) If α and β are wffs, so is (α & β).
(4) If α and β are wffs, so is (α v β).
(5) If α and β are wffs, so is (α → β).
(6) If Ψ is a wff and v is a variable, then ∃vΨ is a wff.
(7) If Ψ is a wff and v is a variable, then ∀vΨ is a wff.
Finally, no string of symbols is a well-formed formula of M unless the string can be derived from (1)-(7).
The signs '~', '&', 'v', and '→', are called sentential connectives. The signs '∀' and '∃' are called quantifiers.
It will prove convenient to have available in M an infinite number of individual names as well as variables. The strings 'Parent(beth, paige)' and 'Male(x)' are examples of atomic wffs. We allow the identity symbol in an atomic formula to occur between two terms, e.g., instead of '=(evan, evan)' we allow '(evan = evan)'. The symbols '~', '&', 'v', and '→' correspond to the English words 'not', 'and', 'or', and 'if...then', respectively. '∃' is our symbol for the existential quantifier and '∀' represents the universal quantifier. ∃vΨ and ∀vΨ correspond to 'for some v, Ψ' and 'for all v, Ψ', respectively. For every quantifier, its scope is the smallest part of the wff in which it is contained that is itself a wff. An occurrence of a variable v is a bound occurrence iff it is in the scope of some quantifier of the form ∃v or the form ∀v, and is free otherwise. For example, the occurrence of 'x' is free in 'Male(x)' and in '∃y Married(y, x)'. The occurrences of 'y' in the second formula are bound because they are in the scope of the existential quantifier. A wff with at least one free variable is an open wff, and a closed wff is one with no free variables. A sentence is a closed wff. For example, 'Female(kelly)' and '∃y∃x Married(y, x)' are sentences but 'OlderThan(kelly, y)' and '(∃x Male(x) & Female(z))' are not. So, not all of the wffs of M are sentences. As noted below, this will affect our definition of truth for M.
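The free/bound distinction is purely syntactic, so it can be computed mechanically. Here is a small Python sketch of our own (the tuple encoding of wffs and the name `free_vars` are illustrative assumptions, not part of M) that returns the free variables of a formula:

```python
VARIABLES = {'x', 'y', 'z', 'x1', 'y1', 'z1'}   # (a few of) the variables of M

def free_vars(f):
    """Return the set of variables occurring free in formula f."""
    op = f[0]
    if op == 'atom':                       # e.g. ('atom', 'Male', ('x',))
        return {t for t in f[2] if t in VARIABLES}
    if op == '~':
        return free_vars(f[1])
    if op in ('&', 'v', '->'):
        return free_vars(f[1]) | free_vars(f[2])
    if op in ('exists', 'forall'):         # e.g. ('exists', 'y', body)
        return free_vars(f[2]) - {f[1]}    # v is bound inside ∃v / ∀v
    raise ValueError(f"unknown operator: {op}")

# 'Male(x)': x occurs free, so the wff is open
assert free_vars(('atom', 'Male', ('x',))) == {'x'}
# '∃y Married(y, x)': y is bound by the quantifier, x remains free
assert free_vars(('exists', 'y', ('atom', 'Married', ('y', 'x')))) == {'x'}
# '∃y∃x Married(y, x)' has no free variables, so it is a sentence
s = ('exists', 'y', ('exists', 'x', ('atom', 'Married', ('y', 'x'))))
assert free_vars(s) == set()
```

A formula is a sentence exactly when `free_vars` returns the empty set, matching the definition above.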
We now provide a semantics for M. This is done in two steps. First, we specify a domain of discourse, that is, the chunk of the world that our language M is about, and interpret M's predicates and names in terms of the elements composing the domain. Then we state the conditions under which each type of M-sentence is true. To each of the above syntactic rules (1)-(7) there corresponds a semantic rule that stipulates the conditions in which a sentence constructed using the syntactic rule is true. The principle of bivalence is assumed, and so 'not true' and 'false' are used interchangeably. In effect, the interpretation of M determines a truth-value (true, false) for each and every sentence of M.
Domain D—The McKeons: Matt, Beth, Shannon, Kelly, Paige, and Evan.
Here are the referents and extensions of the names and predicates of M.
Terms: 'matt' refers to Matt, 'beth' refers to Beth, 'shannon' refers to Shannon, etc.
Predicates. The meaning of a predicate is identified with its extension, that is, the set (possibly empty) of elements from the domain D that the predicate is true of. The extension of a one-place predicate is a set of elements from D; the extension of a two-place predicate is a set of ordered pairs of elements from D.
The extension of 'Male' is {Matt, Evan}.
The extension of 'Female' is {Beth, Shannon, Kelly, Paige}.
The extension of 'Parent' is {<Matt, Shannon>, <Matt, Kelly>, <Matt, Paige>, <Matt, Evan>, <Beth, Shannon>, <Beth, Kelly>, <Beth, Paige>, <Beth, Evan>}.
The extension of 'Married' is {<Matt, Beth>, <Beth, Matt>}.
The extension of 'Sister' is {<Shannon, Kelly>, <Kelly, Shannon>, <Shannon, Paige>, <Paige, Shannon>, <Kelly, Paige>, <Paige, Kelly>, <Kelly, Evan>, <Paige, Evan>, <Shannon, Evan>}.
The extension of 'Brother' is {<Evan, Shannon>, <Evan, Kelly>, <Evan, Paige>}.
The extension of 'OlderThan' is {<Beth, Matt>, <Beth, Shannon>, <Beth, Kelly>, <Beth, Paige>, <Beth, Evan>, <Matt, Shannon>, <Matt, Kelly>, <Matt, Paige>, <Matt, Evan>, <Shannon, Kelly>, <Shannon, Paige>, <Shannon, Evan>, <Kelly, Paige>, <Kelly, Evan>, <Paige, Evan>}.
The extension of 'Admires' is {<Matt, Beth>, <Shannon, Matt>, <Shannon, Beth>, <Kelly, Beth>, <Kelly, Matt>, <Kelly, Shannon>, <Paige, Beth>, <Paige, Matt>, <Paige, Shannon>, <Paige, Kelly>, <Evan, Beth>, <Evan, Matt>, <Evan, Shannon>, <Evan, Kelly>, <Evan, Paige>}.
The extension of '=' is {<Matt, Matt>, <Beth, Beth>, <Shannon, Shannon>, <Kelly, Kelly>, <Paige, Paige>, <Evan, Evan>}.
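The interpretation just listed can be recorded as plain data. The following Python sketch (the names `DOMAIN`, `REF`, and `EXT`, and the dictionary layout, are our assumptions; only some of the predicates are transcribed, for brevity) encodes the referents and extensions above:

```python
DOMAIN = {'Matt', 'Beth', 'Shannon', 'Kelly', 'Paige', 'Evan'}

# Referents of the individual names of M
REF = {'matt': 'Matt', 'beth': 'Beth', 'shannon': 'Shannon',
       'kelly': 'Kelly', 'paige': 'Paige', 'evan': 'Evan'}

# Extensions: sets of tuples of domain elements the predicate is true of
EXT = {
    'Male':    {('Matt',), ('Evan',)},
    'Female':  {('Beth',), ('Shannon',), ('Kelly',), ('Paige',)},
    'Married': {('Matt', 'Beth'), ('Beth', 'Matt')},
    '=':       {(d, d) for d in DOMAIN},
}

# The extension records exactly what the predicate is true of:
assert ('Matt', 'Beth') in EXT['Married']
assert ('Shannon', 'Kelly') not in EXT['Married']
```

Nothing here goes beyond the lists above; the point is only that an interpretation is a finite, surveyable object.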
The atomic sentence 'Female(kelly)' is true because, as indicated above, the referent of 'kelly' is in the extension of the property designated by 'Female'. The atomic sentence 'Married(shannon, kelly)' is false because the ordered pair <Shannon, Kelly> is not in the extension of the relation designated by 'Married'.
(I)  An atomic sentence with a one-place predicate is true iff the referent of the term is a member of the extension of the predicate, and an atomic sentence with a two-place predicate is true iff the ordered pair formed from the referents of the terms in order is a member of the extension of the predicate. 
(II)  ~α is true iff α is false. 
(III)  (α & β) is true when both α and β are true; otherwise (α & β) is false. 
(IV)  (α v β) is true when at least one of α and β is true; otherwise (α v β) is false. 
(V)  (α → β) is true if and only if (iff) α is false or β is true. So, (α → β) is false just in case α is true and β is false. 
The meanings for '~' and '&' roughly correspond to the meanings of 'not' and 'and' as ordinarily used. We call ~α and (α & β) negation and conjunction formulas, respectively. The formula (α v β) is called a disjunction, and the meaning of 'v' corresponds to inclusive 'or'. There are a variety of conditionals in English (e.g., causal, counterfactual, logical), with each type having a distinct meaning. The conditional defined by (V) above is called the material conditional. One way of following (V) is to see that the truth conditions for (α → β) are the same as those for ~(α & ~β).
By (II) '~Married(shannon, kelly)' is true because, as noted above, 'Married(shannon, kelly)' is false. (II) also tells us that '~Female(kelly)' is false since 'Female(kelly)' is true. According to (III), '(~Married(shannon, kelly) & Female(kelly))' is true because '~Married(shannon, kelly)' is true and 'Female(kelly)' is true. And '(Male(shannon) & Female(shannon))' is false because 'Male(shannon)' is false. (IV) confirms that '(Female(kelly) v Married(evan, evan))' is true because, even though 'Married(evan, evan)' is false, 'Female(kelly)' is true. From (V) we know that the sentence '(~(beth = beth) → Male(shannon))' is true because '~(beth = beth)' is false. If α is false then (α → β) is true regardless of whether or not β is true. The sentence '(Female(beth) → Male(shannon))' is false because 'Female(beth)' is true and 'Male(shannon)' is false.
Before describing the truth conditions for quantified sentences we need to say something about the notion of satisfaction. We've defined truth only for the formulas of M that are sentences. So, the notions of truth and falsity are not applicable to non-sentences such as 'Male(x)' and '((x = x) → Female(x))' in which 'x' occurs free. However, objects may satisfy wffs that are non-sentences. We introduce the notion of satisfaction with some examples. An object satisfies 'Male(x)' just in case that object is male. Matt satisfies 'Male(x)'; Beth does not. This is the case because replacing 'x' in 'Male(x)' with 'matt' yields a truth while replacing the variable with 'beth' yields a falsehood. An object satisfies '((x = x) → Female(x))' if and only if it is either not identical with itself or is a female. Beth satisfies this wff (we get a truth when 'beth' is substituted for the variable in all of its occurrences); Matt does not (putting 'matt' in for 'x' wherever it occurs results in a falsehood). As a first approximation, we say that an object with a name, say 'a', satisfies a wff Ψv in which at most v occurs free if and only if the sentence that results by replacing v in all of its occurrences with 'a' is true. 'Male(x)' is neither true nor false because it is not a sentence, but it is either satisfied or not by a given object. Now we define the truth conditions for quantifications, utilizing the notion of satisfaction. For a more detailed discussion of the notion of satisfaction, see the article, "Logical Consequence, Model-Theoretic Conceptions."
Let Ψ be any formula of M in which at most v occurs free.
(VI)  ∃vΨ is true just in case there is at least one individual in the domain of quantification (e.g. at least one McKeon) that satisfies Ψ. 
(VII)  ∀vΨ is true just in case every individual in the domain of quantification (e.g. every McKeon) satisfies Ψ. 
Here are some examples. '∃x(Male(x) & Married(x, beth))' is true because Matt satisfies '(Male(x) & Married(x, beth))'; replacing 'x' wherever it appears in the wff with 'matt' results in a true sentence. The sentence '∃xOlderThan(x, x)' is false because no McKeon satisfies 'OlderThan(x, x)', that is replacing 'x' in 'OlderThan(x, x)' with the name of a McKeon always yields a falsehood.
The universal quantification '∀x( OlderThan(x, paige) → Male(x))' is false for there is a McKeon who doesn't satisfy '(OlderThan(x, paige) → Male(x))'. For example, Shannon does not satisfy '(OlderThan(x, paige) → Male(x))' because Shannon satisfies 'OlderThan(x, paige)' but not 'Male(x)'. The sentence '∀x(x = x)' is true because all McKeons satisfy 'x = x'; replacing 'x' with the name of any McKeon results in a true sentence.
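Clauses (I)-(VII) can be gathered into a small evaluator that treats satisfaction substitutionally, exploiting the fact that every McKeon has a name. This is a sketch of our own (the tuple encoding of formulas and the helper names `subst` and `true_in_M` are illustrative assumptions, not part of M):

```python
DOMAIN = {'Matt', 'Beth', 'Shannon', 'Kelly', 'Paige', 'Evan'}
REF = {d.lower(): d for d in DOMAIN}      # 'matt' refers to Matt, etc.
NAME = {d: d.lower() for d in DOMAIN}     # a name for each individual

AGES = ['Beth', 'Matt', 'Shannon', 'Kelly', 'Paige', 'Evan']  # oldest first
EXT = {
    'Male':      {('Matt',), ('Evan',)},
    'Married':   {('Matt', 'Beth'), ('Beth', 'Matt')},
    # generates exactly the listed extension of 'OlderThan'
    'OlderThan': {(a, b) for i, a in enumerate(AGES) for b in AGES[i + 1:]},
    '=':         {(d, d) for d in DOMAIN},
}

def subst(f, v, name):
    """Replace the free occurrences of variable v in f with the given name."""
    op = f[0]
    if op == 'atom':
        return ('atom', f[1], tuple(name if t == v else t for t in f[2]))
    if op == '~':
        return ('~', subst(f[1], v, name))
    if op in ('&', 'v', '->'):
        return (op, subst(f[1], v, name), subst(f[2], v, name))
    # quantifier case: occurrences of v inside ∃v / ∀v are bound, so stop
    return f if f[1] == v else (op, f[1], subst(f[2], v, name))

def true_in_M(f):
    """Truth clauses (I)-(VII) for sentences of M in the tuple encoding."""
    op = f[0]
    if op == 'atom':                                               # (I)
        return tuple(REF[t] for t in f[2]) in EXT[f[1]]
    if op == '~':   return not true_in_M(f[1])                     # (II)
    if op == '&':   return true_in_M(f[1]) and true_in_M(f[2])     # (III)
    if op == 'v':   return true_in_M(f[1]) or true_in_M(f[2])      # (IV)
    if op == '->':  return not true_in_M(f[1]) or true_in_M(f[2])  # (V)
    if op == 'exists':                                             # (VI)
        return any(true_in_M(subst(f[2], f[1], NAME[d])) for d in DOMAIN)
    if op == 'forall':                                             # (VII)
        return all(true_in_M(subst(f[2], f[1], NAME[d])) for d in DOMAIN)

# '∃x(Male(x) & Married(x, beth))' is true: Matt satisfies the matrix
assert true_in_M(('exists', 'x', ('&', ('atom', 'Male', ('x',)),
                                       ('atom', 'Married', ('x', 'beth')))))
# '∃x OlderThan(x, x)' is false: no McKeon is older than him- or herself
assert not true_in_M(('exists', 'x', ('atom', 'OlderThan', ('x', 'x'))))
# '∀x(x = x)' is true
assert true_in_M(('forall', 'x', ('atom', '=', ('x', 'x'))))
```

Handling the quantifiers by substitution over names mirrors the first-approximation account of satisfaction given above; the treatment via variable assignments is left to the model-theoretic entry.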
Note that in the explanation of satisfaction we suppose that an object satisfies a wff only if the object is named. But we don't want to presuppose that all objects in the domain of discourse are named. For the purposes of an example, suppose that the McKeons adopt a baby boy, but haven't named him yet. Then, '∃x Brother(x, evan)' is true because the adopted child satisfies 'Brother(x, evan)', even though we can't replace 'x' with the child's name to get a truth. To get around this is easy enough. We have added a list of names, 'w_{1}', 'w_{2}', 'w_{3}', etc., to M, and we may say that any unnamed object satisfies Ψv iff the replacement of v with a previously unused w_{i} assigned as a name of this object results in a true sentence. In the above scenario, '∃x Brother(x, evan)' is true because, ultimately, treating 'w_{1}' as a temporary name of the child, 'Brother(w_{1}, evan)' is true. Of course, the meanings of the predicates would have to be amended in order to reflect the addition of a new person to the domain of McKeons.
We have characterized an interpreted formal language M by defining what qualifies as a sentence of M and by specifying the conditions under which any M-sentence is true. The received view of logical consequence entails that the logical consequence relation in M turns on the nature of the logical constants in the relevant M-sentences. We shall regard just the sentential connectives, the quantifiers of M, and the identity predicate as logical constants (the language M is a first-order language). For discussion of the notion of a logical constant, see Logical Consequence, Philosophical Considerations and Logical Consequence, Model-Theoretic Conceptions. Intuitively, one M-sentence is a logical consequence of a set of M-sentences if and only if it is impossible for all the sentences in the set to be true without the former sentence being true as well. A model-theoretic conception of logical consequence in M clarifies this intuitive characterization of logical consequence by appealing to the semantic properties of the logical constants, represented in the above truth clauses (I)-(VII). The entry Logical Consequence, Model-Theoretic Conceptions formalizes the account of truth in language M and gives a model-theoretic characterization of logical consequence in M. In contrast to the model-theoretic conception, the deductive-theoretic conception clarifies logical consequence, conceived of in terms of deducibility, by appealing to the inferential properties of logical constants, portrayed as intuitively valid principles of inference, that is, principles justifying steps in deductions. See Logical Consequence, Philosophical Considerations for discussion of the relationship between the logical consequence relation and the model-theoretic and deductive-theoretic conceptions of it.
Deductive system N's inference rules, introduced below, are introduction and elimination rules, defined for each logical constant of our language M. An introduction rule introduces a logical constant into a proof and is useful for deriving a sentence that contains the constant. An elimination rule for the constant makes it possible to derive a sentence that has at least one less occurrence of the logical constant. Elimination rules are useful for deriving a sentence from another in which the constant appears.
Following Shapiro (1991, p. 3), we define a logic to be a language L plus either a model-theoretic or a deductive-theoretic account of logical consequence. A language with both characterizations is a full logic just in case the two characterizations coincide. For discussion of the relationship between the model-theoretic and deductive-theoretic accounts of logical consequence, see Logical Consequence, Philosophical Considerations. The logic for M developed below may be viewed as a classical logic or a first-order theory.
In stating N's rules, we begin with the simpler inference rules and give a sample formal deduction showing them in action. Then we turn to the inference rules that employ what we shall call subproofs. In the statement of the rules, we let P and Q be any sentences from our language M. We shall number each line of a formal deduction with a positive integer. We let k, l, m, n, o, p, and q be any positive integers such that k < m, l < m, and m < n < o < p < q.
&Intro
k.  P  
l.  Q  
m.  (P & Q)  &Intro: k, l 
&Elim
k.  (P & Q)  k.  (P & Q)  
m.  P  &Elim: k  m.  Q  &Elim: k 
&Intro allows us to derive a conjunction from its two parts (called conjuncts). According to the &Elim rule, we may derive a conjunct from a conjunction. To the right of the sentence derived using an inference rule is the justification. Steps in a proof are justified by identifying the lines in the proof used and by citing the appropriate rule. The vertical lines serve as proof margins, which, as you will shortly see, help in portraying the structure of a proof when it contains embedded subproofs.
~Elim
k.  ~~P  
m.  P  ~Elim: k 
The ~Elim rule allows us to drop double negations and infer what was subject to the two negations.
vIntro
k.  P  k.  P  
m.  (P v Q)  vIntro: k  m.  (Q v P)  vIntro: k 
By vIntro we may derive a disjunction from one of its parts (called disjuncts).
→Elim
k.  (P → Q)  
l.  P  
m.  Q  →Elim: k, l 
The →Elim rule corresponds to the principle of inference called modus ponens: from a conditional and its antecedent one may infer the consequent.
Here's a sample deduction using the above inference rules. The formal deduction (the sequence of sentences on lines 4-11) is associated with the pair
<{(Female(paige) & Female (kelly)), (Female(paige) → ~~Sister(paige, kelly)), (Female(kelly) → ~~Sister(paige, shannon))}, ((Sister(paige, kelly) & Sister(paige, shannon)) v Male(evan))>.
The first element is the set of basis sentences and the second element is the conclusion. We number the basis sentences and list them (beginning with 1) ahead of the deduction. The deduction ends with the conclusion.
1.  (Female(paige) & Female (kelly))  Basis  
2.  (Female(paige) → ~~Sister(paige, kelly))  Basis  
3.  (Female(kelly) → ~~Sister(paige, shannon))  Basis  
4.  Female(paige)  &Elim: 1  
5.  Female(kelly)  &Elim: 1  
6.  ~~Sister(paige, kelly)  →Elim: 2, 4  
7.  Sister(paige, kelly)  ~Elim: 6  
8.  ~~Sister(paige, shannon)  →Elim: 3, 5  
9.  Sister(paige, shannon)  ~Elim: 8  
10.  (Sister(paige, kelly) & Sister(paige, shannon))  &Intro: 7, 9  
11.  ((Sister(paige, kelly) & Sister(paige, shannon)) v Male(evan))  vIntro: 10 
Again, the column all the way to the right gives the explanations for each line of the proof. Assuming the adequacy of N, the formal deduction establishes that the following inference is correct.
(Female(paige) & Female (kelly))  
(Female(paige) → ~~Sister(paige, kelly))  
(Female(kelly) → ~~Sister(paige, shannon))


(therefore)  ((Sister(paige, kelly) & Sister(paige, shannon)) v Male(evan)) 
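The deduction above also illustrates the earlier claim that proofhood in a deductive system is decidable "purely by inspection and calculation." The following Python sketch (our own illustration; the tuple encoding of sentences and the function `check` are assumptions, and it covers only subproof-free proofs using the five rules introduced so far) mechanically verifies lines 1-11:

```python
def check(lines):
    """lines: dict num -> (formula, rule, cited). Basis lines have rule None."""
    for num, (f, rule, cited) in sorted(lines.items()):
        assert all(c < num for c in cited), "may only cite earlier lines"
        prev = [lines[c][0] for c in cited]
        if rule is None:                                  # basis sentence
            ok = True
        elif rule == '&Intro':                            # f is (prev0 & prev1)
            ok = f == ('&', prev[0], prev[1])
        elif rule == '&Elim':                             # f is a conjunct
            ok = prev[0][0] == '&' and f in prev[0][1:]
        elif rule == '~Elim':                             # cited line is ~~f
            ok = prev[0] == ('~', ('~', f))
        elif rule == 'vIntro':                            # cited line is a disjunct
            ok = f[0] == 'v' and prev[0] in f[1:]
        elif rule == '->Elim':                            # modus ponens
            ok = prev[0] == ('->', prev[1], f)
        else:
            ok = False
        assert ok, f"line {num} fails"
    return True

# The deduction on lines 1-11 above, transcribed into the encoding:
L = {
 1: (('&', 'Female(paige)', 'Female(kelly)'), None, []),
 2: (('->', 'Female(paige)', ('~', ('~', 'Sister(paige, kelly)'))), None, []),
 3: (('->', 'Female(kelly)', ('~', ('~', 'Sister(paige, shannon)'))), None, []),
 4: ('Female(paige)', '&Elim', [1]),
 5: ('Female(kelly)', '&Elim', [1]),
 6: (('~', ('~', 'Sister(paige, kelly)')), '->Elim', [2, 4]),
 7: ('Sister(paige, kelly)', '~Elim', [6]),
 8: (('~', ('~', 'Sister(paige, shannon)')), '->Elim', [3, 5]),
 9: ('Sister(paige, shannon)', '~Elim', [8]),
 10: (('&', 'Sister(paige, kelly)', 'Sister(paige, shannon)'), '&Intro', [7, 9]),
 11: (('v', ('&', 'Sister(paige, kelly)', 'Sister(paige, shannon)'),
            'Male(evan)'), 'vIntro', [10]),
}
assert check(L)
```

Each line is checked locally, by inspecting only the cited lines; no ingenuity is required to verify a proof, only to find one.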
For convenience in building proofs, we expand M to include '⊥', which we use as a symbol for a contradiction (e.g., '(Female(beth) & ~Female(beth))').
⊥Intro
k.  P  
l.  ~P  
m.  ⊥  ⊥Intro: k, l 
⊥Elim
k.  ⊥  
m.  P  ⊥Elim: k 
If we have derived a sentence and its negation we may derive ⊥ using ⊥Intro. The ⊥Elim rule represents the idea that any sentence P is deducible from a contradiction. So, from ⊥ we may derive any sentence P using ⊥Elim.
Here's a deduction using the two rules.
1.  (Parent(beth, evan) & ~Parent(beth, evan))  Basis  
2.  Parent(beth, evan)  &Elim: 1  
3.  ~Parent(beth, evan)  &Elim: 1  
4.  ⊥  ⊥Intro: 2, 3  
5.  Parent(beth, shannon)  ⊥Elim: 4 
For convenience, we introduce a reiteration rule that allows us to repeat steps in a proof as needed.
Reit
k.  P  
.  
.  
.  
m.  P  Reit: k 
We now turn to the rules for the sentential connectives that employ what we shall call subproofs. Consider the following inference.
1.  ~(Married(shannon, kelly) & OlderThan(shannon, kelly))  
2.  Married(shannon, kelly)


(therefore)  ~OlderThan(shannon, kelly) 
Here is an informal deduction of the conclusion from the basis sentences.
Proof: Suppose that Shannon is older than Kelly. Then, from this assumption and basis sentence 2, it follows that '((Shannon is married to Kelly) & (Shannon is older than Kelly))' is true. But this contradicts the first basis sentence, '~((Shannon is married to Kelly) & (Shannon is older than Kelly))', which is true by hypothesis. Hence our initial supposition is false: '~(Shannon is older than Kelly)' is true.
Such a proof is called a reductio ad absurdum proof (or reductio for short). Reductio ad absurdum is Latin for 'reduction to the absurd'. (For more information, see the article "Reductio ad absurdum".) In order to model this proof in N we introduce the ~Intro rule.
~Intro
k.  P  Assumption  
.  
.  
.  
m.  ⊥  
n.  ~P  ~Intro: k-m 
The ~Intro rule allows us to infer the negation of an assumption if we have derived a contradiction, symbolized by '⊥', from the assumption. The indented proof margin (k-m) signifies a subproof. In a subproof the first line is always an assumption (and so requires no justification), which is cancelled when the subproof is ended and we are back out on a line that sits on a wider proof margin. The effect of this is that we can no longer appeal to any of the lines in the subproof to generate later lines on wider proof margins. No deduction ends in the middle of a subproof.
Here is a formal analogue of the above informal reductio.
1.  ~(Married(shannon, kelly) & OlderThan(shannon, kelly))  Basis  
2.  Married(shannon, kelly)  Basis  
3.  OlderThan(shannon, kelly)  Assumption  
4.  (Married(shannon, kelly) & OlderThan(shannon, kelly))  &Intro: 2, 3  
5.  ⊥  ⊥Intro: 1, 4  
6.  ~OlderThan(shannon, kelly)  ~Intro: 3-5 
We signify a subproof with the indented proof margin line; the start and finish of a subproof are indicated by the start and break of the indented proof margin. An assumption, like a basis sentence, is a sentence we suppose true for the purposes of the deduction. The difference is that whereas a basis sentence may be used at any step in a proof, an assumption may only be used to make a step within the subproof it heads. At the end of the subproof, the assumption is discharged. We now look at more subproofs in action and introduce another of N's inference rules. Consider the following inference.
1.  (Male(kelly) v Female(kelly))  
2.  (Male(kelly) → ~Sister(kelly, paige))  
3.  (Female(kelly) → ~Brother(kelly, evan))


(therefore)  (~Sister(kelly, paige) v ~Brother(kelly, evan)) 
Informal Proof:
By assumption '(Male(kelly) v Female(kelly))' is true, that is, by assumption at least one of the disjuncts is true.
Suppose that 'Male(kelly)' is true. Then by modus ponens we may derive that '~Sister(kelly, paige)' is true from this assumption and the basis sentence 2. Then '(~Sister(kelly, paige) v ~Brother(kelly, evan))' is true.
Suppose that 'Female(kelly)' is true. Then by modus ponens we may derive that '~Brother(kelly, evan)' is true from this assumption and the basis sentence 3. Then '(~Sister(kelly, paige) v ~Brother(kelly, evan))' is true.
So in either case we have derived that '(~Sister(kelly, paige) v ~Brother(kelly, evan))' is true. Thus we have shown that this sentence is a deductive consequence of the basis sentences.
We model this proof in N using the vElim rule.
vElim
k.  (P v Q)  
m.  P  Assumption  
.  
.  
.  
n.  R  
o.  Q  Assumption  
.  
.  
.  
p.  R  
q.  R  vElim: k, m-n, o-p 
The vElim rule allows us to derive a sentence from a disjunction by deriving it from each disjunct, possibly using sentences on earlier lines that sit on wider proof margins.
The following formal proof models the above informal one.
1.  (Male(kelly) v Female(kelly))  Basis  
2.  (Male(kelly) → ~Sister(kelly, paige))  Basis  
3.  (Female(kelly) → ~Brother(kelly, evan))  Basis  
4.  Male(kelly)  Assumption  
5.  ~Sister(kelly, paige)  →Elim: 2, 4  
6.  (~Sister(kelly, paige) v ~Brother(kelly, evan))  vIntro: 5  
7.  Female(kelly)  Assumption  
8.  ~Brother(kelly, evan)  →Elim: 3, 7  
9.  (~Sister(kelly, paige) v ~Brother(kelly, evan))  vIntro: 8  
10.  (~Sister(kelly, paige) v ~Brother(kelly, evan))  vElim: 1, 4-6, 7-9 
Here is another deduction using vElim, together with the ⊥ rules and Reit; it shows that Q is deducible from (P v Q) and ~P, a pattern traditionally called disjunctive syllogism.
1.  (P v Q)  Basis  
2.  ~P  Basis  
3.  P  Assumption  
4.  ⊥  ⊥Intro: 2, 3  
5.  Q  ⊥Elim: 4  
6.  Q  Assumption  
7.  Q  Reit: 6  
8.  Q  vElim: 1, 3-5, 6-7 
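The deduction above establishes that Q is deducible from (P v Q) and ~P. As a quick semantic cross-check in the spirit of clauses (II) and (IV) (our illustration, not part of N), a sweep over all truth-values confirms that Q is true whenever both basis sentences are:

```python
from itertools import product

# every assignment of truth-values to P and Q
for P, Q in product([True, False], repeat=2):
    if (P or Q) and not P:   # both basis sentences true under this assignment
        assert Q             # ...then the conclusion Q is true as well
```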
Now we introduce the →Intro rule by considering the following inference.
1.  (OlderThan(shannon, kelly) → OlderThan(shannon, paige))  
2.  (OlderThan(shannon, paige) → OlderThan(shannon, evan))


(therefore)  (OlderThan(shannon, kelly) → OlderThan(shannon, evan)) 
Informal proof:
Suppose that OlderThan(shannon, kelly). From this assumption and basis sentence 1 we may derive, by modus ponens, that OlderThan(shannon, paige). From this and basis sentence 2 we get, again by modus ponens, that OlderThan(shannon, evan). Hence, if OlderThan(shannon, kelly), then OlderThan(shannon, evan).
The structure of this proof is that of a conditional proof: a deduction of a conditional from a set of basis sentences which starts with the assumption of the antecedent, proceeds to a derivation of the consequent, and concludes with the conditional. To build conditional proofs in N, we rely on the →Intro rule.
→Intro
k.  P  Assumption  
.  
.  
.  
m.  Q  
n.  (P → Q)  →Intro: k-m 
According to the →Intro rule we may derive a conditional if we derive the consequent Q from the assumption of the antecedent P, and, perhaps, other sentences occurring earlier in the proof on wider proof margins. Again, such a proof is called a conditional proof.
We model the above informal conditional proof in N as follows.
1.  (OlderThan(shannon, kelly) → OlderThan(shannon, paige))  Basis  
2.  (OlderThan(shannon, paige) → OlderThan(shannon, evan))  Basis  
3.  OlderThan(shannon, kelly)  Assumption  
4.  OlderThan(shannon, paige)  →Elim: 1, 3  
5.  OlderThan(shannon, evan)  →Elim: 2, 4  
6.  (OlderThan(shannon, kelly) → OlderThan(shannon, evan))  →Intro: 3-5 
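The conditional derived above can also be checked semantically. Reading '→' materially, as in clause (V), a truth-value sweep (again our illustration, not part of N) confirms that the concluding conditional is true whenever both basis conditionals are:

```python
from itertools import product

# A, B, C stand in for the three 'OlderThan' sentences of the inference
for A, B, C in product([True, False], repeat=3):
    basis_1 = (not A) or B   # (A -> B), read via clause (V)
    basis_2 = (not B) or C   # (B -> C)
    if basis_1 and basis_2:
        assert (not A) or C  # the conclusion (A -> C) is true as well
```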
Mastery of a deductive system facilitates the discovery of proof pathways in hard cases and increases one's efficiency in communicating proofs to others and explaining why a sentence is a logical consequence of others. For example, suppose that (1) if Beth is not Paige's parent, then it is false that if Beth is a parent of Shannon, Shannon and Paige are sisters. Further suppose (2) that Beth is not Shannon's parent. Then we may conclude that Beth is Paige's parent. Of course, knowing the type of sentences involved is helpful for then we have a clearer idea of the inference principles that may be involved in deducing that Beth is a parent of Paige. Accordingly, we represent the two basis sentences and the conclusion in M, and then give a formal proof of the latter from the former.
1.  (~Parent(beth, paige) → ~(Parent(beth, shannon) → Sister(shannon, paige)))  Basis  
2.  ~Parent(beth, shannon)  Basis  
3.  ~Parent(beth, paige)  Assumption  
4.  ~(Parent(beth, shannon) → Sister(shannon, paige))  →Elim: 1, 3  
5.  Parent(beth, shannon)  Assumption  
6.  ⊥  ⊥Intro: 2, 5  
7.  Sister(shannon, paige)  ⊥Elim: 6  
8.  (Parent(beth, shannon) → Sister(shannon, paige))  →Intro: 57  
9.  ⊥  ⊥Intro: 4, 8  
10.  ~~Parent(beth, paige)  ~Intro: 3-9  
11.  Parent(beth, paige)  ~Elim: 10 
Because we derived a contradiction at line 9, we got '~~Parent(beth, paige)' at line 10, using ~Intro, and then we derived 'Parent(beth, paige)' by ~Elim. Look at the conditional proof (lines 5-7) from which we derived line 8. Pretty neat, huh? Lines 2 and 5 generated the contradiction from which we derived 'Sister(shannon, paige)' at line 7 in order to get the conditional at line 8. This is our first example of a subproof (5-7) embedded in another subproof (3-9). It is unlikely that, independent of the resources of a deductive system, a reasoner would be able to readily build the informal analogue of this pathway from the basis sentences to the sentence at line 11. Again, mastery of a deductive system such as N can increase the efficiency of our performances of rigorous reasoning and cultivate skill at producing elegant proofs (proofs that take the fewest steps to get from the basis to the conclusion).
We now introduce the Intro and Elim rules for the identity symbol and the quantifiers. Let n and n' be any names, and Ωn and Ωn' be any well-formed formulas in which n and n' appear and that have no free variables.
=Intro
k. (n = n) =Intro 
=Elim
k.  Ωn  
l.  (n = n' )  
m.  Ωn'  =Elim: k, l 
The =Intro rule allows us to introduce (n = n) at any step in a proof. Since (n = n) is deducible from any sentence, there is no need to identify the lines from which line k is derived. In effect, the =Intro rule confirms that '(paige = paige)', '(shannon = shannon)', '(kelly = kelly)', and so on may be inferred from any sentence(s). The =Elim rule tells us that if we have proven Ωn and (n = n'), then we may derive Ωn', which results from Ωn by replacing n with n' in some, but possibly not all, of its occurrences. The =Elim rule represents the principle known as the indiscernibility of identicals, which says that if (n = n') is true, then whatever is true of the referent of n is true of the referent of n'. This principle grounds the following inference:
1.  ~Sister(beth, kelly)  
2.  (beth = shannon)


(therefore)  ~Sister(shannon, kelly) 
The indiscernibility of identicals is fairly obvious. If I know that Beth isn't Kelly's sister and that Beth is Shannon (perhaps 'Shannon' is an alias) then this establishes, with the help of the indiscernibility of identicals, that Shannon isn't Kelly's sister. Now we turn to the quantifier rules.
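Before turning to those rules, note that the substitution licensed by =Elim is itself a mechanical operation. Here is a minimal Python sketch of the inference just given; the helper `eq_elim` and the string representation of formulas are hypothetical, and the naive string replacement would, in a real implementation, have to respect name boundaries.

```python
# Sketch of =Elim as an operation on formula strings: given a proved
# formula about a name n and an identity (n = n'), substitute n' for n.
# Here every occurrence is replaced; the rule itself also permits
# replacing only some occurrences. Illustrative helper, not part of N.

def eq_elim(formula, name, new_name):
    """From a formula about name and (name = new_name), derive the
    formula with new_name in place of name (all occurrences)."""
    return formula.replace(name, new_name)

# The inference from the text: ~Sister(beth, kelly) and (beth = shannon)
# yield ~Sister(shannon, kelly).
assert eq_elim("~Sister(beth, kelly)", "beth", "shannon") == "~Sister(shannon, kelly)"
```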
Let Ωv be a formula in which v is the only free variable, and let n be any name.
∃Intro
k.  Ωn  
m.  ∃vΩv  ∃Intro: k 
∃Elim
k.  ∃vΩv  
[n]  m.  Ωn  Assumption  
.  
.  
.  
n.  P  
o.  P  ∃Elim: k, m–n 
Here, the name n must be unique to the subproof; that is, n doesn't occur on any line above m or on any line below n.
The ∃Intro rule, which represents the principle of inference known as existential generalization, tells us that if we have proven Ωn, then we may derive ∃vΩv, which results from Ωn by replacing n with a variable v in some but possibly not all of its occurrences and prefixing the existential quantifier. According to this rule, we may infer, say, '∃x Married(x, matt)' from the sentence 'Married(beth, matt)'. By the ∃Elim rule, we may reason from a sentence that is produced from an existential quantification by stripping the quantifier and replacing the resulting free variable in all of its occurrences by a name which is new to the proof. Recall that the language M has an infinite number of constants, and the name introduced by the ∃Elim rule may be one of the w_{i}. We regard the assumption at line m, which starts the embedded subproof, as saying "Suppose n names an arbitrary individual from the domain of discourse such that Ωn is true." To illustrate the basic idea behind the ∃Elim rule, if I tell you that Shannon admires some McKeon, you can't infer that Shannon admires any particular McKeon such as Matt, Beth, Shannon, Kelly, Paige, or Evan. Nevertheless we have it that she admires somebody. The principle of inference corresponding to the ∃Elim rule, called existential instantiation, allows us to assign this 'somebody' an arbitrary name new to the proof, say, 'w_{1}', and reason within the relevant subproof from 'Shannon admires w_{1}'. Then we cancel the assumption and infer a sentence that doesn't make any claims about w_{1}. For example, suppose that (1) Shannon admires some McKeon. Let's call this McKeon 'w_{1}', that is, assume (2) that Shannon admires a McKeon named 'w_{1}'. By the principle of inference corresponding to vIntro we may derive (3) that Shannon admires w_{1} or w_{1} admires Kelly. From (3), we may infer by existential generalization (4) that for some McKeon x, Shannon admires x or x admires Kelly. 
We now cancel the assumption (that is, cancel (2)) by concluding (5) that for some McKeon x, Shannon admires x or x admires Kelly from (1) and the subproof (2)–(4), by existential instantiation. Here is the above reasoning set out formally.
1.  ∃x Admires(shannon, x)  Basis  
[w_{1}]  2.  Admires(shannon, w_{1})  Assumption  
3.  (Admires(shannon, w_{1}) v Admires(w_{1}, kelly))  vIntro: 2  
4.  ∃x(Admires(shannon, x) v Admires(x, kelly))  ∃Intro: 3  
5.  ∃x(Admires(shannon, x) v Admires(x, kelly))  ∃Elim: 1, 2–4 
The string at the assumption of the subproof (line 2) says "Suppose that 'w_{1}' names an arbitrary McKeon such that 'Admires(shannon, w_{1})' is true." This is not a sentence of M, but of the metalanguage for M, that is, the language used to talk about M. Hence, the ∃Elim rule (as well as the ∀Intro rule introduced below) has a metalinguistic character.
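The side condition on the ∃Elim rule (and on the ∀Intro rule below), that the instantiating name be new to the proof, is also mechanically checkable, in keeping with the decidability of proofhood. A small Python sketch, with proof lines as plain strings and the helper name invented for illustration:

```python
# The freshness requirement on the name introduced by Exists-Elim can
# be verified by scanning the lines of the proof written so far.
# Formulas are plain strings; 'name_is_fresh' is an invented helper.

def name_is_fresh(name, lines_so_far):
    """True when the name occurs on none of the lines already written."""
    return all(name not in line for line in lines_so_far)

proof_so_far = ["∃x Admires(shannon, x)"]
assert name_is_fresh("w1", proof_so_far)           # 'w1' may start the subproof
assert not name_is_fresh("shannon", proof_so_far)  # 'shannon' may not
```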
∀Intro
[n]  k.  Assumption  
.  
.  
.  
m.  Ωn  
n.  ∀vΩv  ∀Intro: k–m  
n must be unique to the subproof 
∀Elim
k.  ∀vΩv  
m.  Ωn  ∀Elim: k 
The ∀Elim rule corresponds to the principle of inference known as universal instantiation: to infer that something holds for an individual of the domain if it holds for the entire domain. The ∀Intro rule allows us to derive a claim that holds for the entire domain of discourse from a proof that the claim holds for an arbitrary selected individual from the domain. The assumption at line k reads in English "Suppose n names an arbitrarily selected individual from the domain of discourse." As with the ∃Elim rule, the name introduced by the ∀Intro rule may be one of the w_{i}. The ∀Intro rule corresponds to the principle of inference often called universal generalization.
For example, suppose that we are told that (1) if a McKeon admires Paige, then that McKeon admires himself/herself, and that (2) every McKeon admires Paige. To show that we may correctly infer that every McKeon admires himself/herself we appeal to the principle of universal generalization, which (again) is represented in N by the ∀Intro rule. We begin by assuming that (3) a McKeon is named 'w_{1}'. All we assume about w_{1} is that w_{1} is one of the McKeons. From (2), we infer that (4) w_{1} admires Paige. We know from (1), using the principle of universal instantiation (the ∀Elim rule in N), that (5) if w_{1} admires Paige then w_{1} admires w_{1}. From (4) and (5) we may infer that (6) w_{1} admires w_{1} by modus ponens. Since w_{1} is an arbitrarily selected individual (and so what holds for w_{1} holds for all McKeons) we may conclude from (3)–(6) that (7) every McKeon admires himself/herself follows from (1) and (2) by universal generalization. This reasoning is represented by the following formal proof.
1.  ∀x(Admires(x, paige) → Admires(x, x))  Basis  
2.  ∀x Admires(x, paige)  Basis  
[w_{1}]  3.  Assumption  
4.  Admires(w_{1}, paige)  ∀Elim: 2  
5.  (Admires(w_{1}, paige) → Admires(w_{1}, w_{1}))  ∀Elim: 1  
6.  Admires(w_{1}, w_{1})  →Elim: 4, 5  
7.  ∀x Admires(x, x)  ∀Intro: 3–6 
Line 3, the assumption of the subproof, corresponds to the English sentence "Let 'w_{1}' refer to an arbitrary McKeon." The notion of a name referring to an arbitrary individual from the domain of discourse, utilized by both the ∀Intro and ∃Elim rules in the assumptions that start the respective subproofs, incorporates two distinct ideas. One, relevant to the ∃Elim rule, means "some specific object, but I don't know which", while the other, relevant to the ∀Intro rule, means "any object, it doesn't matter which". (See Pelletier 1999, pp. 118–120 for discussion.)
Consider:
K = {All McKeons admire those who admire somebody, Some McKeon admires a McKeon}
X = Paige admires Paige
Here's a proof that X is deducible from K.
1.  ∀x(∃y Admires(x, y) → ∀z Admires(z, x))  Basis  
2.  ∃x∃y Admires(x, y)  Basis  
[w_{1}]  3.  ∃y Admires(w_{1}, y)  Assumption  
4.  (∃y Admires(w_{1}, y) → ∀z Admires(z, w_{1}))  ∀Elim: 1  
5.  ∀z Admires(z, w_{1})  →Elim: 3, 4  
6.  Admires(paige, w_{1})  ∀Elim: 5  
7.  ∃y Admires(paige, y)  ∃Intro: 6  
8.  (∃y Admires(paige, y) → ∀z Admires(z, paige))  ∀Elim: 1  
9.  ∀z Admires(z, paige)  →Elim: 7, 8  
10.  Admires(paige, paige)  ∀Elim: 9  
11.  Admires(paige, paige)  ∃Elim: 2, 3–10 
An informal correlate, put somewhat succinctly, runs as follows.
Let's call the unnamed admirer, mentioned in (2), w_{1}. From this and (1), every McKeon admires w_{1} and so Paige admires w_{1}. Hence, Paige admires somebody. From this and (1) it follows that everybody admires Paige. So, Paige admires Paige. This is our desired conclusion.
Even though the informal proof skips steps and doesn't mention by name the principles of inference used, the formal proof guides its construction.
We began the article by presenting the deductive-theoretic characterization of logical consequence: X is a logical consequence of a set K of sentences if and only if X is deducible from K, that is, there is a deduction of X from K. To make it official, we now characterize the deductive consequence relation in M in terms of deducibility in N.
X is a deductive consequence of K if and only if K ⊢_{N} X, that is, X is deducible in N from K
We now inquire into the status of this characterization of deductive consequence.
The first thing to note is that deductive system N is complete and sound with respect to the model-theoretic consequence relation defined in Logical Consequence, Model-Theoretic Conceptions: Section 4.4. Let
K ⊢_{N} X
abbreviate
X is deducible in N from K
Similarly, let
K ⊨ X
abbreviate
X is a model-theoretic consequence of K, that is, every M-structure that is a model of K is also a model of X. (For more information on structures and models, see Logical Consequence, Model-Theoretic Conceptions.)
The completeness and soundness of N mean that for any set K of M-sentences and M-sentence X, K ⊢_{N} X if and only if K ⊨ X. A soundness proof establishes K ⊢_{N} X only if K ⊨ X, and a completeness proof establishes K ⊢_{N} X if K ⊨ X. So, the ⊢_{N} and ⊨ relations, defined on sentences of M, are extensionally equivalent. The question arises: which characterization of the logical consequence relation is more basic or fundamental?
Note next that the ⊢_{N}-consequence relation is compact. For any deductive system D and pair ⟨K, X⟩ there is a K' such that K ⊢_{D} X if and only if K' ⊢_{D} X, where K' is a finite subset of sentences from K. As pointed out by Tarski (1936), among others, there are intuitively correct principles of inference reflected in certain languages according to which one may infer a sentence X from a set K of sentences, even though it is incorrect to infer X from any finite subset of K. Here's a rendition of his reasoning, focusing on the ⊢_{N}-consequence relation defined on a language for arithmetic, which allows us to talk about the natural numbers 0, 1, 2, 3, and so on. Let 'P' be a predicate defined over the domain of natural numbers and let 'NatNum(x)' abbreviate 'x is a natural number'. According to Tarski, intuitively,
∀x(NatNum(x) → P(x))
is a logical consequence of the infinite set S of sentences
P(0)
P(1)
P(2)
.
.
.
However, the universal quantification is not a ⊢_{N}-consequence of the set S. The reason why is that the ⊢_{N}-consequence relation is compact: for any sentence X and set K of sentences, X is a ⊢_{N}-consequence of K if and only if X is a ⊢_{N}-consequence of some finite subset of K. Proofs in N are objects of finite length; a deduction is a finite sequence of sentences. Since the universal quantification is not a ⊢_{N}-consequence of any finite subset of S, it is not a ⊢_{N}-consequence of S. By the completeness of system N, it follows that
∀x(NatNum(x) → P(x))
is not a ⊨-consequence of S either. Consider the structure U* whose domain is the set of McKeons. Let all numerals name Beth. Let the extension of 'NatNum' be the entire domain, and the extension of 'P' be just Beth. Then each element of S is true in U*, but '∀x (NatNum(x) → P(x))' is not true in U*. (See Logical Consequence, Model-Theoretic Conceptions for further discussion of structures.) Note that the sentences in S only say that P holds for 0, 1, 2, and so on, and not also that 0, 1, 2, etc., are all the elements of the domain of discourse. The above interpretation takes advantage of this fact by reinterpreting all numerals as names for Beth.
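The structure U* can be checked by brute force. In the Python sketch below, the domain, the extensions, and the interpretation of the numerals follow the description above; the variable names are, of course, illustrative.

```python
# Evaluating Tarski's counterexample structure U* exhaustively.
# Domain: the McKeons; every numeral names Beth; 'NatNum' holds of the
# whole domain; 'P' holds of Beth alone.

domain = {"beth", "matt", "shannon", "kelly", "paige", "evan"}
natnum_ext = set(domain)   # extension of 'NatNum': the entire domain
p_ext = {"beth"}           # extension of 'P': just Beth

def numeral_referent(n):
    """In U*, every numeral 0, 1, 2, ... names Beth."""
    return "beth"

# Each sentence P(0), P(1), P(2), ... in S is true in U* ...
assert all(numeral_referent(n) in p_ext for n in range(100))

# ... but the universal quantification fails: some element of the
# domain satisfies NatNum yet lacks P.
universal = all((x not in natnum_ext) or (x in p_ext) for x in domain)
assert universal is False
```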
However, we can reflect model-theoretically the intuition that '∀x(NatNum(x) → P(x))' is a logical consequence of set S by doing one of two things. We can add to S the functional equivalent of the claim that 0, 1, 2, etc., are all the natural numbers there are, on the basis that this is an implicit assumption of the view that the universal quantification follows from S. Or we could add 'NatNum' and all numerals to our list of logical terms. On either option it still won't be the case that '∀x(NatNum(x) → P(x))' is a ⊢_{N}-consequence of the set S. There is no way to accommodate the intuition that '∀x(NatNum(x) → P(x))' is a logical consequence of S in terms of a compact consequence relation. Tarski takes this to be a reason to think that the model-theoretic account of logical consequence is definitive, as opposed to an account of logical consequence in terms of a compact consequence relation such as ⊢_{N}.
Tarski's illustration shows that what is called the ω-rule is a correct inference rule.
The ω-rule is that from:
{P(0), P(1), P(2), ...}
one may infer
∀x(NatNum(x) → P(x))
with respect to any predicate P. Any inference guided by this rule is correct even though it can't be represented in a deductive system as this notion has been used here and discussed in Logical Consequence, Philosophical Considerations.
Compactness is not a salient feature of logical consequence conceived deductive-theoretically. This suggests, by the third criterion of a successful theoretical definition of logical consequence mentioned in Logical Consequence, Philosophical Considerations, that no compact consequence relation is definitive of the intuitive notion of deducibility. So, assuming that deductive system N is correct (that is, deducibility is coextensive in M with the ⊢_{N}-relation), we can't treat
X is intuitively deducible from K if and only if K ⊢_{N} X.
as a definition of deducibility in M since
X is a deductive consequence of K if and only if X is deducible in a correct deductive system from K.
is not true with respect to languages for which deducibility is not captured by any compact consequence relation (that is, not captured by any deduction-system account of it). Some (e.g., Quine) demur from using a language for purposes of science in which deducibility is not completely represented by a deduction-system account, because of epistemological considerations. Nevertheless, as Tarski (1936) argues, the fact that there cannot be deduction-system accounts of some intuitively correct principles of inference is reason for taking a model-theoretic characterization of logical consequence to be more fundamental than any characterization in terms of a deductive system sound and complete with respect to the model-theoretic characterization.
In discussing the status of the characterization of logical consequence in terms of deductive system N, we assumed that N is correct. The question arises whether N is, indeed, correct. That is, is it the case that X is intuitively deducible from K if and only if K ⊢_{N} X? The biconditional holds only if both (1) and (2) are true.
(1) If sentence X is intuitively deducible from set K of sentences, then K ⊢_{N} X.
(2) If K ⊢_{N} X, then sentence X is intuitively deducible from set K of sentences.
So N is incorrect if either (1) or (2) is false. The truth of (1) and (2) is relevant to the correctness of the characterization of logical consequence in terms of system N, because any adequate deductive-theoretic characterization of logical consequence must identify the logical terms of the relevant language and account for their inferential properties (for discussion, see Logical Consequence, Philosophical Considerations: Section 4). (1) is false if the list of logical terms in M is incomplete. In such a case, there will be a sentence X and set K of sentences such that X is intuitively deducible from set K because of at least one inferential property of logical terminology unaccounted for by N, and so it is false that K ⊢_{N} X (for discussion of some of the issues surrounding what qualifies as a logical term, see Logical Consequence, Model-Theoretic Conceptions: Section 5.3). In this case, N would be incorrect because it wouldn't completely account for the inferential machinery of language M. (2) is false if there are deductions in N that are intuitively incorrect. Are there such deductions? In order to fine-tune the question, note that the sentential connectives, the identity symbol, and the quantifiers of M are intended to correspond to or, and, not, if...then (the indicative conditional), is identical with, some, and all. Hence, N is a correct deductive system only if the Intro and Elim rules of N reflect the inferential properties of these ordinary language expressions. In what follows, we sketch three views that are critical of the correctness of system N because they reject (2).
Not everybody accepts it as a fact that any sentence is deducible from a contradiction, and so some question the correctness of the ⊥Elim rule. Consider the following informal proof of Q from P & ~P, for sentences P and Q, as a rationale for the ⊥Elim rule.
From (1) P and notP, we may correctly infer (2) P, from which it is correct to infer (3) P or Q. We derive (4) notP from (1). (5) Q follows from (3) and (4).
The proof seems to be composed of valid modes of inference. Critics of the ⊥Elim rule are obliged to tell us where it goes wrong. Here we follow the relevance logicians Anderson and Belnap (1962, pp. 105–108; for discussion, see Read 1995, pp. 54–60). In a nutshell, Anderson and Belnap claim that the proof is defective because it commits a fallacy of equivocation. The move from (2) to (3) is correct only if or has the sense of at least one. For example, from Kelly is female it is legitimate to infer that at least one of the two sentences Kelly is female and Kelly is older than Paige is true. On this sense of or, given that Kelly is female, one may infer that Kelly is female or whatever you like. However, in order for the passage from (3) and (4) to (5) to be legitimate, the sense of or in (3) must be if not...then. For example, from if Kelly is not female, then Kelly is not Paige's sister and Kelly is not female it is correct to infer Kelly is not Paige's sister. Hence, the above "support" for the ⊥Elim rule is defective, for it equivocates on the meaning of or.
Two things to highlight. First, Anderson and Belnap think that the inference from (2) to (3) on the if not...then reading of or is incorrect. Given that Kelly is female, it is problematic to deduce that if she is not, then Kelly is older than Paige, or whatever you like. Such an inference commits a fallacy of relevance, for Kelly not being female is not relevant to her being older than Paige. The representation of this inference in system N appeals to the ⊥Elim rule, which is rejected by Anderson and Belnap. Second, the principle of inference underlying the move from (3) and (4) to (5), from P or Q and notP to infer Q, is called the principle of the disjunctive syllogism. Anderson and Belnap claim that this principle is not generally valid when or has the sense of at least one, which it has when it is rendered by 'v' (e.g., see above). If Q is relevant to P, then the principle holds on this reading of or.
It is worthwhile to note the essentially informal nature of the debate. It calls upon our pretheoretic intuitions about correct inference. It would be quite useless to cite the proof in N of the validity of disjunctive syllogism (given above) against Anderson and Belnap, for it relies on the ⊥Elim rule whose legitimacy is in question. No doubt, pretheoretical notions and original intuitions must be refined and shaped somewhat by theory. Our pretheoretic notion of correct deductive reasoning in ordinary language is not completely determinate and precise independently of the resources of a full or partial logic. (See Shapiro 1991, chaps. 1 and 2 for discussion of the interplay between theory and pretheoretic notions and intuitions.) Nevertheless, hardcore intuitions regarding correct deductive reasoning do seem to drive the debate over the legitimacy of deductive systems such as N and over the legitimacy of the ⊥Elim rule in particular. Anderson and Belnap (1962, p. 108) write that denying the principle of the disjunctive syllogism, regarded as a valid mode of inference since Aristotle, "... will seem hopelessly naïve to those logicians whose logical intuitions have been numbed through hearing and repeating the logicians' fairy tales of the past half century, and hence stand in need of further support". The possibility that intuitions in support of the general validity of the principle of the disjunctive syllogism have been shaped by a bad theory of inference is motive enough to consider argumentative support for the principle and to investigate deductive systems for relevance logic.
A natural deductive system for relevance logic has the means for tracking the relevance quotient of the steps used in a proof and allows the application of an introduction rule in the step from A to B "only when A is relevant to B in the sense that A is used in arriving at B" (Anderson and Belnap 1962, p. 90). Consider the following proof in system N.
1.  Admires(evan, paige)  Basis  
2.  ~Married(beth, matt)  Assumption  
3.  Admires(evan, paige)  Reit: 1  
4.  (~Married(beth, matt) → Admires(evan, paige))  →Intro: 2–3 
Recall that the rationale behind the →Intro rule is that we may derive a conditional if we derive the consequent Q from the assumption of the antecedent P, and, perhaps, other sentences occurring earlier in the proof on wider proof margins. The defect of this rule, according to Anderson and Belnap, is that "from" in "from the assumption of the antecedent P" is not taken seriously. They seem to have a point. By the lights of the →Intro rule, we have derived line 4, but it is hard to see how we have derived the sentence at line 3 from the assumption at step 2 when we have simply reiterated the basis at line 3. Clearly, '~Married(beth, matt)' was not used in inferring 'Admires(evan, paige)' at line 3. The relevance logician claims that the →Intro rule in a correct natural deductive system should not make it possible to prove a conditional when the consequent was arrived at independently of the antecedent. A typical strategy is to use classes of numerals to mark the relevance conditions of basis sentences and assumptions, and to formulate the Intro and Elim rules to tell us how an application of a rule transfers the numerical subscript(s) from the sentences used to the sentence derived with the help of the rule. Label the basis sentences, if any, with distinct numerical subscripts. Let a, b, c, etc., range over classes of numerals. The →-rules for a relevance natural deductive system may be represented as follows.
→Elim
k.  (P → Q)_{a}  
l.  P_{b}  
m.  Q_{a∪b}  →Elim: k, l 
→Intro
k.  P_{{k}}  Assumption  
.  
.  
.  
m.  Q_{b}  
n.  (P → Q)_{b – {k}}  →Intro: k–m, provided k∈b  
The numerical subscript of the assumption at line k must be new to the proof. This is insured by using the line number for the subscript. 
In the directions for the →Intro rule, the proviso that k∈b insures that the antecedent P is used in deriving the consequent Q. Anderson and Belnap require that, if the line that results from the application of either rule is the conclusion of the proof, the relevance markers be discharged. Here is a sample proof showing the above two rules in action.
1.  Admires(evan, paige)_{1}  Assumption  
2.  (Admires(evan, paige) → ~Married(beth, matt))_{2}  Assumption  
3.  ~Married(beth, matt)_{1, 2}  →Elim: 1,2  
4.  ((Admires(evan, paige) → ~Married(beth, matt)) → ~Married(beth, matt))_{1}  →Intro: 2–3  
5.  (Admires(evan, paige) → ((Admires(evan, paige) → ~Married(beth, matt)) → ~Married(beth, matt)))  →Intro: 1–4 
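The bookkeeping in these rules, subscripts as sets of line numbers that →Elim unions and →Intro discharges only when the assumption was actually used, can be sketched in a few lines of Python. The function names are invented for illustration.

```python
# Tracking relevance subscripts as sets of line numbers, per the
# ->Elim and ->Intro rules above: ->Elim unions the subscripts of its
# premises; ->Intro removes the assumption's subscript, provided it
# actually occurs in the consequent's subscript set. Sketch only.

def arrow_elim_subscript(a, b):
    """Subscript of Q derived by ->Elim from (P -> Q)_a and P_b."""
    return a | b

def arrow_intro_subscript(assumption_line, b):
    """Subscript of (P -> Q) by ->Intro over a subproof whose
    assumption is at assumption_line, with consequent Q_b; legal only
    when the assumption was used, i.e., assumption_line is in b."""
    if assumption_line not in b:
        raise ValueError("fallacy of relevance: antecedent unused")
    return b - {assumption_line}

# The sample proof: lines 1 and 2 are assumptions subscripted {1}, {2}.
line3 = arrow_elim_subscript({2}, {1})   # line 3 gets {1, 2}
line4 = arrow_intro_subscript(2, line3)  # discharge 2: line 4 gets {1}
line5 = arrow_intro_subscript(1, line4)  # discharge 1: line 5 gets {}
assert line3 == {1, 2} and line4 == {1} and line5 == set()

# The irrelevant proof of (~Married(...) -> Admires(...)) is blocked:
# the reiterated line keeps subscript {1}, so discharging the
# assumption at line 2 fails.
try:
    arrow_intro_subscript(2, {1})
    raise AssertionError("should have been blocked")
except ValueError:
    pass
```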
For further discussion see Anderson and Belnap (1962). For a comprehensive discussion of relevance deductive systems see their (1975). For a more up-to-date review of the relevance logic literature see Dunn (1986).
We now consider the correctness of the ~Elim rule and consider the rule in the context of using it along with the ~Intro rule.
~Intro
k.  P  Assumption  
.  
.  
.  
m.  ⊥  
n.  ~P  ~Intro: k–m 
~Elim
k.  ~~P  
m.  P  ~Elim: k 
Here is a typical use in classical logic of the ~Intro and ~Elim rules. Suppose that we derive a contradiction from the assumption that a sentence P is true. So, if P were true, then a contradiction would be true, which is impossible. So P cannot be true and we may infer that notP. Similarly, suppose that we derive a contradiction from the assumption that notP. Since a contradiction cannot be true, notP is not true. Then we may infer that P is true by ~Elim.
The intuitionist logician rejects the final step of this reasoning. If a contradiction is derived from notP we may infer that notP is not true, that is, that notnotP is true, but it is incorrect to infer that P is true. Why? Because the intuitionist rejects the presupposition behind the ~Elim rule, which is that for any proposition P there are two alternatives: P and notP. The grounds for this are the intuitionistic conceptions of truth and meaning.
According to intuitionistic logic, truth is an epistemic notion: the truth of a sentence P consists of our ability to verify it. To assert P is to have a proof of P, and to assert notP is to have a refutation of P. This leads to an epistemic conception of the meaning of the logical constants. The meaning of a logical constant is characterized in terms of its contribution to the criteria of proof for the sentences in which it occurs. Compare with classical logic: the meaning of a logical constant is semantically characterized in terms of its contribution to the determination of the truth conditions of the sentences in which it occurs. For example, the classical logician accepts a sentence of the form P v Q only when she accepts that at least one of the disjuncts is true. On the other hand, the intuitionistic logician accepts P v Q only when she has a method for proving P or a method for proving Q. But then the law of excluded middle no longer holds, because a sentence of the form P or notP is true, that is, assertible, only when we are in a position to prove or refute P, and we lack the means for verifying or refuting all sentences. The alleged problem with the ~Elim rule is that it illegitimately extends the grounds for asserting P on the basis of notnotP, since a refutation of notP is not ipso facto a proof of P.
Since there are finitely many McKeons and the predicates of language M seem well defined, we can work through the domain of the McKeons to verify or refute any M-sentence, and so there doesn't seem to be an M-sentence that is neither verifiable nor refutable. However, consider a language about the natural numbers. Any sentence that results by substituting numerals for the variables in 'x = y + z' is decidable. This is to say that for any natural numbers x, y, and z, we have an effective procedure for determining whether or not x is the sum of y and z. Hence, for all x, y, and z, either we may assert that x = y + z or we may assert the contrary. Let 'A(x)' abbreviate 'if x is even and greater than 2, then there exist primes y and z such that x = y + z'. Since there are algorithms for determining of any number whether or not it is even, greater than 2, or prime, the hypothesis that the open formula 'A(x)' is satisfied by a given natural number is decidable, for we can effectively determine, for all smaller numbers, whether or not they are prime. However, there is no known method for verifying or refuting Goldbach's conjecture: for all x, A(x). Even though, for each numeral n standing for a natural number, the sentence A(n) is decidable (that is, we can determine which of A(n) or notA(n) is true), the sentence 'for all x, A(x)' is not. That is, we are not in a position to hold that either Goldbach's conjecture is true or that it is not. Clearly, verification of the conjecture via an exhaustive search of the domain of natural numbers is not possible since the domain is nonfinite. Minus a counterexample or proof of Goldbach's conjecture, the intuitionist demurs from asserting that either Goldbach's conjecture is true or it is not. This is just one of many examples where the intuitionist thinks that the law of excluded middle fails.
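The decidability of each instance A(n) can be made concrete: settling A(n) for a particular n requires only a finite search among smaller numbers. A Python sketch (the function names are illustrative):

```python
# Deciding the Goldbach instance A(n) -- 'if n is even and greater
# than 2, then n is the sum of two primes' -- by finite search.

def is_prime(k):
    """Trial-division primality test; finite calculation for each k."""
    return k > 1 and all(k % d for d in range(2, int(k ** 0.5) + 1))

def A(n):
    """Decide A(n) by exhaustive search over numbers below n."""
    if n <= 2 or n % 2 != 0:
        return True  # antecedent fails, so the conditional holds
    return any(is_prime(y) and is_prime(n - y) for y in range(2, n))

# Each instance is decidable -- every n up to 100 checks out -- but no
# finite amount of such checking settles 'for all x, A(x)'.
assert all(A(n) for n in range(1, 101))
```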
In sum, the legitimacy of the ~Elim rule requires a realist conception of truth as verification-transcendent. On this conception, sentences have truth-values independently of the possibility of a method for verifying them. Intuitionistic logic abandons this conception of truth in favor of an epistemic conception according to which the truth of a sentence turns on our ability to verify it. Hence, the inference rules of an intuitionistic natural deductive system must be coded in such a way as to reflect this notion of truth. For example, consider an intuitionistic language in which a, b, ... range over proofs, 'a: P' stands for 'a is a proof of P', and '(a, b)' stands for some suitable pairing of the proofs a and b. The &-rules of an intuitionistic natural deductive system may look like the following:
&Intro
k.  a: P  
l.  b: Q  
m.  (a, b): (P & Q)  &Intro: k, l 
&Elim
k.  (a, b): (P & Q)        k.  (a, b): (P & Q)  
m.  a: P  &Elim: k         m.  b: Q  &Elim: k 
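Under the proof interpretation, a proof of a conjunction just is a pairing of proofs, and the two &Elim rules are the corresponding projections. The following Python sketch uses tuples as a stand-in for the pairing operation; the function names and the placeholder proof tokens are purely illustrative.

```python
# The proof interpretation of the &-rules, with tuples standing in
# for proof pairings: &Intro pairs a proof of P with a proof of Q,
# and the two &Elim rules are the projections.

def and_intro(a, b):
    """From a: P and b: Q, form (a, b): (P & Q)."""
    return (a, b)

def and_elim_left(pair):
    """From (a, b): (P & Q), recover a: P."""
    return pair[0]

def and_elim_right(pair):
    """From (a, b): (P & Q), recover b: Q."""
    return pair[1]

proof_of_p = "a"   # placeholder tokens standing for proofs
proof_of_q = "b"
both = and_intro(proof_of_p, proof_of_q)
assert and_elim_left(both) == "a" and and_elim_right(both) == "b"
```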
Apart from the negation rules, it is fairly straightforward to dress the Intro and Elim rules of N with a proof interpretation, as is illustrated above with the &-rules. For the details see Van Dalen (1999). For further introductory discussion of the philosophical theses underlying intuitionistic logic see Read (1995) and Shapiro (2000). Tennant (1997) offers a more comprehensive discussion and defense of the philosophy of language underlying intuitionistic logic.
We now turn to the ∃Intro and ∀Elim rules. Consider the following two inferences.
(1)  Male(evan)  
(therefore)  (2)  ∃x Male(x) 

(3)  ∀x Male(x)  
(therefore)  (4)  Male(evan) 
Both are correct by the lights of our system N. Specifically, (2) is derivable from (1) by the ∃Intro rule and we get (4) from (3) by the ∀Elim rule. Note an implicit assumption required for the legitimacy of these inferences: every individual constant refers to an element of the quantifier domain. If this existence assumption, which is built into the semantics for M and reflected in the two quantifier rules, is rejected, then the inferences are unacceptable. What motivates rejecting the existence assumption and denying the correctness of the above inferences?
There are contexts in which singular terms are used without assuming that they refer to existing objects. For example, it is perfectly reasonable to regard the individual constants of a language used to talk about myths and fairy tales as not denoting existing objects. It seems inappropriate to infer that some actually existing individual is jolly on the basis that the sentence Santa Claus is jolly is true. Also, the logic of a language used to debate the existence of God should not presuppose that God refers to something in the world. The atheist doesn't seem to be contradicting herself in asserting that God does not exist. Furthermore, there are contexts in science where introducing an individual constant for an allegedly existing object such as a planet or particle should not require the scientist to know that the purported object to which the term allegedly refers actually exists. A logic that allows non-denoting individual constants (terms that do not refer to existing things) while maintaining the existential import of the quantifiers ('∀x' and '∃x' mean something like 'for all existing individuals x' and 'for some existing individuals x', respectively) is called a free logic. In order for the above two inferences to be correct by the lights of free logic, the sentence Evan exists must be added to the basis. Correspondingly, the ∃Intro and ∀Elim rules in a natural deductive system for free logic may be portrayed as follows. Again, let Ωv be a formula in which v is the only free variable, and let n be any name.
∀Elim
k.  ∀vΩv
l.  E!n
m.  Ωn    ∀Elim: k, l

∃Intro
k.  Ωn
l.  E!n
m.  ∃vΩv    ∃Intro: k, l
'E!n' abbreviates 'n exists', and so we suppose that 'E!' is an item of the relevant language. The ∀Intro and ∃Elim rules in a free-logic deductive system also make explicit the required existential presuppositions with respect to individual constants (for details see Bencivenga 1986, p. 387). Free logic seems to be a useful tool for representing and evaluating reasoning in contexts such as the above. Different types of free logic arise depending on whether we treat terms that do not denote existing individuals as denoting objects that do not actually exist or as simply not denoting at all.
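To make the mechanical character of these rules vivid, here is a minimal sketch (not from the article) of how a deductive system's rules can be checked "purely by inspection and calculation". Formulas are represented as tuples, and the function names and encoding ('forall', 'exists', 'E!') are our own illustrative choices, not a standard implementation:

```python
# A toy checker for the free-logic quantifier rules. The point is that
# whether a step counts as a correct application of a rule is decidable
# by pure symbol manipulation, with no appeal to meaning.

def subst(body, var, name):
    """Replace occurrences of a variable with a name in a tuple-encoded formula."""
    if body == var:
        return name
    if isinstance(body, tuple):
        return tuple(subst(part, var, name) for part in body)
    return body

def check_forall_elim(universal, existence, conclusion):
    """Free-logic ∀Elim: from ∀vΩv and E!n, conclude Ωn."""
    if not (isinstance(universal, tuple) and universal[0] == 'forall'):
        return False
    if not (isinstance(existence, tuple) and existence[0] == 'E!'):
        return False
    _, v, body = universal
    n = existence[1]
    return conclusion == subst(body, v, n)

def check_exists_intro(instance, existence, conclusion):
    """Free-logic ∃Intro: from Ωn and E!n, conclude ∃vΩv."""
    if not (isinstance(existence, tuple) and existence[0] == 'E!'):
        return False
    if not (isinstance(conclusion, tuple) and conclusion[0] == 'exists'):
        return False
    _, v, body = conclusion
    n = existence[1]
    return instance == subst(body, v, n)

# From ∀x Jolly(x) and E!santa, infer Jolly(santa): licensed.
univ = ('forall', 'x', ('Jolly', 'x'))
print(check_forall_elim(univ, ('E!', 'santa'), ('Jolly', 'santa')))  # True
# With a non-matching existence premise, the step is not licensed:
print(check_forall_elim(univ, ('E!', 'zeus'), ('Jolly', 'santa')))   # False
```

Note that, unlike in system N, the checker refuses an instantiation or generalization step unless the matching existence premise E!n is actually among the cited lines; this is exactly the difference the free-logic rules encode.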
In sum, there are contexts in which it is appropriate to use languages whose vocabulary and syntactic formation rules are independent of our knowledge of the actual existence of the entities the language is about. In such languages, the quantifier rules of deductive system N sanction incorrect inferences, and so at best N represents correct deductive reasoning in languages for which the existential presupposition with respect to singular terms makes sense. The proponent of system N may argue that only those expressions guaranteed a referent (e.g., demonstratives) are truly singular terms. On this view, advocated by Bertrand Russell at one time, expressions that may lack a referent, such as 'Santa Claus', 'God', 'Evan', 'Bill Clinton', and 'the child abused by Michael Jackson', are not genuinely singular expressions. For example, in the sentence 'Evan is male', 'Evan' abbreviates a definite description such as 'the son of Matt and Beth'. Then 'Evan is male' comes to
There exists a unique x such that x is a son of Matt and Beth and x is male.
From this we may correctly infer that some are male. The representation of this inference in N appeals to both the ∃Intro and ∃Elim rules, as well as the &Elim rule. However, treating most singular expressions as disguised definite descriptions at worst generates counterintuitive truth-value assignments ('Santa Claus is jolly' turns out false since there is no Santa Claus) and seems at best an unnatural response to the criticism posed from the vantage point of free logic.
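The Russellian inference just described can be put as a small formal derivation. The following is a rough sketch in Lean 4 (our own formalization, not part of the article), where S stands for 'is a son of Matt and Beth' and M for 'is male'; the hypothesis h is the Russellian reading of 'Evan is male', and the conclusion is 'some are male':

```lean
-- Russellian reading of "Evan is male":
--   ∃ x, (S x ∧ ∀ y, S y → y = x) ∧ M x
-- From it, "some are male" (∃ x, M x) follows by ∃Elim, &Elim, ∃Intro.
example {α : Type} (S M : α → Prop)
    (h : ∃ x, (S x ∧ ∀ y, S y → y = x) ∧ M x) : ∃ x, M x :=
  match h with
  | ⟨x, ⟨_, hMx⟩⟩ => ⟨x, hMx⟩  -- discard uniqueness, keep M x, re-introduce ∃
```

Note that the existence and uniqueness conjunct is simply discarded: only the ∃Elim, &Elim, and ∃Intro steps mentioned above are needed.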
For a short discussion of the motives behind free logic and a review of the family of free logics see Read (1995, chap. 5). For a more comprehensive discussion and a survey of the relevant literature see Bencivenga (1986). Morscher and Hieke (2001) is a collection of recent essays devoted to taking stock of the past fifty years of research in free logic and outlining new directions.
This completes our discussion of the deductive-theoretic conception of logical consequence. Since, arguably, logical consequence conceived deductive-theoretically is not compact, it cannot be defined in terms of deducibility in a correct deductive system. Nevertheless, correct deductive systems are useful for modeling deductive reasoning, and they have applications in areas such as computer science and mathematics. Is deductive system N correct? In other words: do the Intro and Elim rules of N represent correct principles of inference? We sketched three motives for answering in the negative, each leading to a logic that differs from the classical one developed here and that requires altering the Intro and Elim rules of N. It is clear from the discussion that any full coverage of the topic would have to engage philosophical issues, still a matter of debate, such as the nature of truth, meaning, and inference. For a comprehensive and very readable survey of proposed revisions to classical logic (those discussed here and others) see Haack (1996). For discussion of related issues, see also the entries "Logical Consequence, Philosophical Considerations" and "Logical Consequence, Model-Theoretic Conceptions" in this encyclopedia.
Matthew McKeon
Email: mckeonm@msu.edu
Michigan State University
U. S. A.
Article printed from Internet Encyclopedia of Philosophy: http://www.iep.utm.edu/logcond/
Copyright © The Internet Encyclopedia of Philosophy. All rights reserved.