Paraconsistent Logic

A paraconsistent logic is a way to reason about inconsistent information without lapsing into absurdity. In a non-paraconsistent logic, inconsistency explodes in the sense that if a contradiction obtains, then everything (everything!) else obtains, too. Someone reasoning with a paraconsistent logic can begin with inconsistent premises—say, a moral dilemma, a Kantian antinomy, or a semantic paradox—and still reach sensible conclusions, without completely exploding into incoherence.

Paraconsistency is a thesis about logical consequence: not every contradiction entails arbitrary absurdities. Beyond that minimal claim, the views and mechanics of paraconsistent logic range across a broad spectrum, from weak to strong, as follows.

On the very weak end, paraconsistent logics are taken to be safeguards to control for human fallibility. We inevitably revise our theories, have false beliefs, and make mistakes; to prevent falling into incoherence, a paraconsistent logic is required. Such modest and conservative claims say nothing about truth per se. Weak paraconsistency is still compatible with the thought that if a contradiction were true, then everything would be true, too—because, beliefs and theories notwithstanding, contradictions cannot be true.

On the very strong end of the spectrum, paraconsistent logics underwrite the claim that some contradictions really are true. This thesis—dialetheism—is that sometimes the best theory (of mathematics, or metaphysics, or even the empirical world) is contradictory. Paraconsistency is mandated because the dialetheist still maintains that not everything is true. In fact, strong paraconsistency maintains that all contradictions are false—even though some contradictions also are true. Thus, at this end of the spectrum, dialetheism is itself one of the true contradictions.

This article offers a brief discussion of some main ideas and approaches to paraconsistency. Modern logics are couched in the language of mathematics and formal symbolism. Nevertheless, this article is not a tutorial on the technical aspects of paraconsistency, but rather a synopsis of the underlying ideas. See the suggested readings for formal expositions, as well as historical material.

Table of Contents

  1. The Problem
  2. Logical Background
    1. Definitions
    2. Two Grades of Paraconsistency
    3. Requirements for a Logic to Be Paraconsistent
  3. Schools of Paraconsistent Logic
    1. Discussive Logic
    2. Preservationism
    3. Adaptive Logic
    4. Relevance
    5. Logics of Formal Inconsistency
    6. Dialetheism
  4. Applications
    1. Moral Dilemmas
    2. Law, Science, and Belief Revision
    3. Closed Theories – Truth and Sets
      1. Naïve Axioms
      2. Further Logical Restrictions
    4. Learning, Beliefs, and AI
  5. Conclusion
  6. References and Further Reading

1. The Problem

Consider an example due to Alan Weir, concerning a political leader who absolutely, fundamentally believes in the sanctity of human life, and so believes that war is always wrong. All the same, a situation arises where her country must enter into war (else people will die, which is wrong). Entering into war will inevitably mean that some people will die. Plausibly, the political leader is now embroiled in a dilemma. This is exactly the kind of situation in which paraconsistent inference is appropriate. Imagine, by contrast, our leader reasoning: ‘War is always wrong, but since we are going to war anyway, we may as well bomb civilians.’ Absurdist reasoning of this sort is not only bad logic, but just plain old bad.

David Hume once wrote (1740, p. 633),

I find myself involv’d in such a labyrinth, that, I must confess, I neither know how to correct my former opinions, nor how to render them consistent.

As Schotch and Jennings rightly point out, ‘it is no good telling Hume that if his inconsistent opinions were, all of them, true then every sentence would be true.’ The best we could tell Hume is that at least some of his opinions are wrong—but ‘this, so far from being news to Hume, was what occasioned much of the anguish he evidently felt’ (Schotch et al. p. 23). We want a way to remain sensible and reasonable even when—especially when—such problems arise. We need a way to keep from falling to irrational pieces when life, logic, mathematics or even philosophy leads us into paradox and conundrum. That is what paraconsistent logics are for.

2. Logical Background

a. Definitions

A logic is a set of well-formed formulae, along with an inference relation ⊢. The inference relation, also called logical consequence, may be specified syntactically or semantically, and tells us which formulae (conclusions) follow from which formulae (premises). When a sentence B follows from a bunch of sentences A0, A1, …, An, we write

A0, A1, …, An ⊢ B.

When the relation ⊢ holds, we say that the inference is valid. The set of all sentences that can be validly inferred from a given set of premises is called a theory.

A key distinction behind the entire paraconsistent enterprise is that between consistency and coherence. A theory is consistent if no pairs of contradictory sentences A, ¬A are derivable, or alternatively iff no single sentence of the form A & ¬A is derivable. Coherence is a broader notion, sometimes called absolute (as opposed to simple) consistency, and more often called non-triviality. A trivial or absurd theory is one in which absolutely every sentence holds. The idea of paraconsistency is that coherence is possible even without consistency. Put another way, a paraconsistent logician can say that a theory is inconsistent without meaning that the theory is incoherent, or absurd. The former is a structural feature of the theory, worth repair or further study; the latter means the theory has gone disastrously wrong. Paraconsistency gives us a principled way to resist equating contradiction with absurdity.

Classical logic, the logic developed by Boole, Frege, Russell et al. in the late 1800s, and the logic almost always taught in university courses, has an inference relation according to which

A, ¬A ⊢ B

is valid. Here the conclusion, B, could be absolutely anything at all. Thus this inference is called ex contradictione quodlibet (from a contradiction, everything follows) or explosion. Paraconsistent logicians have urged that this feature of classical inference is incorrect. While the reasons for denying the validity of explosion will vary according to one’s view of the role of logic, a basic claim is that the move from a contradiction to an arbitrary formula does not seem like reasoning. As the founders of relevant logic, Anderson and Belnap, urge in their canonical book Entailment, a ‘proof’ submitted to a mathematics journal in which the essential steps fail to provide a reason to believe the conclusion, e.g. a proof by explosion, would be rejected out of hand. Mark Colyvan (2008) illustrates the point by noting that no one has laid claim to a startlingly simple proof of the Riemann hypothesis:

Riemann’s Hypothesis: All the non-trivial zeros of the zeta function have real part equal to 1/2.
Proof: Let R stand for the Russell set, the set of all sets that are not members of themselves. It is straightforward to show that this set is both a member of itself and not a member of itself. Therefore, all the non-trivial zeros of Riemann’s zeta function have real part equal to 1/2.

Needless to say, the Riemann hypothesis remains an open problem at time of writing.
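
To make the complaint concrete, here is a minimal brute-force check of classical validity by truth tables. The tuple encoding of formulas and the function names are illustrative choices only, not standard notation; the sketch simply confirms that explosion, and the disjunctive syllogism discussed in §2c below, are classically valid.

```python
from itertools import product

# Formulas as nested tuples: ('atom', 'A'), ('not', f), ('and', f, g), ('or', f, g).
def atoms(f):
    return {f[1]} if f[0] == 'atom' else set().union(*map(atoms, f[1:]))

def ev(f, v):
    # Evaluate a formula under a classical valuation v: a dict from atom names to True/False.
    op = f[0]
    if op == 'atom': return v[f[1]]
    if op == 'not':  return not ev(f[1], v)
    if op == 'and':  return ev(f[1], v) and ev(f[2], v)
    if op == 'or':   return ev(f[1], v) or ev(f[2], v)

def classically_valid(premises, conclusion):
    # Valid iff no valuation makes every premise true and the conclusion false.
    names = sorted(set().union(*map(atoms, list(premises) + [conclusion])))
    for vals in product([True, False], repeat=len(names)):
        v = dict(zip(names, vals))
        if all(ev(p, v) for p in premises) and not ev(conclusion, v):
            return False
    return True

A, B = ('atom', 'A'), ('atom', 'B')
print(classically_valid([A, ('not', A)], B))             # True: explosion is classically valid
print(classically_valid([('or', A, B), ('not', A)], B))  # True: so is disjunctive syllogism
```

Paraconsistent logics reject the first verdict; as §2c explains, that forces them to reconsider the second as well.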

Minimally, paraconsistent logicians claim that there are or may be situations in which paraconsistency is a viable alternative to classical logic. This is a pluralist view, by which different logics are appropriate to different areas. Just as a matter of practical value, explosion does not seem like good advice for a person who is faced with a contradiction, as the quote from Hume above makes clear. More forcefully, paraconsistent logics make claim to being a better account of logic than the classical apparatus. This is closer to a monistic view, in which there is, essentially, one correct logic, and it is paraconsistent.

b. Two Grades of Paraconsistency

Let us have a formal definition of paraconsistency.

Definition 1. A logic is paraconsistent iff it is not the case for all sentences A, B that A, ¬A ⊢ B.

This definition simply is the denial of ex contradictione quodlibet; a logic is paraconsistent iff it does not validate explosion. The definition is neutral as to whether any inconsistency will ever arise. It only indicates that, were an inconsistency to arise, this would not necessarily lead to inferential explosion. In the next definition, things are a little different:

Definition 2. A logic is paraconsistent iff there are some sentences A, B such that ⊢ A and ⊢ ¬A, but not ⊢ B.

A logic that is paraconsistent in the sense of definition 2 automatically satisfies definition 1. But the second definition suggests that there are actually inconsistent theories. The idea is that, in order for explosion to fail, one needs to envisage circumstances in which contradictions obtain. The difference between the definitions is subtle, but it will help us distinguish between two main gradations of paraconsistency, weak and strong.

Roughly, weak paraconsistency is the cluster concept that

  • any apparent contradictions are always due to human error;
  • classical logic is preferable, and in a better world where humans did not err, we would use classical logic;
  • no true theory would ever contain an inconsistency.

Weak paraconsistent logicians see their role as akin to doctors or mechanics. Sometimes information systems develop regrettable but inevitable errors, and paraconsistent logics are tools for damage control. Weak paraconsistentists look for ways to restore consistency to the system or to make the system work as consistently as possible. Weak paraconsistentists have the same view, more or less, of contradictions as do classical logicians.

On the other side, strong paraconsistency includes ideas like

  • some contradictions may not be errors;
  • classical logic is wrong in principle;
  • some true theories may actually be inconsistent.

A strong paraconsistentist considers relaxing the law of non-contradiction in some way, either by dropping it entirely, so that ¬(A & ¬A) is not a theorem, or by holding that the law can itself figure into contradictions, of the form

Always, not (A and not A),
and sometimes, both A and not A.

Strong paraconsistentists may be interested in inconsistent systems for their own sake, rather like a mathematician considering different non-Euclidean systems of geometry, without worry about the ‘truth’ of the systems; or a strong paraconsistentist may expect that inconsistent systems are true and accurate descriptions of the world, like a physicist considering a non-Euclidean geometry as the actual geometry of space.

It is important to keep weak paraconsistency distinct from logical pluralism, and strong paraconsistency or dialetheism (see §3f.) distinct from logical monism. For example, one can well be a weak paraconsistentist, insofar as one claims that explosion is invalid, even though there are no true contradictions, and at the same time a logical monist, holding that the One True Logic is paraconsistent. This was the position of the fathers of relevance logic, Anderson and Belnap, for instance. Similarly, one could be a dialetheist and a logical pluralist, as is the contemporary philosophical logician Jc Beall (see suggested readings).

c. Requirements for a Logic to be Paraconsistent

All approaches to paraconsistency seek inference relations that do not explode. Sometimes this is accomplished by going back to basics, developing new and powerful ideas about the meaning of logical consequence, and checking that these ideas naturally do not lead to explosion (e.g. relevance logic, §3d). More often paraconsistency is accomplished by looking at what causes explosion in classical inference, and simply removing the causes. In either case, there are some key constraints on a paraconsistent logic that we should look at up front.

Of course, the main requirement is to block the rule of explosion. This is not really a limitation, since explosion is prima facie invalid anyway. But we cannot simply remove the inference of explosion from classical logic and automatically get a paraconsistent logic. The reason for this, and the main, serious constraint on a paraconsistent logic, was pointed out by C. I. Lewis in the early twentieth century. Suppose we have both A and ¬A as premises. If we have A, then we have that either A or B, since a disjunction only requires that one of its disjuncts holds. But then, given ¬A, it seems that we have B, since if either A or B, but not A, then B. Therefore, from A and ¬A, we have deduced B. The problem is that B is completely arbitrary—an absurdity. So if it is invalid to infer everything from a contradiction, then this rule, called disjunctive syllogism,

A ∨ B, ¬A ⊢ B,

must be invalid, too.
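
Spelled out as a short derivation, Lewis’s argument runs:

1) A [premise]
2) ¬A [premise]
3) A ∨ B [from 1, since a disjunction needs only one of its disjuncts]
4) B [from 2, 3, by disjunctive syllogism]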

There are two things to remark about the failure of disjunctive syllogism (DS).

First, we might say that classical logic runs into trouble when it comes to inconsistent situations. This is something like the way Newtonian physics makes bad predictions when it comes to the large-scale structure of space-time. And so similarly, as Newtonian physics is still basically accurate and applicable on medium-sized domains, we can say that classical logic is still accurate and appropriate in consistent domains. For working out sudoku puzzles, paying taxes, or solving murder mysteries, there is nothing wrong with classical reasoning. For exotic objects like contradictions, though, classical logic is unprepared.

Secondly, since DS is a valid classical inference, we can see clearly that a paraconsistent logic will validate fewer inferences than classical logic. (No classically invalid inferences are going to become valid by dint of inconsistent information.) That is the whole idea—that classical logic allows too much, and especially given the possibility of inconsistency, we must be more discriminating. This is sometimes expressed by saying that paraconsistent logics are ‘weaker’ than classical logic; but since paraconsistent logics are more flexible and apply to more situations, we needn’t focus too much on the slang. Classical logic is in many ways more limited than paraconsistent logic (see §4c.).

A third point, which we will take up in §3d, is that the invalidity of DS shows, essentially, that for the basic inference of modus ponens to be valid in all situations, we need a new logical connective for implication, not defined in terms of disjunction and negation. Now we turn to some weak and strong systems of paraconsistency.

3. Schools of Paraconsistent Logic

a. Discussive Logic

The first paraconsistent logic was developed by Jaśkowski, a student of Łukasiewicz, in Poland in 1948. He gave some basic criteria for a paraconsistent logic:

To find a system of sentential calculus which:
1) when applied to contradictory systems would not entail their triviality;
2) would be rich enough to enable practical inference;
3) would have intuitive justification.

To meet his own criteria, Jaśkowski’s idea is to imagine a group of people having a discussion, some of whom are disagreeing with each other. One person asserts: ‘Wealth should be distributed equally amongst all persons.’ Another person says, ‘No, it should not; everyone should just have what he earns.’ The group as a whole is now in an inconsistent information state. We face such states all the time: reading news articles, blogs, and opinion pieces, we take in contradictions (even if each article is internally consistent, which is unusual). How to reason about conflicting information like this?

Jaśkowski’s idea is to prevent the inconsistent information from co-mingling. He does so, in effect, by blocking the rule of adjunction:

A, B ⊢ A & B.

This rule says that, given two premises A and B, we can conjoin them into a single statement, (A & B). If the adjunction rule is removed, then we can have A and ¬A, without deriving a full-blown contradiction A & ¬A. The information is kept separate. On this approach, the classical rule of explosion actually can still hold, in the form

A & ¬A ⊢ B.

The aim of this approach is not to block explosion from a single contradictory sentence, but rather to ensure that no such contradictory sentence (as opposed to a pair of jointly inconsistent sentences) can ever be derived. So while the inconsistency arising from different disagreeing parties can be made coherent sense of, a person who is internally contradictory is still reckoned to be absurd.

In 1974, Rescher and Brandom suggested a very similar approach, in terms of worlds. As Belnap has pointed out, the non-adjunctive idea has obvious applications to computer science, for example when a large amount of polling data is stored by a system.

b. Preservationism

Around 1978, the Canadian logicians Schotch and Jennings developed an approach to modal logic and paraconsistency that has some close affinities with the discussive approach. Their approach is now known as the preservationist school. The fundamental idea is that, given an inconsistent collection of premises, we should not try to reason about the collection of premises as a whole, but rather focus on internally consistent subsets of premises. Like discussive logics, preservationists see an important distinction between an inconsistent data set, like

{A, ¬A},

which is considered tractable, versus an outright contradiction like

A & ¬A,

which is considered hopeless. The whole idea is summarized in a paraphrase of Gillman Payette, a major contributor to the preservationist program:

Question: How do you reason from an inconsistent set of premises?
Answer: You don’t, since every formula follows in that case. You reason from consistent subsets of premises.

Preservationists begin with an already defined logic X, usually classical logic. They assert that we, as fallible humans, are simply sometimes ‘stuck with bad data’; and this being the case, some kind of repair is needed on the logic X to ensure coherence. Preservationists define the level of a set of premises to be the least number of cells into which the set must be divided for every cell to be internally consistent. They then define an inference relation, called forcing, in terms of the logic X, as follows:

A set of sentences Γ forces A iff, however Γ is divided into level-many internally consistent cells, there is at least one cell Δ such that A is an X-valid inference from Δ.

Forcing preserves the level of Γ. If there is any consistency to preserve, forcing ensures that things do not get any more inconsistent. In particular, if a data set is inconsistent but contains no single-sentence contradictions, then the forcing relation is paraconsistent.
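
To see how level and forcing behave, here is a small brute-force sketch. It uses classical logic as the background logic X, enumerates partitions explicitly, and follows the reading of forcing given above; the formula encoding and function names are illustrative only, and actual preservationist systems are developed with far more generality.

```python
from itertools import product

# Formulas as nested tuples: ('atom', 'A'), ('not', f), ('and', f, g), ('or', f, g).
def atoms(f):
    return {f[1]} if f[0] == 'atom' else set().union(*map(atoms, f[1:]))

def ev(f, v):
    op = f[0]
    if op == 'atom': return v[f[1]]
    if op == 'not':  return not ev(f[1], v)
    if op == 'and':  return ev(f[1], v) and ev(f[2], v)
    if op == 'or':   return ev(f[1], v) or ev(f[2], v)

def valuations(sentences):
    names = sorted(set().union(*map(atoms, sentences))) if sentences else []
    for vals in product([True, False], repeat=len(names)):
        yield dict(zip(names, vals))

def entails(premises, conclusion):        # classical consequence, by truth tables
    return all(ev(conclusion, v) for v in valuations(list(premises) + [conclusion])
               if all(ev(p, v) for p in premises))

def consistent(cell):                     # classically satisfiable
    return any(all(ev(p, v) for p in cell) for v in valuations(list(cell)))

def partitions(items):                    # every way of dividing a list into non-empty cells
    if not items:
        yield []
        return
    head, rest = items[0], items[1:]
    for p in partitions(rest):
        for i in range(len(p)):
            yield p[:i] + [[head] + p[i]] + p[i + 1:]
        yield [[head]] + p

def level(gamma):
    good = [p for p in partitions(list(gamma)) if all(consistent(c) for c in p)]
    return min(len(p) for p in good) if good else None    # None: some single sentence is contradictory

def forces(gamma, conclusion):
    # Gamma forces A iff every division of gamma into level-many consistent cells
    # has some cell that classically entails A (a brute-force rendering of the definition above).
    lvl = level(gamma)
    if lvl is None:
        return True   # with a single contradictory sentence, this toy relation collapses to triviality
    return all(any(entails(c, conclusion) for c in p)
               for p in partitions(list(gamma))
               if len(p) == lvl and all(consistent(c) for c in p))

A, B, C = ('atom', 'A'), ('atom', 'B'), ('atom', 'C')
gamma = [A, ('not', A), B]
print(level(gamma))                                   # 2
print(forces(gamma, B))                               # True: the consistent information survives
print(forces(gamma, C))                               # False: no explosion from the inconsistent pair
print(forces(gamma, ('and', A, ('not', A))))          # False: A and ¬A are never adjoined
```

The toy behaviour matches the claims above: the consistent part of the data is preserved, nothing arbitrary follows from the inconsistent pair, and the contradiction is never conjoined.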

Aside from paraconsistent applications, and roots in modal logic, preservationists have recently proved some deep theorems about logic more generally. Payette has shown, for example, that two logics are identical iff they assign any set of sentences the same level.

Detour: Chunk and Permeate

Closely related to the preservationist paradigm is a technique called chunk and permeate, developed by Bryson Brown and Graham Priest to explain the early differential calculus of Newton and Leibniz (see inconsistent mathematics). It is known that the early calculus involved contradictions of some kind, in particular, infinitesimal numbers that are sometimes identical to zero, and other times of a non-zero quantity. Brown and Priest show how reasoning about infinitesimals (and their related notions of derivatives) can be done coherently, by breaking up the reasoning into consistent ‘chunks,’ and defining carefully controlled ‘permeations’ between the chunks. The permeations show how enough but not too much information can pass from one chunk to another, and thus reconstruct how a correct mathematical solution can obtain from apparently inconsistent data.

c. Adaptive Logic

Taking applied examples from scientific reasoning as its starting point, the adaptive logic program considers systems in which the rules of inference themselves can change as we go along. The logics are dynamic. In dynamic logics, rules of inference change as a function of what has been derived to that point, and so some sentences which were derivable at a point in time are no longer derivable, and vice versa. The program has been developed by Diderik Batens and his school in Ghent.

The idea is that our commitments may entail a belief that we nevertheless reject. This is because, as humans, our knowledge is not closed under logical consequence and so we are not fully aware of all the consequences of our commitments. When we find ourselves confronted with a problem, there may be two kinds of dynamics at work. In external dynamics, a conclusion may be withdrawn given some new information; logics in which this is allowed are called non-monotonic. External dynamics are widely recognized and are also important to the preservationist program. In internal dynamics, the premises themselves may lead to a conclusion being withdrawn. This kind of dynamic is less recognized and is more properly within the ambit of paraconsistency. Sometimes, we do derive a consequence we later reject, without modifying our convictions.

Adaptive systems work by recognizing abnormalities, and deploying formal strategies. Both of these notions are defined specifically to the task at hand; for instance, an abnormality might be an inconsistency, or it might be an inductive inference, and a strategy might be to delete a line of a proof, or to change an inference rule. The base paraconsistent logic studied by the adaptive school is called CLuN, which is all of the positive (negation-free) fragment of classical logic, plus the law of excluded middle A ∨ ¬A.

d. Relevance

Relevant logic is not fundamentally about issues of consistency and contradiction. Instead the chief motivation of relevant logic is that, for an argument to be valid, the premises must have a meaningful connection to the conclusion. For example, classical inferences like

B ⊢ A → B,

or

¬(A → B) ⊢ A,

seem to relevance logicians to fail as decent logical inferences. The requirement that premises be relevant to the conclusion delivers a paraconsistent inference relation as a byproduct, since in ex contradictione quodlibet, the premises A and ¬A do not have anything to do with an arbitrary conclusion B. Relevant logic begins with Ackermann, and was properly developed in the work of Anderson and Belnap. Many of the founders of relevant logic, such as Robert Meyer and Richard Routley, have also been directly concerned with paraconsistency.

From our perspective, one of the most important aspects of relevant logic is that it provides an implication connective that obeys modus ponens, even in inconsistent situations. In §2c, we saw that the disjunctive syllogism is not paraconsistently valid; and so in any logic in which implication is defined by negation and disjunction, modus ponens is invalid, too. That is,

A ⊃ B := ¬A ∨ B

does not, as we saw in §2c above, define a conditional that obeys

A, A ⊃ B ⊢ B.

In the argot, we say that ‘hook is not detachable’ or ‘ponenable’. In relevant logic, implication A → B is not defined with truth-functional connectives at all, but rather is defined either axiomatically or semantically (with worlds or algebraic semantics). Going this way, one can have a very robust implication connective, in which not only modus ponens is valid,

A → B, A; therefore, B.

Other widely used inferences obtain, too. Let’s just mention a few that involve negation in ways that might seem suspect from a paraconsistent point of view. We can have contraposition

A → B ⊢ ¬B → ¬A,

which gives us modus tollens

A → B, ¬B ⊢ ¬A.

With the law of non-contradiction ¬(A & ¬A), this gives us reductio ad absurdum, in two forms,

A → (B & ¬B) ⊢ ¬A,

A → ¬A ⊢ ¬A,

and consequentia mirabilis:

¬A → A ⊢ A.

Evidently the relevant arrow restores a lot of power apparently lost in the invalidity of disjunctive syllogism.

There are a great number of relevant logics differing in strength. One can do away with the laws of non-contradiction and excluded middle, giving a very weak consistent paraconsistent logic called B (for basic). Or one can add powerful negation principles as we have just seen above for inconsistent but non-trivial logics. The relevant approach was used in Meyer’s attempt to found a paraconsistent arithmetic in a logic called R# (see inconsistent mathematics). It has also been used by Brady for naïve set theory (§4c), and, more recently, Beall for truth theory. On the other hand, relevant logics validate fewer entailments than classical logic; in order for AB to be valid, we have additional requirements of relevance besides truth preservation in all possible circumstances. Because of this, it is often difficult to recapture within a relevant logic some of classical mathematical reasoning. We return to this problem in §4c below.

e. Logics of Formal Inconsistency

One of the first pioneers of paraconsistent logic was Newton C. A. da Costa in Brazil, in the 1950s. Da Costa’s interests have been largely in paraconsistent mathematics (with applications to physics), and his attitude toward paraconsistency is more open minded than some of the others we have seen. Da Costa considers the investigation of inconsistent but not trivial theories as akin to the study of non-Euclidean geometry. He has been an advocate of paraconsistency not only for its pragmatic benefits, for example in reconstructing infinitesimal calculus, but also as an investigation of novel structure for its own sake. He gives the following methodological guidelines:

  • In these calculi, the principle of contradiction should not be generally valid;
  • From two contradictory statements it should not in general be possible to deduce any statement whatever;
  • The extension of these calculi to quantification calculi should be immediate.

Note that da Costa’s first principle is not like any we’ve seen so far, and his third is more ambitious than others. His main system is an infinite hierarchy of logics known as the C systems.

The main idea of the C systems is to track which sentences are consistent and to treat these differently than sentences that may be inconsistent. Following this method, first of all, means that the logic itself is about inconsistency. The logic can model how a person can or should reason about inconsistent information. Secondly, this gives us a principled way to make our paraconsistent logic as much like classical logic as possible: When all the sentences are marked as consistent, they can be safely reasoned about in a classical way, for example, using disjunctive syllogism.

To make this work, we begin with a base logic, called C(0). When a sentence A behaves consistently in C(0), we mark it according to this definition:

A0 := ¬(A & ¬A).

Then, a strong kind of negation can be defined:

¬*A := ¬A & A0.

The logic with these two connectives added to it, we call C(1). In C(1) then we can have inferences like

¬A ∨ B, A, A0 ⊢ B.

And in the same way that we reached C(1), we could go on and define a logic C(2), with an operator A1 = (A0)0, that means something like ‘behaves consistently in C(1)’. The C systems continue up to the first transfinite ordinal, C(ω).

More recently, a broad generalization of the C-systems has been developed by Carnielli, Marcos, and others, called logics of formal inconsistency. Da Costa’s C-systems are a subclass (albeit an important one) of the much wider family of the LFIs. The C-systems are precisely the LFIs where consistency can be expressed as a unary operator.
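
The role of a consistency operator can be seen in a small model. The sketch below is not da Costa’s C(1), whose semantics is not truth-functional; it is a toy three-valued logic in the broad spirit of the LFIs, with values true, both, and false, in which the operator written A0 above (circ in the code) comes out true exactly when A takes a classical value. The encoding and names are illustrative only.

```python
from itertools import product

DESIGNATED = {'t', 'b'}                      # 'b' is the glutty value, read 'both'
NOT = {'t': 'f', 'b': 'b', 'f': 't'}
RANK = {'f': 0, 'b': 1, 't': 2}

def ev(f, v):
    op = f[0]
    if op == 'atom': return v[f[1]]
    if op == 'not':  return NOT[ev(f[1], v)]
    if op == 'and':  return min(ev(f[1], v), ev(f[2], v), key=RANK.get)
    if op == 'or':   return max(ev(f[1], v), ev(f[2], v), key=RANK.get)
    if op == 'circ': return 't' if ev(f[1], v) in {'t', 'f'} else 'f'   # the consistency operator

def atoms(f):
    return {f[1]} if f[0] == 'atom' else set().union(*map(atoms, f[1:]))

def valid(premises, conclusion):
    names = sorted(set().union(*map(atoms, list(premises) + [conclusion])))
    for vals in product('tbf', repeat=len(names)):
        v = dict(zip(names, vals))
        if all(ev(p, v) in DESIGNATED for p in premises) and ev(conclusion, v) not in DESIGNATED:
            return False
    return True

A, B = ('atom', 'A'), ('atom', 'B')
notA, circA = ('not', A), ('circ', A)
print(valid([A, notA], B))                     # False: explosion fails
print(valid([('or', notA, B), A], B))          # False: plain disjunctive syllogism fails
print(valid([('or', notA, B), A, circA], B))   # True: it is recovered once A is marked consistent
print(valid([A, notA, circA], B))              # True: 'gentle explosion' from a marked contradiction
```

This is the characteristic LFI pattern: inconsistency is tolerated by default, and classical moves like disjunctive syllogism are switched back on wherever consistency has been explicitly registered.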

These logics have been used to model some actual mathematics. The axioms of Zermelo–Fraenkel set theory and some postulates about identity (=) can be added to C(1), as can axioms asserting the existence of a universal set and a Russell set. This yields an inconsistent, non-trivial set theory. Arruda and Batens obtained some early results in this set theory. Work in arithmetic, infinitesimal calculus, and model theory has also been carried out by da Costa and his students.

A driving idea of da Costa’s paraconsistency is that the law of non-contradiction ¬(A & ¬A) should not hold at the propositional level. This is, philosophically, how his approach works: ¬(A & ¬A) is not true. Aside from some weak relevant logics, this is a unique feature of the C systems (among paraconsistent logics). In other schools, like the discussive and preservationist schools, non-contradiction holds not only at the level of sentences, but as a normative rule; and in the next school we consider, non-contradiction is false, but it is true as well.

f. Dialetheism

The best reason to study paraconsistency, and to use it for developing theories, would be if there were actually contradictions in the world (as opposed to in our beliefs or theories). That is, if it turns out that the best and truest description of the world includes some inconsistency, then paraconsistency is not only required, but is in some sense natural and appropriate. ‘Dialetheism’ is a neologism meaning two-way truth and is the thesis that some sentences are both true and false, at the same time and in the same way. Dialetheism is particularly motivated as a response to the liar paradox and set theoretic antinomies like Russell’s Paradox, and was pioneered by Richard Routley and Graham Priest in Australia in the 1970s. Priest continues to be the best known proponent.

A dialetheic logic is easiest to understand as a many-valued logic. This is not the only way to understand dialetheism, and the logic we are about to consider is not the only logic a dialetheist could use. Dialetheism is not a logic. But here is a simple way to introduce the concept. In addition to the truth-values true and false, sentences can also be both. This third value is a little unusual, maybe, but uncomplicated: if a sentence A is both, then A is true, and A is false, and vice versa. The most straightforward application of a ‘both’ truth-value is Priest’s logic of paradox, or LP. In LP the standard logical connectives have a natural semantics, which can be deduced following the principle that a sentence is designated iff it is at least true—i.e. iff it is true only, or both true and false. If

¬A is true when A is false,

and

¬A is false when A is true,

for example, then

¬A is both iff A is both.

So the value both is something like a fixed point for negation. An argument is valid in LP iff it is not possible for the conclusion to be completely false but all the premises at least true. That is, suppose we have premises that are all either true or both. If the argument is valid, then the conclusion is also at least true.
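
The LP semantics just described is easy to implement directly. The following sketch, with its own ad hoc encoding of formulas, enumerates the three-valued valuations and applies the ‘at least true’ definition of validity.

```python
from itertools import product

DESIGNATED = {'t', 'b'}                       # at least true: true only, or both
NOT = {'t': 'f', 'b': 'b', 'f': 't'}          # 'both' is a fixed point for negation
RANK = {'f': 0, 'b': 1, 't': 2}

def ev(f, v):
    op = f[0]
    if op == 'atom': return v[f[1]]
    if op == 'not':  return NOT[ev(f[1], v)]
    if op == 'and':  return min(ev(f[1], v), ev(f[2], v), key=RANK.get)
    if op == 'or':   return max(ev(f[1], v), ev(f[2], v), key=RANK.get)

def atoms(f):
    return {f[1]} if f[0] == 'atom' else set().union(*map(atoms, f[1:]))

def lp_valid(premises, conclusion):
    # No valuation may make every premise at least true while the conclusion is false only.
    names = sorted(set().union(*map(atoms, list(premises) + [conclusion])))
    for vals in product('tbf', repeat=len(names)):
        v = dict(zip(names, vals))
        if all(ev(p, v) in DESIGNATED for p in premises) and ev(conclusion, v) not in DESIGNATED:
            return False
    return True

A, B = ('atom', 'A'), ('atom', 'B')
notA = ('not', A)
print(lp_valid([A, notA], B))                    # False: explosion fails in LP
print(lp_valid([('or', A, B), notA], B))         # False: so does disjunctive syllogism
print(lp_valid([], ('or', A, notA)))             # True: excluded middle is a theorem
print(lp_valid([], ('not', ('and', A, notA))))   # True: so is non-contradiction
```

The final line previews the point made next: non-contradiction is a theorem of LP even though some of its instances are also false.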

In LP, any sentence of the form ¬(A & ¬A) is always at least true, and some instances are also false. So the law of non-contradiction is itself a dialetheia—the schema ¬(A & ¬A) is universal but also has counterexamples—and furthermore, dialetheism says of itself that it is both true and false. (The statement ‘there are true contradictions’ is both true—there are some—and false—all contradictions are false.) This may seem odd, but it is appropriate, given dialetheism’s origins in the liar paradox.

LP uses only extensional connectives (and, or, not) and so has no detachable conditional. If one adds to LP a detachable conditional, then, given its semantics, the most natural extension of LP to a logic with an implication connective is the logic called RM3. Unfortunately, this logic is not appropriate for naïve set theory or truth theory (see §4c.ii). If a fourth neutral truth value is added to LP, the logic is weakened to the system of first degree entailment FDE. In FDE, the inference

B ⊢ A ∨ ¬A

is not valid, any more than explosion is. This makes some sense: if explosion is invalid because the premise does not ‘lead to’ the conclusion, then this inference should be invalid for the same reason. Because of this, FDE has no theorems, of the form ⊢ A, at all.
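
In the same style, FDE can be modelled by letting a truth value be any subset of {true, false}, with the empty set playing the role of the new neither value; again, the encoding is illustrative only.

```python
from itertools import product

VALUES = [frozenset('T'), frozenset('F'), frozenset('TF'), frozenset()]   # true, false, both, neither

def ev(f, v):
    op = f[0]
    if op == 'atom':
        return v[f[1]]
    if op == 'not':
        x = ev(f[1], v)
        return frozenset(({'T'} if 'F' in x else set()) | ({'F'} if 'T' in x else set()))
    x, y = ev(f[1], v), ev(f[2], v)
    if op == 'or':
        return frozenset(({'T'} if 'T' in x or 'T' in y else set()) |
                         ({'F'} if 'F' in x and 'F' in y else set()))
    if op == 'and':
        return frozenset(({'T'} if 'T' in x and 'T' in y else set()) |
                         ({'F'} if 'F' in x or 'F' in y else set()))

def atoms(f):
    return {f[1]} if f[0] == 'atom' else set().union(*map(atoms, f[1:]))

def fde_valid(premises, conclusion):
    names = sorted(set().union(*map(atoms, list(premises) + [conclusion])))
    for vals in product(VALUES, repeat=len(names)):
        v = dict(zip(names, vals))
        if all('T' in ev(p, v) for p in premises) and 'T' not in ev(conclusion, v):
            return False
    return True

A, B = ('atom', 'A'), ('atom', 'B')
notA = ('not', A)
print(fde_valid([A, notA], B))            # False: explosion fails
print(fde_valid([B], ('or', A, notA)))    # False: B does not entail excluded middle
print(fde_valid([], ('or', A, notA)))     # False: excluded middle is not even a theorem
```

Restricting the values to the non-empty subsets gives back LP.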

4. Applications

A paraconsistent logic becomes useful when we are faced with inconsistencies. Motivations for and applications of paraconsistency arise from situations that are plausibly inconsistent—that is, situations in which inconsistency is not merely due to careless mistakes or confusion, but rather inconsistency that is not easily dispelled even upon careful and concentrated reflection. A student making an arithmetic error does not need a paraconsistent logic, but rather more arithmetic tutorials (although see inconsistent mathematics). On the other hand, people in the following situations may turn to a paraconsistent toolkit.

a. Moral Dilemmas

A mother gives birth to identical conjoined twins (in an example due to Helen Bohse). Doctors quickly assess that if the twins are not surgically separated, then neither will survive. However, doctors also know only one of the babies can survive surgery. The babies are completely identical in all respects. It seems morally obligatory to save one life at the expense of the other. But because there is nothing to help choose which baby, it also seems morally wrong to let one baby die rather than the other. Quite plausibly, this is an intractable moral dilemma with premises of the form we ought to save the baby on the left, and, by symmetrical reasoning about the baby on the right, also we ought not to save the baby on the left. This is not yet technically a contradiction, but unless some logical precautions are taken, it is a tragic situation on the verge of rational disaster.

A moral dilemma takes the form O(A) and O(¬A), that it is obligatory to do A and it is obligatory to do ¬A. In standard deontic logic—a logic of moral obligations—we can argue from a moral dilemma to moral explosion as follows (see Routley and Plumwood 1989). First, obligations ‘aggregate’:

O(A), O(¬A) ⊢ O(A & ¬A).

Next, note that A & ¬A is equivalent to (A & ¬A) & B. (‘Equivalent’ here can mean classically, or in the sense of C. I. Lewis’ strict implication.) Thus

O(A & ¬A) ⊢ O((A & ¬A) & B)

But O((A & ¬A) & B) ⊢ O(B). So we have shown from inconsistent obligations O(A), O(¬A), that O(B), that anything whatsoever is obligatory—in standard, non-paraconsistent systems.

A paraconsistent deontic logic can follow any of the schools we have seen already. A standard paraconsistent solution is to follow the non-adjunctive approach of Jaśkowski and the preservationists. One can block the rule of modal aggregation, so that both O(A), O(¬A) may hold without implying O(A & ¬A).

Alternatively, one could deny that A & ¬A is strictly equivalent to (A & ¬A) & B, by adopting a logic (such as a relevant logic) in which such an equivalence fails. Taking this path, we would then run into the principle of deontic consistency,

O(A) ⊢ P(A),

that if you ought to do A, then it is permissible to do A. (You are not obliged not to do A.) Accordingly, from O(A & ¬A), we get P(A & ¬A). If we had the further axiom that inconsistent actions are not permitted, then we would now have a full blown inconsistency, P(A & ¬A) and ¬P(A & ¬A). If reductio is allowed, then we would also seem to have obligations such that O(A) and ¬O(A). This move calls attention to which obligations are consistent. One could drop deontic consistency, so that A is obligatory without necessarily being permissible. Or one could reason that, however odd inconsistent actions may sound, there is no obvious reason they should be impermissible. The result would be strange but harmless statements of the form P(A & ¬A).

A principle even stronger than deontic consistency is the Kantian dictum that ‘ought implies can,’ where ‘can’ means basic possibility. Kant’s dictum converts moral dilemmas to explicit contradictions. It also seems to rule out moral dilemmas: since it is not possible, for example, both to save and not to save a baby in our conjoined twins example, it cannot really be obligatory to do both, appearances to the contrary. So an option for the paraconsistent deontic logician is to deny Kant’s dictum. Perhaps we have unrealizable obligations; indeed, this seems to be the intuition behind moral dilemmas. A consequence of denying Kant’s dictum is that, sometimes, we inevitably do wrong.

Most liberally, one can keep everything and accept that sometimes inconsistent action is possible. For example, if I make a contract with you to break this very contract, then I break the contract if and only if I keep it. By signing, I am eo ipso breaking and not breaking the contract. In general, though, how one could do both A and its negation is a question beyond the scope of logic.

b. Law, Science, and Belief Revision

Consider a country with the following laws (in an example from Priest 2006, ch. 13):

(1) No non-Caucasian people shall have the right to vote.
(2) All landowners shall have the right to vote.

As it happens, though, Phil is not Caucasian, and owns a small farm. The laws, as they stand, are inconsistent. A judge may see this as a need to impose a further law (e.g. non-Caucasians cannot own land) or revise one of the current laws. In either case, though, the law as it stands needs to be dealt with in a discriminating way. Crucially, the inferential background of the current laws does not seem to permit or entail total anarchy.

Similarly, in science we hold some body of laws as true. It is part of the scientific process that these laws can be revised, updated, or even rejected completely. The process of such progress again requires that contradictions not be met with systemic collapse. At present, it seems extremely likely that different branches of science are inconsistent with one another—or even within the same discipline, as is the case in theoretical physics with relativity and quantum mechanics. Does this situation make science absurd?

c. Closed Theories – Truth and Sets

Conceptual closure means taking a full account of whatever is under study. Suppose, for example, we are studying language. We carry out our study using language. A closed theory would have to account for our study itself; the language of the theory would have to include terms like ‘language’, ‘theory’, ‘true’, and so forth. More expansively, a theory of everything would include the theory itself. Perhaps the simplest way to grasp the nature of a closed theory is through a remark of Wittgenstein in the preface to his Tractatus: ‘In order to draw a limit to thought, one would have to find both sides of the limit thinkable.’ Priest has argued that the problematic of closure can be seen in the philosophies of Kant and Hegel, as well as in earlier Greek and Medieval thought, and continues on in postmodernist philosophies. As was discovered in the 20th century, closed formal theories are highly liable to be inconsistent, because they are extremely conducive to self-reference and diagonalization (see logical paradoxes).

For logicians, the most important of the closed theories, susceptible to self-reference, are of truth and sets. Producing closed theories of truth and sets using paraconsistency is, at least to start with, straightforward. We will look at two paradigm cases, followed by some detail on how they can be pursued.

i. Naïve Axioms

In modern logic we present formal, mathematical descriptions of how sentences are true and false, e.g. (A & B) is true iff A is true and B is true. This itself is a rational statement, presumably governed by some logic and so itself amenable to formal study. To reason about it logically, we would need to study the truth predicate, ‘x is true.’ An analysis of the concept of truth that is almost too-obviously correct is the schema

T(‘A’) iff A.

It seems so obvious—until (even when?) a sentence like

This sentence of the IEP is false,

a liar paradox which leads to a contradiction, falls out the other side. A paraconsistent logic can be used for a theory of truth in which the truth schema is maintained, but where either the derivation of the paradox is blocked (by dropping the law of excluded middle) or else the contradiction is not explosive.

In modern set theory, similarly, we understand mathematical objects as being built out of sets, where each set is itself built out of pre-given sets. The resulting picture is the iterative hierarchy of sets. The problem is that the iterative hierarchy itself is a mathematically definite object, but cannot itself reside on the hierarchy. A closed theory of sets will include objects like this, beginning from an analysis of the concept of set that is almost too-obviously correct: the naïve comprehension schema,

x is a member of {y: A(y)} iff A(x).

A way to understand what naïve comprehension means is to take it as the claim: any collection of objects is a set, which is itself an object. Naïve set theory can be studied, and has been, with paraconsistent logics; see inconsistent mathematics. Contradictions like the existence of a Russell set {y: y is not a member of y} arise but are simply theorems: natural parts of the theory; they do not explode the theory.
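
For instance, instantiating the schema with the condition ‘y is not a member of y’ and asking about the Russell set R itself gives the contradiction in a few lines:

1) x is a member of {y: y is not a member of y} iff x is not a member of x. [naïve comprehension]
2) R is a member of R iff R is not a member of R. [putting R for x in 1]
3) R is a member of R, and R is not a member of R. [from 2, by excluded middle and adjunction]

In a paraconsistent setting, line 3 is simply another theorem about R; it does not license anything further.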

ii. Further Logical Restrictions

For both naïve truth theory and naïve set theory, there is an additional and extremely important restriction on the logic. A logic for these schemas cannot validate contraction,

If (if A then (if A then B)), then (if A then B).

This restriction is due to Curry’s paradox, which is a stronger form of the liar paradox. A Curry sentence says

If this sentence is true, then everything is true.

If the Curry sentence, call it C, is put into the truth-schema, then everything follows by the principle of contraction:

1) T(‘C’) iff (if T(‘C’) then everything). [truth schema]
2) If T(‘C’) then (if T(‘C’) then everything). [from 1]
3) If T(‘C’) then everything. [from 2 by contraction]
4) T(‘C’) [modus ponens on 1, 3]
5) Everything. [modus ponens on 3, 4]

Since not everything is true, if the T schema is correct then contraction is invalid. For set theory, analogously, the Curry set is

C = {x: If x is a member of x, then everything is true},

and a similar argument establishes triviality.
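
Spelled out in parallel with the derivation above:

1) C is a member of C iff (if C is a member of C, then everything). [naïve comprehension]
2) If C is a member of C, then (if C is a member of C, then everything). [from 1]
3) If C is a member of C, then everything. [from 2 by contraction]
4) C is a member of C. [modus ponens on 1, 3]
5) Everything. [modus ponens on 3, 4]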

As was discovered later by Dunn, Meyer and Routley while studying naïve set theory in relevant logic, the sentence

(A & (A → B)) → B

is a form of contraction too, and so must similarly not be allowed. (Let A be a Curry sentence and B be absurdity.) Calling this sentence (schema) invalid is different than blocking modus ponens, which is an inference, validated by a rule. The above sentence, meanwhile, is just that—a sentence—and we are saying whether or not all its instances are true. If naïve truth and set theories are coherent, instances of this sentence are not always true, even when modus ponens is valid.

The logic LP does not satisfy contraction and so a dialetheic truth or set theory can be embedded in it. Some basic contradictions, like the liar paradox and Russell’s paradox, do obtain, as do a few core operations. Because LP has no conditional, though, one does not get very far. Most other paraconsistent logics cannot handle naïve set theory and naïve truth theory as stated here. A hard problem in (strong) paraconsistency, then, is how to formulate the ‘iff’ in our naïve schemata, and in general how to formulate a suitable conditional. The most promising candidates to date have been relevant logics, though as we have seen there are strict limitations.

d. Learning, Beliefs, and AI

Some work has been done to apply paraconsistency to modeling cognition. The main idea here is that the limitations on machine reasoning as (apparently) dictated by Gödel’s incompleteness theorems no longer hold. What this has to do with cognition per se is a matter of some debate, and so most applications of paraconsistency to epistemology are still rather speculative. See Berto 2009 for a recent introduction to the area.

Tanaka has shown how a paraconsistent reasoning machine revises its beliefs differently than suggested by the more orthodox but highly idealized Alchourrón-Gärdenfors-Makinson theory. That latter prevailing theory of belief revision has it that inconsistent sets of beliefs are impossible. Paraconsistent reasoning machines, meanwhile, are situated reasoners, in sets of beliefs (say, acquired simply via education) that can occasionally be inconsistent. Consistency is just one of the criteria of epistemic adequacy among others—simplicity, unity, explanatory power, etc. If this is right, the notion of recursive learning might be extended, to shed new light on knowledge acquisition, conflict resolution, and pattern recognition. If the mind is able to reason around contradiction without absurdity, then paraconsistent machines may be better able to model the mind.

Paraconsistent logics have been applied by computer scientists in software architecture (though this goes beyond the expertise of the present author). That paraconsistency could have further applications to the theory of computation was explored by Jack Copeland and Richard Sylvan. Copeland has independently argued that there are effective procedures that go beyond the capacity of Turing machines. Sylvan (formerly Routley) further postulated the possibility of dialetheic machines, programs capable of computing their own decision functions. In principle, this is a possibility. The non-computability of decision functions, and the unsolvability of the halting problem, are both proved by reductio ad absurdum: if a universal decision procedure were to exist, it would have some contradictions as outputs. Classically, this has been interpreted to mean that there is no such procedure. But, Sylvan suggests, there is more in heaven and earth than is dreamt of in classical theories of computation.

5. Conclusion

Paraconsistency may be minimally construed as the doctrine that not everything is true, even if some contradictions are. Most paraconsistent logicians subscribe to views on the milder end of the spectrum; most paraconsistent logicians are actually much more conservative than a slur like Quine’s ‘deviant logician’ might suggest. On the other hand, taking paraconsistency seriously means on some level taking inconsistency seriously, something that a classically minded person will not do. It has therefore been thought that, insofar as true inconsistency is an unwelcome thought—mad, bad, and dangerous to know—paraconsistency might be some kind of gateway to darker doctrines. After all, once one has come to rational grips with the idea that inconsistent data may still make sense, what, really, stands in the way of inconsistent data being true? This has been called the slippery slope from weak to strong paraconsistency. Note that the slippery slope, while proposed as an attractive thought by those more inclined to strong paraconsistency, could seem to go even further, away from paraconsistency completely and toward the insane idea of trivialism: that everything really is true. That is, contradictions obtain, but explosion is also still valid. Why not?

No one, paraconsistentist or otherwise, is a trivialist. Nor is paraconsistency an invitation to trivialism, even if it is a temptation to dialetheism. By analogy, when Hume pointed out that we cannot be certain that the sun will rise tomorrow, no one became seriously concerned about the possibility. But people did begin to wonder about the necessity of the ‘laws of nature’, and no one now can sit as comfortably as before Hume awoke us from our dogmatic slumber. So too with paraconsistent logic. In one sense, paraconsistent logics can do much more than classical logics. But in studying paraconsistency, especially strong paraconsistency closer to the dialetheic end of the spectrum, we see that there are many things logic cannot do. Logic alone cannot tell us what is true or false. Simply writing down the syntactic marking ‘⊢ A’ does nothing to show us that A cannot be false, even if A is a theorem. There is no absolute safeguard. Defending consistency, or denying the absurdity of trivialism, is ultimately not the job of logic alone. Affirming coherence and denying absurdity is an act, a job for human beings.

6. References and Further Reading

It’s a little dated, but the ‘bible’ of paraconsistency is still the first big collection on the topic:

  • Priest, G., Routley, R. & Norman, J. eds. (1989). Paraconsistent Logic: Essays on the Inconsistent. Philosophia Verlag.

This covers most of the known systems, including discussive and adaptive logic, with original papers by the founders. It also has extensive histories of paraconsistent logic and philosophy, and a paper by the Routleys on moral dilemmas. For more recent work, see also

  • Batens, D., Mortensen, C., Priest, G., & van Bendegem, J.-P. eds. (2000). Frontiers of Paraconsistent Logic. Kluwer.
  • Berto, F., Mares, E., Paoli, F., and Tanaka, K. eds. (2013). The Fourth World Congress on Paraconsistency, Springer.

A roundabout philosophical introduction to non-classical logics, including paraconsistency, is in

  • Beall, JC and Restall, Greg (2006). Logical Pluralism. Oxford University Press.

Philosophical introductions to strong paraconsistency:

  • Priest, Graham (2006). In Contradiction: A Study of the Transconsistent. Oxford University Press. Second edition.
  • Priest, Graham (2006). Doubt Truth to be a Liar. Oxford University Press.
  • Berto, Francesco (2007). How to Sell a Contradiction. Studies in Logic vol. 6. College Publications.

More philosophical debate about strong paraconsistency is in the excellent collection

  • Priest, G., Beall, JC and Armour-Garb, B. eds. (2004). The Law of Non-Contradiction. Oxford University Press.

For the technical how-to of paraconsistent logics:

  • Beall, JC and van Fraassen, Bas (2003). Possibilities and Paradox: An Introduction to Modal and Many-Valued Logics. Oxford University Press.
  • Gabbay, Dov M. & Guenthner, F. eds. (2002). Handbook of Philosophical Logic. Second edition, vol. 6, Kluwer.
  • Priest, Graham (2008). An Introduction to Non-Classical Logic. Cambridge University Press. Second edition.

For a recent introduction to preservationism, see

  • Schotch, P., Brown, B. and Jennings, R. eds. (2009). On Preserving: Essays on Preservationism and Paraconsistent Logic. University of Toronto Press.
  • Brown, Bryson and Priest, Graham (2004). “Chunk and Permeate I: The Infinitesimal Calculus.” Journal of Philosophical Logic 33, pp. 379–88.

Logics of formal inconsistency:

  • W. A. Carnielli and J. Marcos. A taxonomy of C-systems. In Paraconsistency: the Logical Way to the Inconsistent, Lecture Notes in Pure and Applied Mathematics, Vol. 228, pp. 1–94, 2002.
  • W. A. Carnielli, M. E. Coniglio and J. Marcos.  Logics of Formal Inconsistency. In Handbook of Philosophical Logic, vol. 14, pp. 15–107. Eds.: D. Gabbay; F. Guenthner. Springer, 2007.
  • da Costa, Newton C. A. (1974). “On the Theory of Inconsistent Formal Systems.” Notre Dame Journal of Formal Logic 15, pp. 497–510.
  • da Costa, Newton C. A. (2000). Paraconsistent Mathematics. In Batens et al. (2000), pp. 165–180.
  • da Costa, Newton C. A., Krause, Décio & Bueno, Otávio (2007). “Paraconsistent Logics and Paraconsistency.” In Jacquette, D. ed. Philosophy of Logic (Handbook of the Philosophy of Science), North-Holland, pp. 791–912.

Relevant logics:

  • Anderson, A. R. and Belnap, N. D., Jr. (1975). Entailment: The Logic of Relevance and Necessity. Princeton University Press, vol. I.
  • Mares, E. D. (2004). Relevant Logic: A Philosophical Interpretation. Cambridge University Press.

The implications of Gödel’s theorems:

  • Berto, Francesco (2009). There’s Something About Gödel. Wiley-Blackwell.

Belief revision:

  • Tanaka, Koji (2005). “The AGM Theory and Inconsistent Belief Change.” Logique et Analyse 189–92, pp. 113–50.

Artificial Intelligence:

  • Copeland, B. J. and Sylvan, R. (1999). “Beyond the Universal Turing Machine.” Australasian Journal of Philosophy 77, pp. 46–66.
  • Sylvan, Richard (2000). Sociative Logics and their Applications. Priest, G. and Hyde, D. eds. Ashgate.

Moral dilemmas:

  • Bohse, Helen (2005). “A Paraconsistent Solution to the Problem of Moral Dilemmas.” South African Journal of Philosophy 24, pp. 77–86.
  • Routley, R. and Plumwood, V. (1989). “Moral Dilemmas and the Logic of Deontic Notions.” In Priest et al. 1989, 653–690.
  • Weber, Zach (2007). “On Paraconsistent Ethics.” South African Journal of Philosophy 26, pp. 239–244.

Other works cited:

  • Colyvan, Mark (2008). “Who’s Afraid of Inconsistent Mathematics?” Protosociology 25, pp. 24–35. Reprinted in G. Preyer and G. Peter eds. Philosophy of Mathematics: Set Theory, Measuring Theories and Nominalism, Frankfurt: Ontos Verlag, 2008, pp. 28–39.
  • Hume, David (1740). A Treatise of Human Nature, ed. L. A. Selby-Bigge. Second edition 1978. Oxford: Clarendon Press.

Author Information

Zach Weber
Email: zweber@unimelb.edu.au
University of Melbourne
Australia

Email: z.weber@usyd.edu.au
University of Sydney
Australia