Epistemic foundationalism is a view about the proper structure of one’s knowledge or justified beliefs. Some beliefs are known or justifiedly believed only because some other beliefs are known or justifiedly believed. For example, you can know that you have heart disease only if you know some other claims such as your doctors report this and doctors are reliable. The support these beliefs provide for your belief that you have heart disease illustrates that your first belief is epistemically dependent on these other two beliefs. This epistemic dependence naturally raises the question about the proper epistemic structure for our beliefs. Should all beliefs be supported by other beliefs? Are some beliefs rightly believed apart from receiving support from other beliefs? What is the nature of the proper support between beliefs? Epistemic foundationalism is one view about how to answer these questions. Foundationalists maintain that some beliefs are properly basic and that the rest of one’s beliefs inherit their epistemic status (knowledge or justification) in virtue of receiving proper support from the basic beliefs. Foundationalists have two main projects: a theory of proper basicality (that is, a theory of noninferential justification) and a theory of appropriate support (that is, a theory of inferential justification).
Foundationalism has a long history. Aristotle in the Posterior Analytics argues for foundationalism on the basis of the regress argument. Aristotle assumes that the alternatives to foundationalism must either endorse circular reasoning or land in an infinite regress of reasons. Because neither of these views is plausible, foundationalism comes out as the clear winner in an argument by elimination. Arguably, the most well known foundationalist is Descartes, who takes as the foundation the allegedly indubitable knowledge of his own existence and the content of his ideas. Every other justified belief must be grounded ultimately in this knowledge.
The debate over foundationalism was reinvigorated in the early part of the twentieth century by the debate over the nature of the scientific method. Otto Neurath (1959; original 1932) argued for a view of scientific knowledge illuminated by the raft metaphor according to which there is no privileged set of statements that serve as the ultimate foundation; rather knowledge arises out of a coherence among the set of statements we accept. In opposition to this raft metaphor, Moritz Schlick (1959; original 1932) argued for a view of scientific knowledge akin to the pyramid image in which knowledge rests on a special class of statements whose verification doesn’t depend on other beliefs.
The Neurath-Schlick debate transformed into a discussion over the nature and role of observation sentences within a theory. Quine (1951) extended this debate with his metaphor of the web of belief, in which observation sentences are able to confirm or disconfirm a hypothesis only in connection with a larger theory. Sellars (1963) criticizes foundationalism as endorsing a flawed model of the cognitive significance of experience. Following the work of Quine and Sellars, a number of philosophers rose to defend foundationalism (see section below on modest foundationalism). This touched off a burst of activity on foundationalism in the late 1970s to early 1980s. One of the significant developments from this period is the formulation and defense of reformed epistemology, a foundationalist view that took as foundational beliefs such as there is a God (see Plantinga (1983)). While the debate over foundationalism has abated in recent decades, new work has picked up on neglected topics about the architecture of knowledge and justification.
The foundationalist attempts to answer the question: what is the proper structure of one’s knowledge or justified beliefs? This question assumes a prior grasp of the concepts of knowledge and justification. Before the development of externalist theories of knowledge (see entry on internalism and externalism in epistemology) it was assumed that knowledge required justification. On a standard conception, knowledge was justified true belief. Thus investigation of foundationalism focused on the structural conditions for justification. How should one’s beliefs be structured so as to be justified? The following essay discusses foundationalism in terms of justification (see BonJour (1985) for a defense of the claim that knowledge requires justification). Where the distinction between justification and knowledge is relevant (for example, weak foundationalism), this article will observe it.
What is it for a belief to be justified? Alvin Plantinga (1993) observes that the notion of justification is heavily steeped in deontological terms, terms like rightness, obligation, and duty. A belief is justified for a person if and only if the person is right to believe it or the subject has fulfilled her intellectual duties relating to the belief. Laurence BonJour (1985) presents a slightly different take on the concept of justification, stating that it is “roughly that of a reason or warrant of some kind meeting some appropriate standard” (pp. 5-6). This ‘appropriate standard’ conception of justification permits a wider understanding of the concept of justification. BonJour, for instance, takes the distinguishing characteristic of justification to be “its essential or internal relation to the cognitive goal of truth” (p. 8). Most accounts of justification assume some form of epistemic internalism. Roughly speaking, this is the view that a belief’s justification does not require that it meet some condition external to a subject’s perspective, conditions such as being true, being produced by a reliable process, or being caused by the corresponding fact (see entry on internalism and externalism in epistemology). All the relevant conditions for justification are ‘internal’ to a subject’s perspective. These conditions vary from facts about a subject’s occurrent beliefs and experiences to facts about a subject’s occurrent and stored beliefs and experiences, and further to facts simply about a subject’s mind, where this may include information that, in some sense or other, a subject has difficulty bringing to explicit consciousness. Although some externalists offer accounts of justification (see Goldman (1979) & Bergmann (2006)), this article assumes that justification is internalistic. Externalists have a much easier time addressing concerns over foundationalism.
It is a common judgment that the foundationalist / coherentist debate takes place against the backdrop of internalism (see BonJour (1999)).
This section discusses prominent arguments for a general type of foundationalism. Section 4, on varieties of foundationalism, discusses more specific arguments aimed to defend a particular species of foundationalism.
The epistemic regress problem has a long history. Aristotle used the regress argument to prove that justification requires basic beliefs, beliefs that are not supported by any other beliefs but are able to support further beliefs (see Aristotle’s Posterior Analytics I.3:5-23). The regress problem was prominent in the writings of the ancient skeptics, especially Sextus Empiricus’s Outlines of Pyrrhonism and Diogenes Laertius’s “The Life of Pyrrho” in his book The Lives and Opinions of Eminent Philosophers. In the 20th century the regress problem has received new life in the development of the coherentist and infinitist options (see BonJour (1985) and Klein (1999), respectively).
To appreciate the regress problem, begin with the thought that the best way to have a good reason for some claim is to have an argument from which the claim follows. Thus one possesses good reason to believe p when it follows from the premises q and r. But then we must inquire about the justification for believing the premises. Does one have a good argument for the premises? Suppose one does. Then we can inquire about the justification for those premises. Does one have an argument for those claims? If not, then it appears one lacks a good reason for the original claim because the original claim is ultimately based on claims for which no reason is offered. If one does have an argument for those premises, then either one will continue to trace out the chain of arguments to some premises for which no further reason is offered, or one will trace out the chain of arguments until one loops back to the original claims, or one will continue to trace out the arguments without end. We can then begin to see the significance of the regress problem: is it arguments all the way down? Does one eventually come back to premises that appeared in earlier arguments, or does one eventually come to some ultimate premises, premises that support other claims but do not themselves require any additional support?
Skepticism aside, the options in the regress problem are known as foundationalism, coherentism, and infinitism. Foundationalists maintain that there are some ultimate premises, premises that provide good reasons for other claims but themselves do not require additional reasons. These ultimate premises are the proper stopping points in the regress argument. Foundationalists hold that the other options for ending the regress are deeply problematic and that consequently there must be basic beliefs.
Coherentists and infinitists deny that there are any ultimate premises. A simple form of coherentism holds that the arguments for any claim will eventually loop back to the original claim itself. As long as the circle of justifications is large enough, it is rationally acceptable. After all, every claim is supported by some other claim and, arguably, the claims fit together in such a way as to provide an explanation of their truth (see Lehrer (1997), Chs 1 & 2).
Infinitists think that both the foundationalist and coherentist options are epistemically objectionable. Infinitists (as well as coherentists) claim that the foundationalist options land in arbitrary premises, premises that are alleged to support other claims but themselves lack reasons. Against the coherentist, infinitists claim that coherentism simply endorses circular reasoning: no matter how big the circle, circular arguments do not establish that the original claim is true. Positively, infinitists maintain that possessing a good reason for a claim requires that it be supported by an infinite string of non-repeating reasons (see Klein (1999)).
Foundationalists use the regress argument to set up the alternative epistemological positions and then proceed to knock down these positions. Foundationalists argue against infinitism that we never possess an infinite chain of non-repeating reasons. At best when we consider the justification for some claim we are able to carry this justification out several steps but we never come close to anything resembling an unending chain of justifications. For this criticism and others of infinitism see Fumerton (1998).
Against the coherentist, the foundationalist agrees with the infinitist’s criticism mentioned above that circular reasoning never justifies anything. If p is used to support q then q itself cannot be used in defense of p, no matter how many intermediate steps there are between q and p. This verdict against simple coherentism is strong, but the foundationalist strategy is complicated by the fact that it is hard to find an actual coherentist who endorses circular reasoning (though see Lehrer (1997), Chs 1 and 2, for remarks about the circular nature of explanation). Coherentists, rather, identify the assumption of linear inference in the regress argument and replace it with a stress on the holistic character of justification (see BonJour (1985)). The assumption of linear inference in the regress argument is clearly seen above in the idea that the regress traces out arguments for some claim, where the premises of those arguments are known or justifiedly believed prior to the conclusion being known or justifiedly believed. The form of coherentism that rejects this assumption in the regress argument is known as holistic coherentism.
Foundationalist arguments against holistic coherentism must proceed with more care. Because holistic coherentists disavow circular reasoning and stress the mistaken role of linear inference in the regress argument, foundationalists must supply a different argument against this option. A standard argument against holistic coherentism is that unless the data used for coherence reasoning has some initial justification it is impossible for coherence reasoning to provide justification. This problem affected Laurence BonJour’s attempt to defend coherentism (see BonJour (1985), pp. 102-3). BonJour argued that coherence among one’s beliefs provided excellent reason to think that those beliefs were true. But BonJour realized that he needed an account of how one was justified in believing that one had certain beliefs, i.e., what justified one in thinking that one did indeed hold the system of beliefs one takes oneself to believe. BonJour quickly recognized that coherence couldn’t provide justification for this belief but it wasn’t until later in his career that he deemed this problem insuperable for a pure coherentist account (see BonJour (1997) for details).
The regress problem provides a powerful argument for foundationalism. The regress argument, though, does not resolve particular questions about foundationalism. The regress provides little guidance about the nature of basic beliefs or the correct theory of inferential support. As we just observed with the discussion of holistic coherentism, considerations from the regress argument show, minimally, that the data used for coherence reasoning must have some initial presumption in its favor. This form of foundationalism may be far from the initial hope of a rational reconstruction of common sense. Such a reconstruction would amount to setting out in clear order the arguments for various commonsense claims (for example, I have hands, there is a material world, I have existed for more than five minutes, etc.) that exhibits the ultimate basis for our view of things. We shall consider the issues relating to varieties of foundationalist views below.
Another powerful consideration for foundationalism is our natural judgment about particular cases. It seems evident that some beliefs are properly basic. Leibniz, for instance, gives several examples of claims that don’t “involve any work of proving” and that “the mind perceives as immediately as the eye sees light” (see New Essays, IV, chapter 2, 1). Leibniz mentions the following examples:
White is not black.
A circle is not a triangle.
Three is one and two.
Other philosophers (for example, C.I. Lewis, Roderick Chisholm, and Richard Fumerton) have found examples of such propositions in appearance states (traditionally referred to as the given). For instance, it may not be evident that there is a red circle before one because one may be in a misleading situation (for example, a red light shining on a white circle). However, if one carefully considers the matter, one may be convinced that something appears red. Foundationalists stress that it is difficult to see what one could offer as a further justification for the claim about how things seem to one. In short, truths about one’s appearance states are excellent candidates for basic beliefs.
As we shall see below a feature of this appeal to natural judgment is that it can support strong forms of foundationalism. Richard Fumerton maintains that for some cases, for example, pain states, one’s belief can reach the highest level of philosophical assurance (see Fumerton (2006)). Other philosophers (for example, James Pryor (2000)) maintain that some ordinary propositions, such as I have hands, are foundational.
This section examines two general arguments against foundationalism. Arguments against specific incarnations of foundationalism are considered in section 4.
As noted above, the regress argument figures prominently in arguing for foundationalism. The regress argument supports the conclusion that some beliefs must be justified independently of receiving warrant from other beliefs. However, some philosophers judge that this claim amounts to accepting some beliefs as true for no reason at all, that is, epistemically arbitrary beliefs. This objection has significant bite against a doxastic form of foundationalism (the language of ‘doxastic’ comes from the Greek word ‘doxa’ meaning belief). Doxastic foundationalism is the view that the justification of one’s beliefs is exclusively a matter of what other beliefs one holds. Regarding the basic beliefs, a doxastic foundationalist holds that these beliefs are ‘self-justified’ (see Pollock & Cruz (1999), 22-23). The contents of the basic beliefs are typically perceptual reports, but importantly a doxastic foundationalist does not conceive of one’s corresponding perceptual state as a reason for the belief. Doxastic foundationalists hold that one is justified in accepting a perceptual report simply because one has the belief. The objection is that, given the fallibility of perceptual reports, it is epistemically arbitrary to accept a perceptual report for no reason at all.
The arbitrariness objection against non-doxastic theories must proceed with more care. A non-doxastic form of foundationalism denies that justification is exclusively a matter of relations between one’s beliefs. Consider a non-doxastic foundationalist who attempts to stop the regress with non-doxastic states like experiences. This foundationalist claims that, for example, the belief that there is a red disk before one is properly basic. This belief is not justified on the basis of any other beliefs but is instead justified by the character of one’s sense experience. Because one can tell by reflection alone that one’s experience has a certain character, the experience itself provides one with an excellent reason for the belief. The critic of non-doxastic foundationalism argues that stopping with this experience is arbitrary. After all, there are scenarios in which this experience is misleading. If, for example, the disk is white but illuminated with red light, then one’s experience will mislead one to think that the disk is really red. Unless one has a reason to think that these scenarios fail to obtain, it is improper to stop the regress of reasons here.
One foundationalist solution to the arbitrariness problem is to move to epistemically certain foundations. Epistemically certain foundations are beliefs that cannot be misleading and so cannot provide a foothold for arbitrariness concerns. If, for instance, one’s experience is of a red disk and one believes just that one’s experience has this character, it is difficult to see how one’s belief could be mistaken in this specific context. Consequently, it is hard to make sense of how one’s belief about the character of one’s experience could be epistemically arbitrary. Nonetheless, many foundationalists want to resist this move, for two reasons. First, relative to the large number of beliefs we have, there are few epistemically certain beliefs. Second, even if one locates a few epistemically certain beliefs, it is very difficult to reconstruct our common-sense view of the world from those beliefs. If the ultimate premises of one’s view include only beliefs about the current character of one’s sense experience, it is nearly impossible to figure out how to justify beliefs about the external world or the past.
Another foundationalist response to the arbitrariness argument is to note that it is merely required that a properly basic belief possess some feature in virtue of which the belief is likely to be true. It is not required that a subject believe her belief possesses that feature. This response has the virtue of allowing for modest forms of foundationalism in which the basic beliefs are less than certain. Critics of foundationalism continue to insist that unless the subject is aware that the belief possesses this feature, her belief is an improper stopping point in the regress of reasons. For a defense of the arbitrariness objection against foundationalism see Klein (1999) & (2004), and for responses to Klein see Bergmann (2004), Howard-Snyder & Coffman (2006), Howard-Snyder (2005), and Huemer (2003).
The Sellarsian dilemma was first formulated in Wilfrid Sellars’s rich, but difficult, essay “Empiricism and the Philosophy of Mind.” Sellars’s main goal in this essay is to undermine the entire framework of givenness ((1963), p. 128). Talk of ‘the given’ was prevalent in earlier forms of foundationalism (see, for example, C.I. Lewis (1929), Ch 2). The phrase ‘the given’ refers to elements of experience that are putatively immediately known in experience. For instance, if one looks at a verdant golf course the sensation green is alleged to be given in experience. In a Cartesian moment one may doubt whether or not one is actually perceiving a golf course but, the claim is, one cannot rationally doubt that there is a green sensation present. Strong foundationalists appeal to the given to ground empirical knowledge. In “Empiricism and the Philosophy of Mind” Sellars argues that the idea of the given is a myth.
The details of Sellars’s actual argument are difficult to decipher. The most promising reconstruction of Sellars’s argument occurs in chapter 4 of BonJour (1985). BonJour formulates the dilemma using the notion of ‘assertive representational content’. Representational content is the kind of content possessed by beliefs, hopes, and fears. A belief, a hope, or a fear could be about the same thing; one could believe that it is raining, hope that it is raining, or fear that it is raining. These states all have in common the same representational content. Assertive representational content is content that is presented as being true but may, in fact, be false. A good case of assertive content comes from the Müller-Lyer illusion. In this well-known illusion a subject experiences two lines as being unequal in length even though they have the same length. The subject’s experience presents as true the content that these lines are unequal.
Given the notion of assertive representational content, BonJour reformulates the Sellarsian dilemma: either experience has assertive representational content or not. If experience has assertive representational content, then one needs an additional reason to think that the content is correct. If, however, experience lacks this content, then experience cannot provide a reason for thinking that some proposition is true. The dilemma focuses on non-doxastic foundationalism and is used to argue that any way the view is filled out, it cannot make good on the intuition that experience is a proper foundation for justification.
Let us examine each option of the dilemma, starting with the second option. A defense of this option observes that it is difficult to understand how experience could provide a good reason for believing some claim if it failed to have representational content. Think of the olfactory experience associated with a field of flowers in full bloom. Apart from a formed association between that experience and its cause, it is difficult to understand how that experience has representational content. In other words, the experience lacks any content; it makes no claim that the world is one way rather than another. However, if that is right, how could that experience provide any reason for believing that the world is one way rather than another? If the experience itself is purely qualitative, then it cannot provide a reason to believe that some proposition is true. In short, there is a strong judgment that apart from the representational content of experience, experience is powerless to provide reasons.
A defense of the first option of the dilemma takes us back to issues raised by the arbitrariness objection. If experience does have assertive representational content then that content can be true or false. If the content is possibly false, the experience is not a proper stopping point in the regress of reasons. The whole idea behind the appeal to the given was to stop the regress of reasons in a state that did not require further justification because it was not the sort of thing that needed justification. If experience, like belief, has representational content then there is no good reason to stop the regress of reasons with experience rather than belief. In brief, if experience functions as a reason in virtue of its assertive representational content then there is nothing special about experience as opposed to belief in its ability to provide reasons. Since the arbitrariness objection shows that belief is not a proper stopping point in the regress, the Sellarsian dilemma shows that experience is not a proper stopping point either.
Probably the best foundationalist response to the Sellarsian dilemma is to argue that the first option of the dilemma is mistaken; experience has assertive representational content and can still provide a regress-stopping reason to believe that some claim is true. There are broadly two kinds of responses here, depending on whether one thinks that the content of experience could be false. On one view, experience carries a content that may be false, but this experiential content provides a basic reason for thinking that this content is true. For instance, it may perceptually seem to one that there is a coffee mug on the right corner of the desk. This content may be false, but in virtue of its being presented as true in experience one has a basic reason for thinking that it is true (see Pryor (2000) & Huemer (2001) for developments of this view). The other view one might take is that experiential content—at least the kind that provides a basic reason—cannot be false. On this view, the kind of content that experience provides for a basic reason is something like this: it perceptually seems that there is a red disk before me. Laurence BonJour (in BonJour & Sosa (2003)) develops a view like this. On his view, one has a built-in constitutive awareness of experiential content, and in virtue of that awareness of content one has a basic reason to believe that the content is true. For a good criticism of BonJour’s strategy, see Bergmann (2006), Chapter 2. For a different, externalist response to the dilemma see Jack Lyons (2008).
See the Encyclopedia article “Coherentism” for more criticism of foundationalism.
This section surveys varieties of foundationalist views. As remarked above foundationalists have two main projects: providing a suitable theory of noninferential justification and providing an adequate theory of proper inference. We will examine three views on non-inferential justification and three views on inferential justification.
An adequate theory of noninferential justification is essential for foundationalism. Foundationalist views differ on the nature of noninferential justification. We can distinguish three types of foundationalist views corresponding to the strength of justification possessed by the basic beliefs: strong, modest, and weak foundationalism. In the following we shall examine these three views and the arguments for and against them.
Strong foundationalists hold that the properly basic beliefs are epistemically exalted in some interesting sense. In addition to basic beliefs possessing the kind of justification necessary for knowledge (let us refer to this as “knowledge level justification”) strong foundationalists claim the properly basic beliefs are infallible, indubitable, or incorrigible. Infallible beliefs are not possibly false. Indubitable beliefs are not possible to doubt even though the content may be false, and incorrigible beliefs cannot be undermined by further information. The focus on these exalted epistemic properties grows out of Descartes’ method of doubt. Descartes aimed to locate secure foundations for knowledge and dismissed any claims that were fallible, dubitable, or corrigible. Thus, Descartes sought the foundations of knowledge in restricted mental states like I am thinking. Before we examine arguments against strong foundationalism let us investigate some arguments in favor of it.
Probably the most widespread argument for strong foundationalism is the need for philosophical assurance concerning the truth of one’s beliefs (see Fumerton (2006)). If one adopts the philosophical undertaking to trace out the ultimate reasons for one’s view, it can seem particularly remiss to stop this philosophical quest with fallible, dubitable, or corrigible reasons. As Descartes realized, if the possibility that one is dreaming is compatible with one’s evidence, then that evidence is not an adequate ground for a philosophically satisfying reconstruction of knowledge. Consequently, if a philosophically satisfying perspective of knowledge is to be found, it will be located in foundations that are immune from doubt.
Another argument for strong foundationalism is C.I. Lewis’s contention that probability must be grounded in certainty (see Lewis (1952); also see Pastin (1975a) for a response to Lewis’s argument). Lewis’s argument appeals explicitly to the probability calculus but we can restate the driving intuition apart from utilizing any formal machinery. Lewis reasoned that if a claim is uncertain then it is rationally acceptable only given further information. If that further information is uncertain then it is acceptable only given additional information. If this regress continues without ever coming to a certainty then Lewis conjectures that the original claim is not rationally acceptable.
We can get a sense of Lewis’s intuition by considering a conspiracy theorist who has a defense for every claim in his convoluted chain of reasoning. We might think that, in general, the theorist is right about the conditional claims—if this is true then that is probably correct—but just plain wrong that the entire chain of arguments supports the conspiracy theory. We correctly realize that the longer the chain of reasoning, the less likely the conclusion is true. The chance of error grows with the amount of information. Lewis’s argument takes this intuition to its limit: unless uncertainties are grounded in certainties, no claim is ever rationally acceptable.
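The arithmetic behind this intuition can be sketched with a toy calculation (this is an illustration of the driving idea, not Lewis’s own formal argument, and it assumes for simplicity that each inferential link is probabilistically independent of the others): if each link in a chain of reasoning confers probability p on the next claim, the support transmitted to the conclusion after n links is p raised to the power n, which shrinks toward zero as the chain lengthens.

```python
def chained_support(p, n):
    """Toy model: probability passed to the conclusion of a chain of n
    inferential links, each of strength p, treated as independent."""
    return p ** n

# Even with strong individual links (p = 0.9), support decays quickly
# as the chain grows, echoing Lewis's worry about ungrounded regresses.
for n in (1, 5, 10, 50):
    print(n, round(chained_support(0.9, n), 4))
```

On this toy model, only a chain anchored in a certainty (p = 1) avoids the decay, which is the limit Lewis’s argument gestures toward.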
Let us examine several arguments against strong foundationalism. The most repeated argument against strong foundationalism is that its foundations are inadequate for a philosophical reconstruction of knowledge. We take ourselves to know much about the world around us, from mundane facts about our immediate surroundings to more exotic facts about the far reaches of the universe. Yet if the basic material for this reconstruction is restricted to facts about an individual’s own mind, it is nearly impossible to figure out how we can get back to our ordinary picture of the world. In this connection strong foundationalists face an inherent tension between the quest for epistemic security and the hope for suitable content to reconstruct commonsense. Few strong foundationalists have been able to find a suitable balance between these competing demands. Some philosophers with a more metaphysical bent aimed to reduce each statement about the material world to a logical construction of statements about an individual’s own sense experience. This project is known as phenomenalism. The phenomenalist’s guiding idea was that statements about the physical world were really complex statements about sensations. If this guiding idea could be worked out, then the strong foundationalist would have a clear conception of how the “commonsense” picture of the world could be justified. However, this guiding idea could never be worked out. See, for instance, Roderick Chisholm’s (1948) article.
Another argument against strong foundationalism is David Armstrong’s ‘distinct existence’ argument ((1968), 106-7). Armstrong argues that there is a difference between an awareness of X and X, where X is some mental state. For instance, there is a difference between being in pain and awareness of being in pain. As long as awareness of X is distinct from X, Armstrong argues that it is possible for one to seemingly be aware of X without X actually occurring. For instance, an intense pain that gradually fades away can lead to a moment in which one has a false awareness of being in pain. Consequently, the thought that one can enjoy an infallible awareness of some mental state is mistaken.
A more recent argument against strong foundationalism is Timothy Williamson’s anti-luminosity argument (see Williamson (2000)). Williamson does not talk about foundationalism but rather about the ongoing temptation in philosophy to postulate a realm of luminous truths, truths that shine so brightly that they are always open to our view if we carefully consider the matter. Even though Williamson doesn’t mention foundationalism, his argument clearly applies to the strong foundationalist. Williamson’s actual argument is intricate and we cannot go into it in much detail. The basic idea behind the argument is that appearance states (for example, its seeming as if there is a red item before you) admit of a range of similar cases. Think of color samples. There is a string of color samples from red to orange in which each shade is very similar to the next. If appearance states genuinely provided certainty, indubitability, or the like, then one should always be able to tell what state one was in. But some cases are so similar that one might make a mistake. Thus, because appearance states ebb and flow in this way, they cannot provide certainty, indubitability, or the like. There is a burgeoning discussion of the anti-luminosity argument; see Fumerton (2009) for a strong foundationalist response and Meeker & Poston (2010) for a recent discussion and references.
Prior to 1975 foundationalism was largely identified with strong foundationalism. Critics of foundationalism attacked the claims that basic beliefs are infallible, incorrigible, or indubitable. However, around this time there was a growing recognition that foundationalism was compatible with basic beliefs that lacked these epistemically exalted properties. William Alston (1976a; 1976b), C.F. Delaney (1976), and Mark Pastin (1975a; 1975b) all argued that a foundationalist epistemology merely required that the basic beliefs have a level of positive epistemic status independent of warranting relations from other beliefs. In light of this weaker form of foundationalism the attacks against infallibility, incorrigibility, or indubitability did not touch the core of a foundationalist epistemology.
William Alston probably did the most to rehabilitate foundationalism. Alston provides several interrelated distinctions that illustrate the limited appeal of certain arguments against strong foundationalism and also display the attractiveness of modest foundationalism. The first distinction Alston drew was between epistemic beliefs and non-epistemic beliefs (see 1976a). Epistemic beliefs are beliefs whose content contains an epistemic concept such as knowledge or justification, whereas a non-epistemic belief’s content does not contain an epistemic concept. The belief that there is a red circle before me is not an epistemic belief because its content does not contain any epistemic concepts. However, the belief that I am justified in believing that there is a red circle before me is an epistemic belief on account of the epistemic concept justified figuring in its content. Alston observes that prominent arguments against foundationalism tend to run together these two kinds of belief. For instance, an argument against foundationalism might require that to be justified in believing that p one must justifiedly believe that one is justified in believing that p. That is, the argument against foundationalism assumes that epistemic beliefs are required for the justification of non-epistemic beliefs. As Alston sees it, once these two types of belief are clearly separated we should be suspicious of any argument that requires epistemic beliefs for the justification of non-epistemic beliefs (for details see (1976a) and (1976b)).
A closely related distinction for Alston is that between the state of being justified and the activity of exhibiting one’s justification. Alston argues in a like manner that prominent objections to foundationalism conflate these two notions. The state of being justified does not imply that one can exhibit one’s justification. Reflection on actual examples supports Alston’s claim. Grandma may be justified in believing that she has hands without being in a position to exhibit her justification. Timmy is justified in believing that he has existed for more than five minutes, but he can do very little to demonstrate his justification. Therefore, arguments against foundationalism should not assume that justification requires the ability to exhibit one’s justification.
A final, closely allied, distinction is between a justification regress argument and a showing regress argument. Alston argues that the standard regress argument is a regress of justification that points to the necessity of immediately justified beliefs. This argument is distinct from a showing regress in which the aim is to demonstrate that one is justified in believing p. This showing regress requires that one prove, for each belief one has, that one is justified in holding it. Given Alston’s earlier distinctions, this implies that one must have epistemic beliefs for each non-epistemic belief, and further it conflates the state of being justified with the activity of exhibiting one’s justification.
With these three distinctions in place, and the further claim that immediately justified beliefs may be fallible, revisable, and dubitable, Alston makes quick work of the standard objections to strong foundationalism. These arguments fail to apply to modest foundationalism and, further, have no force against the claim that some beliefs have a strong presumption of truth. Reflection on actual cases supports Alston’s claim. Grandma’s belief that she has hands might be false and revised in light of future evidence. Perhaps Grandma has been fitted with a prosthetic device that looks and functions just like a normal hand. Nonetheless, when she looks and appears to see a hand, she is fully justified in believing that she has hands.
Alston’s discussion of modest foundationalism does not mention weaker forms of foundationalism. Further, Alston is not clear on the precise epistemic status of these foundations. Alston describes the ‘minimal’ form of foundationalism as simply being committed to non-inferentially justified beliefs. However, as we shall shortly see, BonJour distinguishes a modest and a weak form of foundationalism. For purposes of terminological regimentation we shall take ‘modest’ foundationalism to be the claim that the basic beliefs possess knowledge-adequate justification even though these beliefs may be fallible, corrigible, or dubitable. A corollary to modest foundationalism is the thesis that the basic beliefs can serve as premises for additional beliefs. The picture the modest foundationalist offers us, then, is that of knowledge (and justification) as resting on a foundation of propositions whose positive epistemic status is sufficient to support inference to other beliefs but whose positive status may be undermined by further information.
A significant development in modest foundationalism is the rise of reformed epistemology. Reformed epistemology is a view in the epistemology of religious belief which holds that the belief that there is a God can be properly basic. Alvin Plantinga (1983) develops this view. Plantinga holds that an individual may rationally believe that there is a God even though the individual does not possess sufficient evidence to convince an agnostic. Furthermore, the individual need not know how to respond to various objections to theism. On Plantinga’s view, as long as the belief is produced in the right way, it is justified. Plantinga has further developed reformed epistemology in his (2000) volume, where he presents the view as a form of externalism that holds that the justification-conferring factors for a belief may include external factors.
Modest foundationalism is not without its critics. Some strong foundationalists argue that modest foundationalism is too modest to provide adequate foundations for empirical knowledge (see McGrew (2003)). Timothy McGrew argues that empirical knowledge must be grounded in certainties. McGrew deploys an argument similar to C.I. Lewis’s argument that probabilities require certainties. McGrew argues that every statement that has less than unit probability is grounded in some other statement. If the probability that it will rain today is .9, then there must be some additional information that one is taking into account to arrive at this probability. Consequently, if the alleged foundations are merely probable, then they are not really foundations at all. Modest foundationalists disagree. They hold that some statements may have an intrinsic non-zero probability (see, for instance, Mark Pastin’s response to C.I. Lewis’s argument in Pastin (1975a)).
Weak foundationalism is an interesting form of foundationalism. Laurence BonJour mentions the view as a possible foundationalist position in his (1985) book The Structure of Empirical Knowledge. According to BonJour, the weak foundationalist holds that some non-inferential beliefs are minimally justified, where this justification is not strong enough to satisfy the justification condition on knowledge. Further, this justification is not strong enough to allow the individual beliefs to serve as premises to justify other beliefs (see BonJour (1985), 30). However, because knowledge and inference are fundamental features of our epistemic practices, a natural corollary to weak foundationalism is that coherence among one’s beliefs is required for knowledge-adequate justification and also for one’s beliefs to function as premises for other beliefs. Thus, for the weak foundationalist, coherence has an ineliminable role in knowledge and inference.
This form of foundationalism is a significant departure from the natural stress foundationalists place on the regress argument. Attention to the regress argument directs one back to the ultimate premises of one’s view. If these beliefs are insufficient to license inference to other beliefs, it is difficult to make good sense of a reconstruction of knowledge. At the very least, the reconstruction will not proceed in a step-by-step manner in which one begins with a limited class of beliefs—the basic ones—and then moves to the non-basic ones. If, in addition, coherence is required for the basic beliefs to serve as premises for other beliefs, then this form of weak foundationalism looks very similar to refined forms of coherentism.
Some modest foundationalists maintain that weak foundationalism is inadequate. James Van Cleve contends that weak foundationalism cannot generate adequate justification for one’s beliefs (van Cleve (2005)). Van Cleve presents two arguments for the claim that some beliefs must have a high intrinsic credibility (pp. 173-4). First, while coherence can increase the justification for thinking that one’s ostensible recollections are correct, one must already have significant justification for thinking that one has correctly identified one’s ostensible recollections. That is to say, one must have more than weak justification for thinking one’s apparent memory does report that p, whether or not this apparent memory is true. Apart from the thought that one has strong justification for believing that one’s ostensible memory is as one takes it to be, Van Cleve argues, it is difficult to see how coherence could increase the justification for believing that those apparent memories are true.
The second argument Van Cleve offers comes from Bertrand Russell ((1948), p. 188). Russell observes that one fact makes another probable or improbable only in relation to a law. Therefore, for coherence among certain facts to make another fact probable, one must have sufficient justification for believing a law that connects the facts. Van Cleve explains that we might not require a genuine law but rather an empirical generalization that connects the facts. Nonetheless, Russell’s point is that for coherence to increase the probability of some claim we must have more than weak justification for believing some generalization. The problem for the weak foundationalist is that our justification for believing an empirical generalization depends on memory. Consequently, memory must supply the needed premise in a coherence argument, and it can do this only if memory supplies more than weak justification. In short, the coherence among ostensible memories increases justification only if we have more than weak justification for believing some generalization supplied by memory.
Much of the attention on foundationalism has focused on the nature and existence of basic beliefs. Yet a crucial element of foundationalism is the nature of the inferential relations between basic beliefs and non-basic beliefs. Foundationalists claim that all of one’s non-basic beliefs are justified ultimately by the basic beliefs, but how is this supposed to work? What are the proper conditions for the justification of the non-basic beliefs? The following discusses three approaches to inferential justification: deductivism, strict inductivism, and liberal inductivism.
Deductivists hold that proper philosophical method consists in the construction of deductively valid arguments whose premises are indubitable or self-evident (see remarks by Nozick (1981) and Lycan (1988)). Deductivists travel down the regress in order to locate the epistemic atoms from which they attempt to reconstruct the rest of one’s knowledge by deductive inference. Descartes’ epistemology is often aligned with deductivism. Descartes locates the epistemically basic beliefs in beliefs about the ideas in one’s mind and then deduces from those ideas that a good God exists. Then given that a good God exists, Descartes deduces further that the ideas in his mind must correspond to objects in reality. Therefore, by a careful deductive method, Descartes aims to reconstruct our knowledge of the external world.
Another prominent example of deductivism comes from phenomenalism. As mentioned earlier, phenomenalism is the attempt to analyze statements about physical objects in terms of statements about patterns of sense data. Given this analysis, the phenomenalist can deduce our knowledge of the external world from knowledge of our own sensory states. Whereas Descartes’ deductivism took a theological route through the existence of a good God, the phenomenalist eschews theology and attempts a deductive reconstruction via a metaphysical analysis of statements about the external world. Though this project is a monumental failure, it illustrates a tendency in philosophy to grasp for certainty.
Contemporary philosophers dismiss deductivism as implausible. Deductivism requires strong foundationalism because the ultimate premises must be infallible, indubitable, or incorrigible. However, many philosophers judge that the regress stopping premises need not have these exalted properties. Surely, the thought continues, we know things like I have hands and the world has existed for more than five minutes? Additionally, if one restricts proper inference to deduction then one can never expand upon the information contained in the premises. Deductive inference traces out logical implications of the information contained in the premises. So if the basic premises are limited to facts about one’s sensory states then one can’t go ‘beyond’ those states to facts about the external world, the past, or the future. To accommodate that knowledge we must expand either our premises or our conception of inference. Either direction abandons the deductivist picture of proper philosophical method.
One response to the above challenge for deductivism is to move to modest foundationalism, which allows the basic premises to include beliefs about the external world or the past. However, even this move is inadequate to account for all our knowledge. In addition to knowing particular facts about the external world or the past we know some general truths about the world such as all crows are black. It is implausible that this belief is properly basic. Further, the belief that every observed and unobserved crow is black is not implied by any properly basic belief such as this crow is black. In addition to moving away from a strong foundationalist theory of non-inferential justification, one must abandon deductivism.
To accommodate knowledge of general truths, philosophers must allow for other kinds of inference besides deductive inference. The standard form of non-deductive inference is enumerative induction. Enumerative induction works by listing (that is, enumerating) the relevant observed instances and then concluding, on the basis of a sufficient sample, that all the relevant instances have the target property. Suppose, for instance, one knows that 100 widgets from the Kenosha Widget Factory each have a small k printed on them and that one knows of no counterexamples to this. Given this knowledge, one can infer by enumerative induction that every widget from the Kenosha Widget Factory has a small k printed on it. Significantly, this inference is liable to mislead. Perhaps the widgets one has examined are special in some way that is relevant to the small printed k. For example, the widgets come from an exclusive series of widgets made to celebrate Kafka’s birthday. Even though the inference may mislead, it is still intuitively a good inference. Given a sufficient sample size and no counterexamples, one may infer that the sample is representative of the whole.
The importance of enumerative induction is that it allows one to expand one’s knowledge of the world beyond the foundations. Moreover, enumerative induction is a form of linear inference. The premises of the induction are known or justifiedly believed prior to the conclusion being justifiedly believed. This suggests that enumerative induction is a natural development of the foundationalist conception of knowledge. Knowledge rests on properly basic beliefs and on those other beliefs that can be properly inferred from the basic beliefs by deduction and enumerative induction.
Strict inductivism is motivated by the thought that we have some kind of inferential knowledge of the world that cannot be accommodated by deductive inference from epistemically basic beliefs. A fairly recent debate has arisen over the merits of strict inductivism. Some philosophers have argued that there are other forms of non-deductive inference that do not fit the model of enumerative induction. C.S. Peirce describes a form of inference called “abduction” or “inference to the best explanation.” This form of inference appeals to explanatory considerations to justify belief. One infers, for example, that two students copied answers from a third because this is the best explanation of the available data—they each make the same mistakes and the two sat in view of the third. Alternatively, in a more theoretical context, one infers that there are very small unobservable particles because this is the best explanation of Brownian motion. Let us call ‘liberal inductivism’ any view that accepts the legitimacy of a form of inference to the best explanation that is distinct from enumerative induction. For a defense of liberal inductivism, see Gilbert Harman’s classic (1965) paper. Harman defends a strong version of liberal inductivism according to which enumerative induction is just a disguised form of inference to the best explanation.
A crucial task for liberal inductivists is to clarify the criteria used to evaluate explanations. What makes one hypothesis a better explanation than another? A standard answer is that hypotheses are rated according to their simplicity, testability, scope, fruitfulness, and conservativeness. The simplicity of a hypothesis is a matter of how many entities, properties, or laws it postulates. The theory that the streets are wet because it rained last night is simpler than the theory that the streets are wet because there was a massive water balloon fight between the septuagenarians and octogenarians last night. A hypothesis’s testability is a matter of how readily it can be determined to be true or false. Testable hypotheses are more favorable because they can easily be put to the test, and when they survive the test they receive confirmation. The scope of a hypothesis is a matter of how much data the hypothesis covers. If two competing hypotheses both entail the fall of the American dollar but one of them also entails the fact that the yen rose, that hypothesis has greater scope. The fruitfulness of a hypothesis is a matter of how well it can be implemented in new research projects. Darwin’s theory of the origin of species has tremendous fruitfulness because, for one, it opened up the study of molecular genetics. Finally, the conservativeness of a hypothesis is a matter of its fit with our previously accepted theories and beliefs.
The liberal inductivist points to the alleged fact that many of our commonsense judgments about what exists are guided by inference to the best explanation. If, for instance, we hear scratching in the walls and witness the disappearance of cheese, we infer that there are mice in the wainscoting. As the liberal inductivist sees it, this amounts to a primitive use of inference to the best explanation. The mice hypothesis is relatively simple, testable, and conservative.
The epistemological payoff for accepting the legitimacy of inference to the best explanation is significant. This form of inference is ideally suited for dealing with under-determination cases, cases in which one’s evidence for a hypothesis is compatible with its falsity. For instance, the evidence we possess for believing that the theory of general relativity is correct is compatible with the falsity of that theory. Nonetheless, we judge that we are rational in believing that general relativity is true based on the available evidence. The theory of general relativity is the best available explanation of the data. Similarly, epistemological under-determination arguments focus on the fact that the perceptual data we possess is compatible with the falsity of our commonsense beliefs. If a brain-in-the-vat scenario obtained, then one would have all the same sensory states and still believe that, for example, one was seated at a desk. Nevertheless, the truth of our commonsense beliefs is the best available explanation of the data of sense. Therefore, our commonsense beliefs meet the justification condition for knowledge. See Jonathan Vogel (1990) for a response to skepticism along these lines and Richard Fumerton (1992) for a contrasting perspective.
Liberal inductivism is not without its detractors. Richard Fumerton argues that every acceptable inductive inference is either a straightforward case of induction or a combination of straightforward induction and deduction. Fumerton focuses on paradigm cases of alleged inference to the best explanation and argues that these cases are enthymemes (that is, arguments with suppressed premises). He considers a case in which someone infers that a person walked recently on the beach from the evidence that there are footprints on the beach and that if a person walked recently on the beach there would be footprints on the beach. Fumerton observes that this inference fits into the standard pattern of inference to the best explanation. However, he then argues that the acceptability of this inference depends on our justification for believing that in the vast majority of cases footprints are produced by people. Fumerton thus claims that this paradigmatic case of inference to the best explanation is really a disguised form of inference to a particular: the vast majority of footprints are produced by persons; there are footprints on the beach; therefore, a person walked on the beach recently. The debate over the nature and legitimacy of inference to the best explanation is an active and exciting area of research. For an excellent discussion and defense of inference to the best explanation, see Lipton (2004).
There are non-trivial connections between a foundationalist theory of inference and a theory of concepts. This is one of the points at which epistemology meets the philosophy of mind. Both deductivists and strict inductivists tend to accept a thesis about the origin of our concepts: the thesis of concept empiricism, according to which all of our concepts derive from experience. Following Locke and Hume, concept empiricists stress that we cannot make sense of any ideas that are not based in experience. Some concept empiricists are strong foundationalists, in which case they work with a very limited range of sensory concepts (for example, C.I. Lewis); others are modest foundationalists, in which case they take concepts of the external world to be disclosed in experience (that is, direct realists). Concept empiricists are opposed to inference to the best explanation because a characteristic feature of inference to the best explanation is inference to an unobservable. As the concept empiricist sees it, this is illegitimate because we lack the ability to think of genuine unobservables. For a sophisticated development of this view see Van Fraassen (1980).
Concept rationalists, by contrast, allow that we possess concepts that are not disclosed in experience. Some concept rationalists, like Descartes, held that some concepts are innate such as the concepts God, substance, or I. Other concept rationalists view inference to the best explanation as a way of forming new concepts. In general concept rationalists do not limit the legitimate forms of inference to deduction and enumerative induction. For a discussion of concept empiricism and rationalism in connection with foundationalism see Timothy McGrew (2003).
Foundationalism is a multifaceted doctrine. A well-worked-out foundationalist view needs to combine, in a natural way, a theory of non-inferential justification with a view of the nature of inference. The nature and legitimacy of non-deductive inference is a relatively recent topic, and there is hope that significant progress will be made on this score. Moreover, given the continued interest in the regress problem, foundationalism proves to be of perennial interest. The issues that drive research on foundationalism are fundamental epistemic questions about the structure and legitimacy of our view of the world.
Email: poston “at” jaguar1.usouthal.edu
University of South Alabama
U. S. A.
Last updated: June 10, 2010 | Originally published: