Epistemic value is a kind of value which attaches to cognitive successes such as true beliefs, justified beliefs, knowledge, and understanding. These kinds of cognitive success do of course often have practical value. True beliefs about local geography help us get to work on time; knowledge of mechanics allows us to build vehicles; understanding of general annual weather patterns helps us to plant our fields at the right time of year to ensure a good harvest. By contrast, false beliefs about the existence of weapons of mass destruction can lead nations to fight hugely expensive wars that are ultimately both destructive and useless.
It is fairly uncontroversial that we tend to care about having various cognitive or epistemic goods, at least for their practical value, and perhaps also for their own sakes as cognitive successes. But this uncontroversial point raises a number of important questions. For example: it’s natural to wonder whether there really are all these different kinds of things (true beliefs, knowledge, and so on) which have distinct value from an epistemic point of view, or whether the value of some of them is reducible to, or depends on, the value of others.
It’s also natural to think that knowledge is more valuable than mere true belief, but it has proven to be no easy task explaining where the extra value of knowledge comes from. Similarly, it’s natural to think that understanding is more valuable than any other epistemic state which falls short of understanding, such as true belief or knowledge. But there is disagreement about what makes understanding the highest epistemic value, or what makes it distinctly valuable, or even whether it is distinctly valuable.
Indeed, it’s no easy task saying just what makes something an epistemic value in the first place. Do epistemic values just exist on their own, independent of other kinds of value? Or are cognitive goods valuable because we care about having them for their own sakes? Or are they valuable because they help us to achieve other things which we care about for their own sakes?
Furthermore, if we accept that there are things which are epistemically valuable, then we might be tempted to accept what is sometimes called the instrumental (or consequentialist, or teleological) conception of epistemic rationality or justification, which is the view that a belief is epistemically rational just in case it appropriately promotes the achievement of an epistemic goal. If this idea is correct, then we need to know which epistemic values to include in the formulation of the epistemic goal, where the “epistemic goal” is an epistemically valuable goal in light of which we evaluate beliefs as epistemically rational or irrational.
Table of Contents
- Claims about Value
- The Value Problem
- Truth and other Epistemic Values
- Instrumentalism and Epistemic Goals
- References and Further Reading
Philosophers working on questions of value typically draw a number of distinctions which are good to keep in mind when thinking about particular kinds of value claims. We’ll look at three particularly useful distinctions before getting into the debates about epistemic value.
The first important distinction to keep in mind is the distinction between instrumental and final value. An object (or state, property, etc.) is instrumentally valuable if and only if it brings about something else that is valuable. An object is finally valuable if and only if it’s valuable for its own sake.
For example, it’s valuable to have a hidden pile of cash in your mattress: when you have a pile of cash readily accessible, you have the means to acquire things which are valuable, such as clothing, food, and so on. And, depending on the kind of person you are, it might give you peace of mind to sleep on a pile of cash. But of course piles of cash aren’t valuable for their own sake – money is obviously only good for what it can get you. So money is only instrumentally valuable.
By contrast, being healthy is something we typically think of as finally valuable. Although being healthy is instrumentally good because it enables us to do other valuable things, we also care about being healthy just because it’s good to be healthy, whether or not our state of health allows us to achieve other goods.
The existence of instrumental value depends on the existence of final value. But it’s possible for final value to exist without any instrumental value. There are possible worlds where there simply are no causal relations at all, for example. In worlds like that, there could exist some final value (for instance, there could be sentient beings who feel great pleasure), but nothing would ever count as a means for bringing about anything else, so there would be no instrumental value. In the actual world, though, it’s pretty clear that there is both instrumental and final value.
The second distinction is between subjective and objective value. Subjective value is a matter of the satisfaction of people’s desires (or the fulfillment of their plans, intentions, etc.). Objective value is a kind of value which doesn’t depend on what people desire, care about, plan to do, etc. (Of course, to say that an object or event O is subjectively valuable for a subject S isn’t to say anything about why S thinks that O is valuable; O can be subjectively valuable in virtue of S’s desiring to bring O about, even if the reason S desires to bring O about is precisely because S thinks that O is objectively valuable. In a case like that, if O is objectively valuable, then it is both objectively and subjectively valuable; but if S is mistaken, and O is not objectively valuable, then O is only subjectively valuable.)
Some philosophers think that there is really only subjective value (and correspondingly, subjective reasons, obligations, and so on); others think that there is only objective value, and that there is value in achieving one’s actual desires only when the desires are themselves objectively good. Still other philosophers allow both kinds of value. Many of the views which we’ll see below can be articulated in terms of either subjective or objective value, and when a view is committed to allowing only one type of value, the context will usually make it clear whether it’s subjective or objective. So, to keep things simple, except when it needs to be made explicit, claims about value will not be qualified as subjective or objective.
Suppose that God declares that it is maximally valuable, always and everywhere, to feed the hungry. Assuming that God is omniscient and doesn’t lie, it necessarily follows that it’s true that it’s maximally valuable, always and everywhere, to feed the hungry. So there’s nothing that could ever outweigh the value of feeding the hungry. This would be an indefeasible kind of value: it is a kind of value that cannot be defeated by any contrary values or considerations.
Most value, however, is defeasible: it can be defeated, either by being overridden by contrary value-considerations, or else by being undermined. For an example of undermining: it’s instrumentally valuable to have a policy of getting an annual physical exam done, because that’s the kind of thing that normally helps catch medical issues before they become serious. But suppose that Sylvia visits the doctor for her annual physical, and it turns out that the doctor discovers that she has a terminal case of cancer, and that she has only days to live. In this case, nothing medically valuable comes about as a result of Sylvia’s policy of getting her physical done. The instrumental medical value which that policy would have enjoyed is undermined by the fact that annual physicals are no longer able to help keep Sylvia in good health.
By contrast, imagine that Roger goes to the emergency room for a dislocated shoulder. The doctors fix his shoulder, but while sitting in the waiting room, Roger inhales droplets from another patient’s sneeze, and he contracts meningitis as a result, which ends up causing him brain damage. In this case, there is some medical value which resulted from Roger’s visit to the emergency room: his shoulder was fixed. But because brain damage is more disvaluable than a fixed shoulder is valuable, the value of having a fixed shoulder is outweighed, or overridden, by the disvalue of having brain damage. So all things considered, Roger’s visit to the emergency room is disvaluable. But at least there is still something positive to be said for it.
In cases where some value V1 of an object O (or action, event, etc.) is overridden by some contrary value V2, but where V1 still at least counts in favour of O’s being valuable, we can say that V1 is a pro tanto kind of value (literally, value “so far as it goes”). So the value of Roger’s fixed shoulder is pro tanto: it counts in favour of the value of his visit to the emergency room, even though it is outweighed by the disvalue of his resulting brain damage. The disvalue of getting brain damage is also pro tanto: although brain damage outweighs a dislocated shoulder, there can be contrary values which would outweigh it. So we can say that, all things considered, Roger’s visit to the emergency room is disvaluable.
Knowledge and true belief both tend to be things we want to have, but all else being equal, we tend to prefer to have knowledge over mere true belief. The Primary Value Problem is the problem of explaining why that should be the case. Many epistemologists think that we should take it as a criterion of adequacy for theories of knowledge that they be able to explain the fact that we prefer knowledge to mere true belief, or at least that they be consistent with a good explanation of why that should be the case.
To illustrate: suppose that Steve believes that the Yankees are a good baseball team, because he thinks that their pinstriped uniforms are so sharp-looking. Steve’s belief is true—the Yankees always field a good team—but he holds his belief for such a terrible reason that we are very reluctant to think of it as an item of knowledge.
Cases like Steve’s motivate the view that knowledge consists of more than just true belief. In order to count as knowledge, a belief has to be well justified in some suitable sense, and it should also meet a suitable Gettier-avoidance condition (see Gettier Problems). But not only do beliefs like Steve’s motivate the view that knowledge consists of more than mere true belief: they also motivate the view that knowledge is better to have than true belief. For suppose that Yolanda knows the Yankees’ stats, and on that basis she believes that the Yankees are a good team. It seems that Yolanda’s belief counts as an item of knowledge. And if we compare Steve and Yolanda, it seems that Yolanda is doing better than Steve; we’d prefer to be in Yolanda’s epistemic position rather than in Steve’s. This seems to indicate that we prefer knowledge over mere true belief.
The challenge of the Primary Value Problem is to explain why that should be the case. Why should we care about whether we have knowledge instead of mere true belief? After all, as is often pointed out, true beliefs seem to bring us the very same practical benefits as knowledge. (Steve would do just as well as Yolanda betting on the Yankees, for example.) Socrates makes this point in the Meno, arguing that if someone wants to get to Larisa, and he has a true belief but not knowledge about which road to take, then he will get to Larisa just as surely as if he had knowledge of which road to take. In response to Socrates’s argument, Meno is moved to wonder why anyone should care about having knowledge instead of mere true belief. (Hence, the Primary Value Problem is sometimes called the Meno Problem.)
So in short, the problem is that mere true beliefs seem to be just as likely as knowledge to guide us well in our actions. But we still seem to have the persistent intuition that any given item of knowledge is more valuable than the corresponding item of mere true belief. The challenge is to explain this intuition. Strategies for addressing this problem can either try to show that knowledge really is always more valuable than corresponding items of mere true belief, or else they can allow that knowledge is sometimes (or even always) no more valuable than mere true belief. If we adopt the latter kind of response to the problem, it is incumbent on us to explain why we should have the intuition that knowledge is more valuable than mere true belief, in cases where it turns out that knowledge isn’t in fact more valuable. Following Pritchard (2008; 2009), we can call strategies of the first kind vindicating, and we can call strategies of the second kind revisionary.
There isn’t a received view among epistemologists about how we ought to respond to the Primary Value Problem, so the most useful thing to do at this point is to consider a number of the more interesting proposals from the literature, and to look at their problems and prospects.
A very straightforward way to respond to the problem is to deny one of the intuitions on which the problem depends, the intuition that knowledge is distinct from true belief. Meno toys with this idea in the Meno, though Socrates disabuses him of the idea. (Somewhat more recently, Sartwell (1991; 1992) has defended this approach to knowledge.) If knowledge is identical with true belief, then we can simply reject the value problem as resting on a mistaken view of knowledge. If knowledge is true belief, then there’s no discrepancy in value to explain.
The view that knowledge is just true belief is almost universally rejected, however, and with good reason. Cases where subjects have true beliefs but lack knowledge are so easy to construct and so intuitively obvious that identifying knowledge with true belief represents an extreme departure from what most epistemologists and laypeople think of knowledge. Consider once again Steve’s belief that the Yankees are a good baseball team, which he holds because he thinks their pinstriped uniforms are so sharp. It seems like an abuse of language to call Steve’s belief an item of knowledge. At the very least, we should be hesitant to accept such an extreme view until we’ve exhausted all other theoretical options.
Of course it could still be the case that knowledge is no more valuable than mere true belief, even though knowledge is not identical with true belief. But, as we’ve seen, there is a widespread and resilient intuition that knowledge is more valuable than mere true belief (recall, for instance, that we tend to think that Yolanda’s epistemic state is better than Steve’s). If knowledge were identical with true belief, then we would have to take that intuition to be mistaken; but, since we can see that knowledge is more than mere true belief, we can continue looking for an acceptable account which would explain why knowledge is more valuable than mere true belief.
Most attempts to explain why knowledge is more valuable than mere true belief proceed by identifying some condition which must be added to true belief in order to yield knowledge, and then explaining why that further condition is valuable. Socrates’s own view, at least as presented in the Meno, is that knowledge is true opinion plus an account of why the opinion is true (where the account of why it is true is itself already present in the soul; it must only be recalled from memory). So, Socrates proposes, a known true belief will be more stable than a mere true belief, because having an account of why a belief is true helps to keep us from losing it. If you don’t have an account of why a proposition is true, you might easily forget it, or abandon your belief in it when you come across some reason for doubting it. But if you do have an account of why a proposition is true, you likely have a greater chance of remembering it, and if you come across some reason for doubting it, you’ll have a reason available to you for continuing to believe it.
A worry for this solution is that it seems to be entirely possible for a subject S to have some entirely unsupported beliefs, which do not count as knowledge, but where S clings to these beliefs dogmatically, even in the face of good counterevidence. S’s belief in a case like this can be just as stable as many items of knowledge—indeed, dogmatically held beliefs can even be more stable than knowledge. For if you know that p, then presumably your belief is a response to some sort of good reason for believing that p. But if your belief is a response to good reasons, then you’d likely be inclined to revise your belief that p, if you were to come across some good evidence for thinking that p is false, or for thinking that you didn’t have any good reason for believing that p in the first place. On the other hand, if p is something you cling to dogmatically (contrary evidence be damned), then you’ll likely retain p even when you get good reason for doubting it. So, even though having stable true beliefs is no doubt a good thing, knowledge isn’t always more stable than mere true belief, and an appeal to stability does not seem to give us an adequate explanation of the extra value of knowledge over mere true belief.
One way to defend the stability response to the value problem is to hold that knowledge is more stable than mere true beliefs, but only for people whose cognitive faculties are in good working order, and to deny that the cognitive faculties of people who cling dogmatically to evidentially unsupported beliefs are in good working order (Williamson 2000). This solution invites the objection, however, that our cognitive faculties are not all geared to the production of true beliefs. Some cognitive faculties are geared towards ensuring our survival, and the outputs of these latter faculties might be held very firmly even if they are not well supported by evidence. For example, there could be subjects with cognitive mechanisms which take as input sudden sounds and generate as output the belief that there’s a predator nearby. Mechanisms like these might very well generate a strong conviction that there’s a predator nearby. Such mechanisms would likely yield many more false positive predator-identifications than they would yield correct identifications, but their poor true-to-false output-ratio doesn’t prevent mechanisms of this kind from having a very high survival value, as long as they do correctly identify predators when they are present. So it’s not really clear that knowledge is more stable than mere true beliefs, even for mere true beliefs which have been produced by cognitive systems which are in good working order, because it’s possible for beliefs to be evidentially unsupported, and very stable, and produced by properly functioning cognitive faculties, all at the same time. (See Kvanvig 2003, ch. 1, for a critical discussion of Williamson’s appeal to stability.)
Virtue epistemologists are, roughly, those who think that knowledge is true belief which is the product of intellectual virtues. (See Virtue Epistemology.) They seem to have a plausible solution to the Primary (and, as we’ll see, to the Secondary) Value Problem.
According to a prominent strand of virtue epistemology, knowledge is true belief for which we give the subject credit (Greco 2003), or true belief which is a cognitive success because of the subject’s exercise of her relevant cognitive ability (Greco 2008; Sosa 2007). For example (to adapt Sosa’s analogy): an archer, in firing at a target, might shoot well or poorly. If she shoots poorly but hits the target anyway (say, she takes aim very poorly but sneezes at the moment of firing, and luckily happens to hit the target), her shot doesn’t display skill, and her hitting the target doesn’t reflect well on her. If she shoots well, on the other hand, then she might hit the target or miss the target. If she shoots well and misses the target, we will still credit her with having made a good shot, because her shot manifests skill. If she shoots well and hits the target, then we will credit her success to her having made a good shot—unless there were intervening factors which made it the case that the shot hit the mark just as a matter of luck. For example: if a trickster moves the target while the arrow is in mid-flight, but a sudden gust of wind moves the arrow to the target’s new location, then in spite of the fact that the archer makes a good shot, and she hits the target, she doesn’t hit the target because she made a good shot. She was just lucky, even though she was skillful. But when strange factors don’t intervene, and the archer hits the target because she made a good shot, we give her credit for having hit the target, since we think that performances which succeed because they are competent are the best kind of performances. 
And, similarly, when it comes to belief-formation, we give people credit for getting things right as a result of the exercise of their intellectual virtues: we think it’s an achievement to get things right as the result of one’s cognitive competence, and so we tend to think that there’s a sense in which people who get things right because of their intellectual competence deserve credit for getting things right.
According to another strand of virtue epistemology (Zagzebski 2003), we don’t think of knowledge as true belief which meets some further condition. Rather, we should think of knowledge as a state which a subject can be in, which involves having the propositional attitude of belief, but which also includes the motivations for which the subject has the belief. Virtuous motivations might include things like diligence, integrity, and a love of truth. And, just as we think that, in ethics, virtuous motives make actions better (saving a drowning child because you don’t want children to suffer and die is better than saving a drowning child because you don’t want to have to give testimony to the police, for example), we should also think that the state of believing because of a virtuous motive is better than believing for some other reason.
Some concerns have been raised for both strands of virtue epistemology, however. Briefly, a worry for the Sosa/Greco type of virtue epistemology is that (as we’ll see in section 3) knowledge might not after all in general be an achievement—it might be something we can come by in a relatively easy or even lazy fashion. A worry for Zagzebski’s type of virtue epistemology is that there seem to be possible cases where subjects can acquire knowledge even though they lack virtuous intellectual motives. Indeed, it seems possible to acquire knowledge even if one has only the darkest of motives: if a torturer is motivated by the desire to waterboard people until they go insane, for example, he can thereby gain knowledge of how long it takes to break a person by waterboarding.
Still, the idea that knowledge can be analyzed as true belief which is somehow virtuously produced and creditable to the agent seems to be worth pursuing. Because the virtue-approach seems to be able to handle most of the Gettier-style problems which plague previous analyses of knowledge, and because it can provide what is on the face of it a plausible solution to the Primary Value Problem, virtue epistemology represents a promising research program, and its problems and prospects deserve careful exploration.
The Primary Value Problem is sometimes thought to be especially bad for reliabilists about knowledge. Reliabilism in its simplest form is the view that beliefs are justified if and only if they’re produced by reliable processes, and they count as knowledge if and only if they’re produced by reliable processes and they’re not Gettiered. (See, for example, Goldman and Olsson (2009, p. 22), as well as the article on Reliabilism.) The apparent trouble for reliabilism is that reliability only seems to be valuable as a means to truth—so, in any given case where we have a true belief, it’s not clear that the reliability of the process which produced the belief is able to add anything to the value that the belief already has in virtue of being true. The value which true beliefs have in virtue of being true completely “swamps” the value of the reliability of their source, if reliability is only valuable as a means to truth. (Hence the Primary Value Problem for reliabilism has often been called the “swamping problem.”)
To illustrate with an example borrowed from Zagzebski (2003): the value of a cup of coffee seems to be a matter of how good the coffee tastes. And we value reliable coffeemakers because we value good cups of coffee. But when it comes to the value of any particular cup of coffee, its value is just a matter of how good it tastes; whether the coffee was produced by a reliable coffeemaker doesn’t add to or detract from the value of the cup of coffee. Similarly, we value true beliefs, and we value reliable belief-forming processes because we care about getting true beliefs. So we have reason to prefer reliable processes over unreliable ones. But whether a particular belief was reliably or unreliably produced doesn’t seem to add to or detract from the value of the belief itself.
Responses have been offered on behalf of reliabilism. Brogaard (2006) points out that critics of reliabilism seem to have been presupposing a Moorean conception of value, according to which the value of an object (or state, condition, and so forth) is entirely a function of the internal properties of the object. (The value of the cup of coffee is determined entirely by its internal properties, not by the reliability of its production, or by the fineness of a particular morning when you enjoy your coffee.) But this is a mistaken view about value in general. External features can add value to objects. We value a genuine Picasso painting more than a flawless counterfeit, for example. If that’s correct, then extra value can be conferred on an object, if it has a valuable source, and perhaps the value of reliable processes can transfer to the beliefs which they produce.
(That is a negative response to the value problem for reliabilism, in the sense that its aim is to show that critics of reliabilism haven’t shown that reliabilists can’t account for the value of knowledge.)
Goldman and Olsson (2009) offer two further responses on behalf of reliabilism. Their first response is that we can hold that true belief is always valuable, and that reliability is only valuable as a means to true belief, but that it is still more valuable to have knowledge (understood as reliabilists understand knowledge, that is, as reliably-produced and unGettiered true belief) than a mere true belief. For if S knows that p in circumstances C, then S has formed the belief that p through some reliable process in C. So S has some reliable process available to her, and it generated a belief in C. This makes it more likely that S will have a reliable process available to her in future similar circumstances, than it would be if S had an unreliably produced true belief in C. So, when we’re thinking about how valuable it is to be in circumstances C, it seems to be better for S to be in C if S has knowledge in C than if she has mere true belief in C, because having knowledge in C makes it likelier that she’ll get more true beliefs in future similar circumstances.
This response, Goldman and Olsson think, accounts for the extra value which knowledge has in many cases. But there will still be cases where S’s having knowledge in C doesn’t make it likelier that she’ll get more true beliefs in the future. For example, C might be a unique set of circumstances which is unlikely to come up again. Or S might be employing a reliable process which is available to her in C, but which is likely to become unavailable to her very soon. Or S might be on her deathbed. So this response isn’t a completely vindicating solution to the value problem, and it’s incumbent on Goldman and Olsson to explain why we should tend to think that knowledge is more valuable than mere true belief in those cases when it’s not.
So Goldman and Olsson offer a second response to the Primary Value Problem: when it comes to our intuitions about the value of knowledge, they argue, it’s plausible that these intuitions began long ago with the recognition that true belief is always valuable in some sense to have, and that knowledge is usually valuable because it involves both true belief and the probability of getting more true beliefs; and then, over time, we have come to simply think that knowledge is valuable, even in cases when having knowledge doesn’t make it more probable that the subject will get more true beliefs in the future.
An approach similar to Goldman and Olsson’s is to consider the values of contingent features of knowledge, rather than the value of its necessary and/or sufficient conditions. Although we might think that the natural way to account for the value of some state or condition S1, which is composed of other states or conditions S2-Sn, is in terms of the values of S2-Sn, perhaps S1 can be valuable in virtue of some other conditions which typically (but not always) accompany S1, or in terms of some valuable result which S1 is typically (but not always) able to get us. For example: it’s normal to think that air travel is valuable, because it typically enables people to cover great distances safely and quickly. Of course, sometimes airplanes are diverted, and slow travellers down, and sometimes airplanes crash. But even so, we might continue to think, air travel is typically a valuable thing, because in ordinary cases, it gets us something good.
Similarly, we might think that knowledge is valuable because we need to rely on the information which people give us in order to accomplish just about anything in this life, and being able to identify people as having knowledge means being able to rely on them as informants. And we also might think that there’s value in being able to track whether our own beliefs are held on the basis of good reasons, and we typically have good reasons available to us for believing p when we know that p. Of course we aren’t always in a position to identify when other people have knowledge, and if externalists about knowledge are right, then we don’t always have good reasons available to us when we have knowledge ourselves. Nevertheless, we can typically identify people as knowers, and we can typically identify good reasons for the things we know. These things are valuable, so they make typical cases of knowledge valuable, too. (See Craig (1990) for an account of the value of knowledge in terms of the characteristic function of knowledge-attribution. Jones (1997) further develops the view.)
Like Goldman and Olsson’s responses, this strategy for responding to the value problem doesn’t give us an account of why knowledge is always more valuable than mere true belief. For those who think that knowledge is always preferable to mere true belief, and who therefore seek a vindicating solution to the Primary Value Problem, this strategy will not be satisfactory. But for those who are willing to accept a somewhat revisionary response, according to which knowledge is only usually or characteristically preferable to mere true belief, this strategy seems promising.
Suppose you’ve applied for a new position in your company, but your boss tells you that your co-worker Jones is going to get the job. Frustrated, you glance over at Jones, and see that he has ten coins on his desk, and you then watch him put the coins in his pocket. So you form the belief that the person who will get the job has at least ten coins in his or her pocket (call this belief “B”). But it turns out that your boss was just toying with you; he just wanted to see how you would react to bad news. He’s going to give you the job. And it turns out that you also have at least ten coins in your pocket.
So you have a justified true belief, B, which has been Gettiered. In cases like this, once you’ve found out that you were Gettiered, it’s natural to react with annoyance or intellectual embarrassment: even though you got things right (about the coins, though not about who would get the job), and even though you had good reason to think you had things right, you were just lucky in getting things right.
If this is correct—if we do tend to prefer to have knowledge over Gettiered justified true beliefs—then this suggests that there’s a second value problem to be addressed. We seem to prefer having knowledge over having any proper subset of the parts of knowledge. But why should that be the case? What value is added to justified true beliefs, when they meet a suitable anti-Gettier condition?
An initial response is to deny that knowledge is more valuable than mere justified true belief. Even when we have true beliefs, and good reasons for them, it can of course turn out that we have been Gettiered, that is, that we are just lucky in having true beliefs. When we inquire into whether p, we want to get to the truth regarding p, and we want to do so in a rationally defensible way. If it turns out that we get to the truth in a rationally defensible way, but strange features of the case undermine our claim to knowing the truth about p, perhaps it just doesn’t matter that we don’t have knowledge.
Few epistemologists have defended this view, however (though Kaplan (1985) is an exception). We do after all find it irritating when we find out that we’ve been Gettiered; and when we are considering corresponding cases of knowledge and of Gettiered justified true belief, we tend to think that the subject who has knowledge is better off than the subject who is Gettiered. Of course we might be mistaken; there might be nothing better in knowledge than in mere justified true belief. But the presumption seems to be that knowledge is more valuable, and we should try to explain why that is so. Skepticism about the extra value of knowledge over mere justified true belief might be acceptable if we fail to find an adequate explanation, but we shouldn’t accept skepticism before searching for a good explanation.
We saw above that some virtue epistemologists think of knowledge in terms of the achievement of true beliefs as a result of the exercise of cognitive skills or virtues. And we do generally seem to value success that results from our efforts and skills; that is, we value success that’s been achieved rather than stumbled into (see, for example, Sosa (2003; 2007) and Pritchard (2009)). So, because we have a cognitive aim of getting to the truth, and we can achieve that aim either as a result of luck or as a result of our skillful cognitive performance, it seems that the value of achieving our aims as a result of a skillful performance can help explain why knowledge is more valuable than mere true belief.
That line of thought works just as well as a response to the Secondary Value Problem as to the Primary Value Problem. For in a Gettier case, the subject has a justified true belief, but it’s just as a result of luck that she arrived at a true belief rather than a false one. By contrast, when a subject arrives at a true belief because she has exercised a cognitive virtue, it’s plausible to think that it’s not just lucky that she’s arrived at a true belief; she gets credit for succeeding in the aim of getting to the truth as a result of her skillful performance. So cases of knowledge do, but Gettier cases do not, exemplify the value of succeeding in achieving our aims as a result of a skillful performance.
Williamson (2000) is at the forefront of “knowledge-first” epistemology. This is the approach to epistemology that does not attempt to analyze knowledge in terms of other more basic concepts; rather, it takes knowledge to be fundamental, and it analyzes other concepts in terms of knowledge. Of course knowledge-first epistemologists still want to say informative things about what knowledge is, but they don’t accept the traditional idea that knowledge can be analyzed in terms of informative necessary and sufficient conditions.
Williamson thinks that knowledge is the most general factive mental state. At least some mental states have propositional contents (the belief that p has the content p; the desire that p has the content p; and so on). Factive mental states are mental states which you can only be in when their contents are true. Belief isn’t a factive mental state, because you can believe p even if p is false. By contrast, knowledge is a factive mental state, because you can only know that p if p is true. Other factive mental states include seeing that (for example, you can only see that the sun is up, if the sun really is up) and remembering that. Knowledge is the most general factive mental state, for Williamson, because any time you are in a factive mental state with the content that p, you must know that p. If you see that it’s raining outside, then you know that it’s raining outside. Otherwise—say, if you have a mere true belief that it’s raining, or if your true belief that it’s raining is justified but Gettiered—you only seem to see that it’s raining outside.
Williamson’s view is of course controversial. But if he is right, and knowledge really is the most general factive mental state, then it is easy enough to explain the value of knowledge over mere justified true belief. We care, for one thing, about having true beliefs, and we dislike being duped. We would especially dislike it if we found out that we were victims of widespread deception. (Imagine your outrage and intellectual embarrassment, for example, if you were to discover that you were living in your own version of The Truman Show!) But not only that: we also care about being in the mental states we think we’re in (we care about really remembering what we think we remember, for example), and we would certainly dislike being duped about our own mental states, including when we take ourselves to be in factive mental states. So if having a justified true belief that p which is Gettiered prevents us from being in the factive mental states we think we’re in, but having knowledge enables us to be in these factive mental states, then it seems that we should care about having knowledge.
Finally, internalists about knowledge have an interesting response to offer to the Secondary Value Problem. Internalism about knowledge is the view that a necessary condition on S’s knowing that p is that S must have good reasons available for believing that p (where this is usually taken to mean that S must be able to become aware of those reasons, just by reflecting on what reasons she has). Internalists will normally hold that you have to have good reasons available to you, and you have to hold your belief on the basis of those reasons, in order to have knowledge.
Brogaard (2006) argues that the fact that beliefs must be held on the basis of good reasons gives the internalist her answer to the Secondary Value Problem. Roughly, the idea is that, if you hold the belief that p on the basis of a reason q, then you must believe (at least dispositionally) that in your current circumstances, q is a reliable indicator of p’s truth. So you have a first-order belief, p, and you have a reason for believing p, which is q, and you have a second-order belief, r, to the effect that q is a reliable indicator of p’s truth. And when your belief that p counts as knowledge, your reason q must in fact be a reliable indicator of p’s truth in your current circumstances—which means that your second-order belief r is true. So, assuming that the extra-belief requirement for basing beliefs on reasons is correct, it follows that when you have knowledge, you also have a correct picture of how things stand more broadly speaking.
When you are in a Gettier situation, by contrast, there is some feature of the situation which makes it the case that your belief that q is not a reliable indicator of the truth of p. That means that your second-order belief r is false. So, even though you’ve got a true first-order belief, you have an incorrect picture of how things stand more broadly speaking. Assuming that it’s better to have a correct picture of how things stand, including a correct picture of what reasons are reliable indicators of the truth of our beliefs, knowledge understood in an internalist sense is more valuable than Gettiered justified true belief.
Pritchard (2007; 2010) suggests that there’s a third value problem to address (compare also Zagzebski 2003). We often think of knowledge as distinctively valuable—that it’s a valuable kind of thing to have, and that its value isn’t the same kind of value as (for example) the value of true belief. If that’s correct, then simply identifying a kind of value which true beliefs have, and showing that knowledge has that same kind of value but to a greater degree, does not yield a satisfactory solution to this value problem.
By analogy, think of two distinct kinds of value: moral and financial. Suppose that both murders and mediocre investments are typically financially disvaluable, and suppose that murders are typically more financially disvaluable than mediocre investments. Even if we understand the greater financial disvalue of murders over the financial disvalue of mediocre investments, if we do not also understand that murders are disvaluable in a distinctively moral sense, then we will fail to grasp something fundamental about the disvalue of murder.
If knowledge is valuable in a way that is distinct from the way that true beliefs are valuable, then the kind of solution to the Primary Value Problem offered by Goldman and Olsson which we saw above isn’t satisfactory, because the extra value they identify is just the extra value of having more true beliefs. By contrast, as Pritchard suggests, if knowledge represents a cognitive achievement, in the way that virtue theorists often suggest, then because we do seem to think of achievements as being valuable just insofar as they are achievements (we value the overcoming of obstacles, and we value success which is attributable to a subject’s exercise of her skills or abilities), it follows that thinking of knowledge as an achievement provides a way to solve the Tertiary Value Problem. (Though, as we’ll see in section 3, Pritchard doesn’t think that knowledge in general represents an achievement.)
However, it’s not entirely clear that the Tertiary Value Problem is a real problem which needs to be addressed. (Haddock (2010) explicitly denies it, and Carter, Jarvis, and Rubin (2013) also register a certain skepticism before going on to argue that if there is a Tertiary Value Problem, it’s easy to solve.) Certainly most epistemologists who have attempted to solve the value problem have not worried about whether the extra value they were identifying in knowledge was different in kind from the value of mere true belief, or of mere justified true belief. Perhaps it is fair to say that it would be an interesting result if knowledge turned out to have a distinctive kind of value; maybe that would even be a mark in favour of an epistemological theory which had that result. But the consensus seems to be that, if we can identify extra value in knowledge, then that is enough to solve the value problem, even if the extra value is just a greater degree of the same kind of value which we find in the proper parts of knowledge such as true belief.
We have been considering ways to try to explain why knowledge is more valuable than its proper parts. More generally though, we might wonder what sorts of things are epistemically valuable, and just what makes something an epistemic value in the first place.
The most natural way of proceeding is simply to identify some state which epistemologists have traditionally been interested in, or which seems like it could or should be important for a flourishing cognitive life—such as the states of having knowledge, true belief, justification, wisdom, empirically adequate theories, and so on—and try to give some reason for thinking that it’s valuable to be in such a state.
Epistemologists who work on epistemic value usually want to explain either why true beliefs are valuable, or why knowledge is valuable, or both. Some also seek to explain the value of other states, such as understanding, and some seek to show that true beliefs and knowledge are not always as valuable as we might think.
Sustained arguments for the value of knowledge are easy to come by; the foregoing discussion of the Value Problem was a short survey of such arguments. Sustained arguments for the value of true belief, on the other hand, are not quite so plentiful. But it is especially important that we be able to show that true belief is valuable, if we are going to allow true belief to play a central role in epistemological theories. It is, after all, very easy to come up with apparently trivial true propositions, which no one is or ever will be interested in. Truths about how many grains of sand there are on some random beach, for example, seem to be entirely uninteresting. Piller suggests that “the string of letters we get, when we combine the third letters of the first ten passengers’ family names who fly on FR2462 to Bydgoszcz no more than seventeen weeks after their birthday with untied shoe laces” is an uninteresting truth, which no one would care about (2009, p.415). (Though see Treanor (2014) for an objection to arguments that proceed by comparing what appear to be more and less interesting truths.) What is perhaps even worse, it is easy to construct cases where having a true belief is positively disvaluable. For example, if someone tells you how a movie will end before you see it, you will probably not enjoy the movie very much when you do get around to seeing it (Kelly 2003). Now, maybe these apparently trivial or disvaluable truths are after all at least a little bit valuable, in an epistemic sense—but on the face of them, these truths don’t seem valuable, so the claim that they are valuable needs to be argued for. We’ll see some such arguments shortly.
Keep in mind that although epistemologists often talk about the value of having true beliefs, this is usually taken to be short for the value of having true beliefs and avoiding false beliefs. These two aspects of what is usually referred to as a truth-goal are clearly related, but they are distinct, and sometimes they can pull in opposite directions. An extreme desire to avoid false beliefs can lead us to adopt some form of skepticism, for example, where we abandon all or nearly all of our beliefs, if we’re not careful. But in giving up all of our beliefs, we do not only avoid having false beliefs; we also lose all of the true beliefs we would have had. When the goals of truth-achievement and error-avoidance pull in opposite directions, we need to weigh the importance of getting true beliefs against the importance of avoiding false ones, and decide how much epistemic risk we’re willing to take on in our body of beliefs (compare James 1949, Riggs 2003).
Still, because the twin goals of achieving true beliefs and avoiding errors are so closely related, and because they are so often counted as a single truth-goal, we can continue to refer to them collectively as a truth-goal. We just need to be careful to keep the twin aspects of the goal in mind.
One argument for thinking that true beliefs are valuable is that without true beliefs, we cannot succeed in any of our projects. Since even the most unambitious of us care about succeeding in a great many things (even making breakfast is a kind of success, which requires a great many true beliefs), we should all think that it’s important to have true beliefs, at least when it comes to subjects that we care about.
An objection to this argument for the value of true beliefs is that, as we’ve already seen, there are many true propositions which seem not to be worth caring about, and some which can be positively harmful. So although true beliefs are good when they can get us things we want, that is not always the case. So this argument doesn’t establish that we should always care about the truth.
A response to this worry is that we will all be faced with new situations in the future, and we will need to have a broad range of true beliefs, and as few false beliefs mixed in with the true ones as we can, in order to have a greater chance of succeeding when such situations come up (Foley 1993, ch.1). So it’s a good idea to try to get as many true beliefs as we can. This line of argument gives us a reason to think that it’s always at least pro tanto valuable to have true beliefs (that is, there’s always something positive to be said for true beliefs, even if that pro tanto value can sometimes be overridden by other considerations).
This is a naturalistically acceptable kind of value for true beliefs to enjoy. Although it doesn’t ground the value of true beliefs in the fact that people always desire to have true beliefs, it does ground their value in their instrumental usefulness for getting us other things which we do in fact desire. The main drawback for this approach, however, is that when someone positively desires not to have a given true belief—say, because it will cause him pain, or prevent him from having an enjoyable experience at the movies—it doesn’t seem like his desires can make it at all valuable for him to have the true belief in question. The idea here was to try to ground the value of truths in their instrumental usefulness, in the way that they are good for getting us what we want. But if there are true beliefs which we know will not be useful in that way (indeed, if there are true beliefs which we know will be harmful to us), then those beliefs don’t seem to have anything to be said in favour of them—which is to say that they aren’t even pro tanto valuable.
Whether we think that this is a serious problem will depend on whether we think that the claim that true beliefs are valuable entails that true beliefs must always have at least pro tanto value. Sometimes epistemologists (for example, White 2007) explicitly claim that true beliefs are not always valuable in any real sense, since we just don’t always care about having them. But, just as money is valuable even though it isn’t something that we always care about having, so too, true beliefs are still valuable, in a hypothetical sense: when we do want to have true beliefs, or when true beliefs are necessary for getting us what we want, they are valuable. So we can always say that they have value; it’s just that the kind of value in question is only hypothetical in nature. (One might worry, however, that “hypothetical” seems to be only a fancy way to say “not real.”)
A similar way to motivate the claim that true beliefs are valuable is to say that there are some things that we morally ought to care about, and we need to have true beliefs in order to achieve those things (Zagzebski 2003; 2009). For example, I ought to care about whether my choices as a consumer contribute to painful and degrading living and working conditions for people who produce what I’m consuming. (Of course I do care about that, but even if I didn’t, surely, I ought to care about it.) But in order to buy responsibly, and avoid supporting corporations that abuse their workers, I need to have true beliefs about the practices of various corporations.
So, since there are things we should care about, and since we need true beliefs to successfully deal with things which we should care about, it follows that we should care about having true beliefs.
This line of argument is unavailable to anyone who wants to avoid positing the existence of objective values which exist independently of what people actually desire or care about, and of course it doesn’t generate any value for true beliefs which aren’t relevant to things we ought to care about. But if there are things which we ought to care about, then it seems correct to say that at least in many cases, true beliefs are valuable, or worth caring about.
Lynch (2004) gives a related argument for the objective value of truth. Although he doesn’t ground the value of true beliefs in things that we morally ought to care about, his central argument is that it’s important to care about the truth for its own sake, because caring for the truth for its own sake is part of what it is to have intellectual integrity, and intellectual integrity is an essential part of a healthy, flourishing life. (He also argues that a concern for the truth for its own sake is essential for a healthy democracy.) Whether this argument gets us the result that all true beliefs are at least pro tanto valuable is still an open question.
Some epistemologists (for example, Plantinga 1993; Bergmann 2006; Graham 2011) invoke the proper functions of our cognitive systems in order to argue for (or to explain) the value of truth, and to explain the connection between truth and justification or warrant. Proper functions are usually given a selected-effects gloss, following Millikan (1984). The basic idea is that an organ or a trait (T), which produces an effect (E), has the production of effects of type E as its proper function just in case the ancestors of T also produced effects of type E, and the fact that they produced effects of type E is part of a correct explanation of why the Ts (or the organisms which have Ts) survived and exist today. For example, hearts have the proper function of pumping blood because hearts were selected for their ability to pump blood—the fact that our ancestors had hearts that pumped blood is part of a correct explanation of why they survived, reproduced, and why we exist today and have hearts that pump blood.
Similarly, the idea goes, we have cognitive systems which have been selected for producing true beliefs. And if that’s right, then our cognitive systems have the proper function of producing true beliefs, which seems to mean that there is always at least some value in having true beliefs.
It’s not clear whether selected-effect functions are in fact normative, however (in the sense of being able by themselves to generate reasons or value). Millikan, at least, thought that proper functions are normative. Others disagree (for example, Godfrey-Smith 1998). Whether we can accept this line of argument for the value of true beliefs will depend on whether we think that selected-effects functions are capable of generating value by themselves, or whether they only generate value when taken in a broader context which includes reference to the desires and the wellbeing of agents.
A further potential worry with the proper-function explanation of the value of true beliefs is that there do in fact seem to be cognitive mechanisms which have been selected for, and which systematically produce, false beliefs. (See Hazlett (2013), for example, who considers cognitive biases such as the self-enhancement bias at considerable length.) Plantinga (1993) suggests that we should distinguish truth-directed cognitive mechanisms from others, and say that it’s only the proper functioning of well-designed, truth-conducive mechanisms that yield warranted beliefs. But if this response works, it’s only because there’s some way to explain why truth is valuable, other than saying that our cognitive mechanisms have been selected for producing true beliefs; otherwise there would be no reason to suggest that it’s only the truth-directed mechanisms that are relevant to warranted and epistemically valuable beliefs.
Many epistemologists don’t think that we need to argue that truth is a valuable thing to have (for example, BonJour 1985, Alston 1985; 2005, Leplin 2009, Sosa 2007). They argue that all we need to do is to assume that there is a standpoint which we take when we are doing epistemology, or when we’re thinking about our cognitive lives, and stipulate that the goal of achieving true beliefs and avoiding errors is definitive of that standpoint. We can simply assume that truth is a real and fundamental epistemic value, and proceed from there.
Proponents of this approach still sometimes argue for the claim that achieving the truth and avoiding error is the fundamental epistemic value. But when they do, their strategy is to assume that there must be some distinctively epistemic value which is fundamental (that is, which orients our theories of justification and knowledge, and which explains why we value other things from an epistemic standpoint), and then to argue that achieving true beliefs does a better job as a fundamental epistemic value than other candidate values do.
The strategy here isn’t to argue that true beliefs are always valuable, all things considered. The strategy is to argue only that true belief is of fundamental value insofar as we are concerned with evaluating beliefs (or belief-forming processes, practices, institutions, and so forth) from an epistemic point of view. True beliefs are indeed sometimes bad to have, all things considered (as when you know how a movie will end), and not everyone always cares about having true beliefs. But enough of us care about having true beliefs in a broad enough range of cases that a critical domain of evaluation has arisen, which takes true belief as its fundamental value.
In support of this picture of epistemology and epistemic value, Sosa (2007) compares epistemology to the critical domain of evaluation which centers on good coffee. That domain takes the production and consumption of good cups of coffee as its fundamental value, and it has a set of evaluative practices in light of that goal. Many people take that goal seriously, and we have enormous institutional structures in place which exist entirely for the purpose of achieving the goal of producing good cups of coffee. But of course there are people who detest coffee, and perhaps coffee isn’t really valuable at all. (Perhaps…) But even so, enough people take the goal of producing good coffee to be valuable that we have generated a critical domain of evaluation centering on the value of producing good coffee, and even people who don’t care about coffee can still recognize good coffee, and they can engage in the practices which go with taking good coffee as a fundamental value of a critical domain. And for Sosa, the value of true belief is to epistemology as the value of good cups of coffee is to the domain of coffee production and evaluation.
One might worry, however, that this sort of move cannot accommodate the apparently non-optional nature of epistemic evaluation. It’s possible to opt out of the practice of making evaluations of products and processes in terms of the way that they promote the goal of producing tasty cups of coffee, but our epistemic practices don’t seem to be optional in that way. Even if I were to foreswear any kind of commitment to the importance of having epistemically justified beliefs, for example, you could appropriately level criticism at me if my beliefs were to go out of sync with my evidence.
An important minority approach to epistemic value and epistemic normativity is a kind of anti-realism, or conventionalism. The idea is that there is no sense in which true beliefs are really valuable, nor is there a sense in which we ought to try to have true beliefs, except insofar as we (as individuals, or as a community) desire to have true beliefs, or we are willing to endorse the value of having true beliefs.
One reason for being anti-realist about epistemic value is that you might be dissatisfied with all of the available attempts to come up with a convincing argument for thinking that truth (or anything else) is something which we ought to value. Hazlett (2013) argues against the “eudaimonic ideal” of true belief, which is the idea that even though true beliefs can be bad for us in exceptional circumstances, still, as a rule, true beliefs systematically promote human flourishing better than false beliefs do. One of Hazlett’s main objections to this idea is that there are types of cases where true beliefs are systematically worse for us than false beliefs. For example, people who have an accurate sense of what other people think of them tend to be more depressed than people who have an inflated sense of what others think of them. When it comes to beliefs about what others think about us, then, true beliefs are systematically worse for our wellbeing than corresponding false beliefs would be.
Because Hazlett thinks that the problems facing a realist account of epistemic value and epistemic norms are too serious, he adopts a form of conventionalism, according to which epistemic norms are like club rules. Just as a club might adopt the rule that they will not eat peas with spoons, so too, we humans have adopted epistemic rules such as the rule that we should believe only what the evidence supports. The justification for this rule isn’t that it’s valuable in any real sense to believe what the evidence supports; rather, the justification is just that the rule of believing in accord with the evidence is in fact a rule that we have adopted. However, as Bondy (2015) suggests, and as we also saw with Sosa’s appeal to critical domains of evaluation, one might worry that epistemic rules seem to be non-optional in a way that club rules are not. Clubs can change their rules by taking a vote, for example, whereas it doesn’t seem as though epistemic agents can do any such thing.
We’ve been looking at some of the main approaches to the question of whether and why true beliefs are epistemically valuable. For a wide range of epistemologists, true beliefs play a fundamental role in their theories, so it’s important to try to see why we should think that truth is valuable. But, given that we tend to value knowledge more than we value true belief, one might wonder why true belief is so often taken to be a fundamental value in the epistemic domain. Indeed, not only do many of us think that knowledge is more valuable than mere true belief; we also think that a number of other things should count as valuable from the epistemic point of view: understanding, justification, simplicity, empirical adequacy of theories, and many other things too, seem to be important kinds of cognitive successes. These seem like prime candidates for counting as epistemically valuable—so why do they so often play a much smaller role in epistemological theories than true belief does?
There are three main reasons why truth is often invoked as a fundamental epistemic value, and why these other things are often relegated to secondary roles. The first reason is that, as we saw in section 2(a), true beliefs do at least often seem to enable us to accomplish our goals and achieve what we want. And they typically enable us to do so whether or not they count as knowledge, are justified, or represent relatively simple hypotheses. This seems like a reason to care about having true beliefs, which doesn’t depend on taking any other epistemic states to be valuable.
The second reason is that, if we take true belief to be the fundamental epistemic value, we will still be able to explain why we should think of many other things aside from true beliefs as epistemically valuable. If justified beliefs tend to be true, for example, and having true beliefs is the fundamental epistemic value, then justification will surely also be valuable, as a means to getting true beliefs (this is suggested in a widely-cited passage in BonJour (1985, pp. 7-8)). Similarly, we might be able to explain the epistemic value of simplicity in terms of the value of truth, because the relative simplicity of a hypothesis can be evidence that the hypothesis is more likely than other competing hypotheses to be true. On one common way of thinking about simplicity, a hypothesis H1 is simpler than another hypothesis H2 if H1 posits fewer theoretical entities. Understanding simplicity in that way, it’s plausible to think that simpler hypotheses are likelier to be true, because there are fewer ways for them to go wrong (there are fewer entities for them to be mistaken about).
By contrast, it is not so straightforward to try to explain the value of truth in terms of other candidate epistemic values, such as simplicity or knowledge. If knowledge were the fundamental (as opposed to the highest, or one of the highest) epistemic value, so that the value of true beliefs would have to be dependent on the value of knowledge, then it seems that it would be difficult to explain why unjustified true beliefs should be more valuable than unjustified false beliefs, which they seem to be.
And the third reason why other candidate epistemic values are not often invoked in setting out epistemic theories is that, even if there are epistemically valuable things which do not get all of their epistemic value from their connection with true belief, there is a particular theoretical role which many epistemologists want the central epistemic goal or value to play, and it can only play that role if it’s understood in terms of achieving true beliefs and avoiding false ones (David 2001; compare Goldman 1979). Briefly, the role in question is that of providing a way to explain our epistemic notions, including especially the notions of knowledge and epistemic rationality, in non-epistemic terms. Since truth is not itself an epistemic term, it can play this role. But other things which seem to be epistemically valuable, like knowledge and rationality, cannot play this role, because they are themselves epistemic terms. We will come back to the relation between the analysis of epistemic rationality and the formulation of the epistemic goal in the final section of this article.
There is growing support among epistemologists for the idea that understanding is the highest epistemic value, more valuable even than knowledge. There are various ways of fleshing out this view, depending on what kind of understanding we have in mind, and depending on whether or not we want to remain truth-monists about what's fundamentally epistemically valuable.
If you are a trained mechanic, then you understand how automobiles work. This is an understanding of a domain, or of a kind of object. To have an understanding of a domain, you need to have a significant body of beliefs about that domain, which fits together in a coherent way, and which involves beliefs about what would explain why things happen as they do in that domain. When you have such a body of beliefs, we can say that you have a subjective understanding of the domain (Grimm 2012). When, in addition, your beliefs about the domain are mostly correct, we can say that you have an objective understanding of the domain.
In addition to understanding a domain, you might also understand that p—you might understand that some proposition is true. There are several varieties of propositional understanding: there is simply understanding that p; there is understanding why p, which involves understanding that p because q; there is understanding when p, which involves understanding that p happens at time t, and understanding why p happens at time t; and so on, for other wh- terms, such as who and where. In what follows, we’ll talk in general in terms of propositional understanding, or understanding that p, to cover all these cases.
Understanding that p entails having at least some understanding of a domain. To borrow an example of Pritchard's (2009): imagine that you come home to find your house burnt to the ground. You ask the fire chief what caused the fire, and he tells you that it was faulty wiring. Now you know why your house burnt to the ground (you know that it burnt down because of the faulty wiring), and you also understand why your house burnt to the ground (you know that the house burnt down because of faulty wiring, and you have some understanding of the kinds of things that tend to start fires, such as sparks or overheating, both of which can be caused by faulty wiring). You understand why the house burnt down, in other words, only because you have some understanding of how fires are caused.
As Kvanvig (2003) notes, it's plausible that you only genuinely understand that p if you have a mostly correct (that is, an objective) understanding of the relevant domain. For suppose that you have a broad and coherent body of beliefs about celestial motion, but which centrally involves the belief that the earth is at the center of the universe. Because your body of beliefs involves mistaken elements at its core, we would normally say that you misunderstand celestial motions, and you misunderstand why (for example) we can observe the sun rising every day. In a case like this, where you misunderstand why p (for example, why the sun comes up), we can say that you have a subjective propositional understanding: your belief that the sun comes up every day because the earth is at the center of the universe, and the celestial bodies all rotate around it, can be coherent with a broader body of justified beliefs, and it can provide explanations of celestial motions. But because your understanding of the domain of celestial motion involves false beliefs at its core, you have an incorrect understanding of the domain, and your explanatory propositional understanding, as a result, is also a misunderstanding.
By contrast, when your body of beliefs about a domain is largely correct, and your understanding of the domain leads you to believe that p is true because q is true, we can say that you have an objective understanding of why p is true. In what follows, except where otherwise specified, “understanding” refers to objective propositional understanding.
It seems natural to think that understanding that p involves knowing that p, plus something extra, where the extra bit is something like having a roughly correct understanding of some relevant domain to do with p: you understand that p when (and only when) you know that p, and your belief that p fits into a broader, coherent, explanatory body of beliefs, where this body of beliefs is largely correct. So the natural place to look for the special epistemic value of understanding is in the value of this broader body of beliefs.
Now, some authors (Kvanvig 2003, Hills 2009, and Pritchard 2009) have argued that propositional understanding does not require the corresponding propositional knowledge: S can understand that p, they argue, even if S doesn’t know that p. The main reason for this view is that understanding seems to be compatible with a certain kind of luck, environmental luck, which is incompatible with knowledge. For example, think again of the case where you ask the fire chief the cause of the fire, but now imagine that there are many pretend fire chiefs all walking around the area in uniform, and it’s just a matter of luck that you asked the real fire chief. In this case, it seems fairly clear that you lack knowledge of the cause of the fire, since you could so easily have asked a fake fire chief, and formed a false belief as a result. But, the argument goes, you do gain understanding of the cause of the fire from the fire chief. After all, you have gained a true belief about what caused the fire, and your belief is justified, and it fits in with your broader understanding of the domain of fire-causing. What we have here is a case of a justified true belief, where that belief fits in with your understanding of the relevant domain, but where you have been Gettiered, so you lack knowledge.
So it’s controversial whether understanding that p really presupposes knowing that p. But when it comes to the value of understanding, we can set this question aside. For even if there are cases of propositional understanding without the corresponding propositional knowledge, still, most cases of propositional understanding involve the corresponding propositional knowledge, and in those cases, the special value of understanding will lie in what is added to the propositional knowledge to yield understanding. In cases where there is Gettierizing environmental luck, so that S has a Gettierized justified true belief which fits in with her understanding of the relevant domain, the special value of understanding will lie in what is added to justified true belief. In other words, whether or not propositional understanding presupposes the corresponding propositional knowledge, the special value of propositional understanding will be located in the subject’s understanding of the relevant domain.
There are a few plausible accounts of why understanding should be thought of as distinctively epistemically valuable, and perhaps even as the highest epistemic value. One suggestion, which would be friendly to truth-monists about epistemic value, is that we can consistently hold both that truth is the fundamental epistemic value and that understanding is the highest epistemic value. Because understanding that p typically involves both knowing that p and having a broader body of beliefs, where this body of beliefs is coherent and largely correct, it follows from the fundamental value of true beliefs that in any case where S understands that p, S’s cognitive state involves greater epistemic value than if S were merely to truly believe that p, because S has many other true beliefs too. Of course, on this picture, understanding doesn’t have a distinctive kind of value, but it does have a greater quantity of value than true belief, or even than knowledge. But, for a truth-monist about epistemic value, this is just the result that should be desired—otherwise, the view would no longer be monistic.
An alternative suggestion, which does not rely on truth-monism about epistemic value, is that the value of having a broad body of beliefs which provide an explanation for phenomena is to be explained by the fact that whether you have such a body of beliefs is transparent to you: you can always tell whether you have understanding (Zagzebski 2001). Surely, if it's always transparent to you whether you understand something, that is a source of extra epistemic value for understanding, on top of the value of having true belief or even knowledge, since we can't in general tell whether we are in those states.
The problem with this suggestion, though, as Grimm (2006; 2012) points out, is that we cannot always tell whether we have understanding. It often happens that we think we understand something, when in fact we gravely misunderstand it. Of course it might be the case that we can always tell whether we have a subjective understanding—we might always be able to tell whether we have a coherent, explanatory body of beliefs—but we are not in general in a position to be able to tell whether our beliefs are largely correct. The subjective kind of understanding doesn’t entail the objective kind. Still, it is worth noting that there seems to be a kind of value in being aware of the coherence and explanatory power of one’s beliefs on a given topic, even if it’s never transparent whether one’s beliefs are largely correct. (See Kvanvig 2003 for more on the value of internal awareness and of having coherent bodies of beliefs.)
A third suggestion about the value of understanding, which is also not committed to truth-monism, is that having understanding can plausibly be thought of as a kind of success which is properly attributable to one’s exercise of a relevant ability, or in other words, an achievement. As we saw above, a number of virtue epistemologists think that we can explain the distinctive value of knowledge by reference to the fact that knowledge is a cognitive achievement. But others (notably, Lackey 2006 and 2009) have denied that subjects in general deserve credit for their true belief in cases of knowledge. Cases of testimonial knowledge are popular counterexamples to the view that knowledge is in general an achievement: when S learns some fact about local geography from a random bystander, for example, S can gain knowledge, but if anyone deserves credit for S’s true belief, it seems to be the bystander. So, if that’s right, then it’s not after all always much of an achievement to gain knowledge.
Pritchard (2009) also argues that knowledge is not in general an achievement, but he claims that understanding is. For when S gains an understanding that p, it seems that S must bring to bear significant cognitive resources, unlike when S only gains knowledge that p. Suppose, for example, that S asks bystander B where the nearest tourist information booth is, and B tells him. Now let’s compare S’s and B’s cognitive states. S has gained knowledge of how to get to the nearest information booth, but S doesn’t have an understanding of the location of the nearest information booth, since S lacks knowledge of the relevant domain (that is, local geography). B, on the other hand, both knows and understands the location of the nearest booth. And B’s understanding of the local geography, and her consequent understanding of the location of the nearest booth, involves an allocation of significant cognitive resources. (Anyone who has had to quickly memorize the local geography of a new city will appreciate how much cognitive work goes into having a satisfactory understanding of this kind of domain.)
If understanding that p requires both knowing that p (or having a justified true belief that p) and having a broader body of beliefs which is coherent, explanatory, and largely correct, then it's plausible to think that the special value of understanding is in the value of having such a body of beliefs. But it's possible to resist this view of the value of understanding in a number of ways. One way to resist it would be to deny that understanding is ever any different from knowing. Reductivists about understanding think that it's not possible to have knowledge without having understanding, or understanding without knowledge. These philosophers argue, for example, that when S knows that p, S must understand that p at least to some extent. S has a better understanding that p when S has a better understanding of the relevant domain, in the form of knowledge of more related propositions, but S knows that p if and only if S has some understanding that p.
For reductivists about understanding, there can obviously be no value in understanding beyond the value of having knowledge. There are better and worse understandings, but any genuine (objective) understanding involves at least some knowledge, and better understanding just involves more knowledge. If that’s right, then we don’t need to say that understanding has more value than knowledge.
A second way to resist the approach to the value of understanding presented in the previous section is to resist the claim that understanding requires that one’s beliefs about a domain must be mostly correct. Elgin (2007; 2009), for example, points out that in the historical progression of science, there have been stages at which scientific understanding, while useful and epistemically good, centrally involved false beliefs about the relevant domains. Perhaps even more importantly, scientists regularly employ abstract or idealized models, which are known to be strictly false—but they use these models to gain a good understanding of the domain or phenomenon in question. And the resulting understanding is better, rather than worse, because of the use of these models, which are strictly speaking false. So the elimination of all falsehoods from our theories is not even desirable, on Elgin’s view. (In the language of subjective and objective understanding, we might say that Elgin thinks that subjective understanding can be every bit as good to have as objective understanding. We need to keep in mind, though, that Elgin would reject the view that subjective understandings which centrally involve false beliefs are necessarily misunderstandings.)
The final topic we need to look at now is the relation between epistemic values and the concept of epistemic rationality or justification. According to one prominent way of analyzing epistemic rationality, the instrumental conception of epistemic rationality, beliefs are epistemically rational when and just to the extent that they appropriately promote the achievement of a distinctively epistemic goal. The instrumental conception has been widely endorsed by epistemologists over the past several decades (for example, BonJour 1985; Alston 1985, 2005; Foley 1987, 1993, 2008), though a number of important criticisms of it have emerged in recent years (for example, Kelly 2003; Littlejohn 2012; Hazlett 2013). For instrumentalists, the projects of getting the right account of the epistemic goal and the right account of epistemic rationality constrain each other. Whether or not we want to accept instrumentalism in the end, it's important to see the way that instrumentalists think of the relation between epistemic goals and epistemic rationality.
The first thing to note about the instrumentalist’s notion of an epistemic goal is that it has to do with what is valuable from an epistemic or cognitive point of view. But instrumentalists typically are not concerned to identify a set of goals which is exhaustive of what is epistemically valuable. Rather, they are concerned with identifying an epistemically valuable goal which is capable of generating a plausible, informative, and non-circular account of epistemic rationality in instrumental terms, and it’s clear that not all things that seem to be epistemically valuable can be included in an epistemic goal which is going to play that role. David (2001) points out that if we take knowledge or rationality (or, we might also add here, understanding) to be part of the epistemic goal, then the instrumental account of epistemic rationality becomes circular. This is most obvious with rationality: rationality is no doubt something we think is epistemically valuable, but if we include rationality in the formulation of the epistemic goal, and we think of epistemic rationality in terms of achieving the epistemic goal, then we’ve analyzed epistemic rationality as the appropriate promotion of the goal of getting epistemically rational beliefs—an unhelpfully circular analysis, at best. And, because knowledge and understanding plausibly presuppose rationality, we also cannot include knowledge or understanding in the formulation of the epistemic goal.
This is why most epistemologists take the epistemic goal to be about achieving true beliefs and avoiding false ones. That seems to be a goal which is valuable from an epistemic point of view, and it stands a good chance at grounding a non-circular analysis of epistemic rationality.
David in fact goes a step further, and claims that because true belief is the only epistemically valuable thing capable of grounding an informative and non-circular analysis of epistemic rationality, truth is the only thing that's really valuable from an epistemic point of view; knowledge, he thinks, is an extra-epistemic value. But pluralists about epistemic value can still appreciate David's point that only some things that are epistemically valuable (such as having true beliefs) are suitable for being taken up in the instrumentalist's formulation of the epistemic goal. In other words, pluralism about epistemic values is consistent with monism about the epistemic goal.
Now, there are two further important constraints on how to formulate the epistemic goal. First, it must be plausible to take as a goal—that is, as something we do in fact care about, or at least something that seems to be worth caring about even if people don’t in fact care about it. We might express this constraint by saying that the epistemic goal must be at least pro tanto valuable in either a subjective or an objective sense. And second, the goal should enable us to categorize clear cases of epistemically rational and irrational beliefs correctly. We can close this discussion of epistemic values and goals by considering three oft-invoked formulations of the epistemic goal, and noting the important differences between them. According to these formulations, the epistemic goal is:
(1) “to amass a large body of beliefs with a favorable truth-falsity ratio” (Alston 1985, p.59);
(2) “maximizing true beliefs and minimizing false beliefs about matters of interest and importance” (Alston 2005, p.32); and
(3) “now to believe those propositions that are true and now not to believe those propositions that are false” (Foley 1987, p.8).
Each of these formulations of the epistemic goal emphasizes the achievement of true beliefs and the avoidance of false ones. But there are two important dimensions along which they diverge.
The first difference is with respect to whether the epistemic goal includes all propositions (or, perhaps, all propositions which a person could conceivably grasp), or whether it includes only propositions about matters of interest or importance. Formulation (2) includes an "interest and importance" clause, whereas (1) and (3) do not. The reason for including a reference to interest and importance is that it makes the epistemic goal much more plausible to take as a goal which is pro tanto valuable. For, as we have seen, there are countless examples of apparently utterly trivial or even harmful true propositions, which one might think are simply not worth caring about. This seems like a reason to restrict the epistemic goal to having true beliefs and avoiding false ones about matters of interest and importance: we want to have true beliefs, but only when it is interesting or important to us to have them.
The drawback of an interest and importance clause in the epistemic goal is that it seems to prevent the instrumental approach from providing a fully general account of epistemic rationality. For it seems possible to have epistemically rational or irrational beliefs about utterly trivial or even harmful propositions. Suppose I were to come across excellent evidence about the number of times the letter "y" appears in the seventeenth space on all lines in the first three and the last three sections of this article. Even though that strikes me as an utterly trivial truth, which I don't care about believing, I might still come to believe what my evidence supports regarding it. And if I do, then it's plausible to think that my belief will count as epistemically rational, because it's based on good evidence. If it is not part of the epistemic goal that we should achieve true beliefs about even trivial or harmful matters, then it doesn't seem like instrumentalists have the tools to account for our judgments of epistemic rationality or irrationality in such cases. This seems to give us a reason to make the epistemic goal include all true propositions, or at least all true propositions which people can conceivably grasp. (Such a view might be supported by appeal to the arguments for the general value of truth which we saw above, in section 2.)
The second difference between the three formulations of the epistemic goal is regarding whether the goal is synchronic or diachronic. Formulation (3) is synchronic: it is about now having true beliefs and avoiding false ones. (Or, if we are considering a subject S’s beliefs at a time t other than the present, the goal is to believe true propositions and not believe false ones, at t.) Formulations (1) and (2) are neutral on that question.
A reason for accepting a diachronic formulation of the epistemic goal is that it is, after all, plausible to think that we do care about having true beliefs and avoiding false beliefs over the long run. Having true beliefs now is a fine thing, but having true beliefs now and still having them ten minutes from now is surely better. A second reason for adopting a diachronic formulation of the goal, offered by Vahid (2003), is to block Maitzen’s (1995) argument that instrumentalists who think that the epistemic goal is about having true beliefs cannot say that there are justified false beliefs, or unjustified true beliefs. Briefly, Maitzen argues that false beliefs can never, and true beliefs can never fail to, promote the achievement of the goal of getting true beliefs and avoiding false ones. Vahid replies that if the epistemic goal is about having true beliefs over the long run, then false beliefs can count as justified, in virtue of their truth-conducive causal histories.
The reason why instrumentalists like Foley formulate the epistemic goal instead in synchronic terms is to avoid the counterintuitive result that the epistemic status of a subject’s beliefs at t can depend on what happens after t. For example: imagine that you have very strong evidence at time t for thinking that you are a terrible student, but you are extremely confident in yourself anyway, and you hold the belief at t that you are a good student. At t+1, you consider whether to continue your studies or to drop out of school. Because of your belief about your abilities as a student, you decide to continue with your studies. And in continuing your studies, you go on to become a better student, and you learn all sorts of new things.
In this case, your belief at t that you are a good student does promote the achievement of a large body of beliefs with a favourable truth-falsity ratio over the long run. But by hypothesis, your belief is held contrary to very strong evidence at time t. The intuitive verdict in such cases seems to be that your belief at t that you are a good student is epistemically irrational. So, since the belief promotes the achievement of a diachronic epistemic goal, but not a synchronic one, we should make the epistemic goal synchronic. Or, if we want to maintain that the epistemic goal is diachronic, we can do so, as long as we are willing to accept the cost of adopting a partly revisionary view about what’s epistemically rational to believe in some cases where beliefs are held contrary to good available evidence.
We’ve gone through some of the central problems to do with epistemic value. We’ve looked at attempts to explain why and in what sense knowledge is more valuable than any of its proper parts, and we’ve seen attempts to explain the special epistemic value of understanding. We’ve also looked at some attempts to argue for the fundamental epistemic value of true belief, and the role that the goal of achieving true beliefs and avoiding false ones plays when epistemologists give instrumentalist accounts of the nature of epistemic justification or rationality. Many of these are fundamental and important topics for epistemologists to address, both because they are intrinsically interesting, and also because of the implications that our accounts of knowledge and justification have for philosophy and inquiry more generally (for example, implications for norms of assertion, for norms of practical deliberation, and for our conception of ourselves as inquirers, to name just a few).
- Alston, William (1985). Concepts of Epistemic Justification. The Monist. 68. Reprinted in his Epistemic Justification: Essays in the Theory of Knowledge. Ithaca, NY: Cornell University Press, 1989.
- Discusses concepts of epistemic justification. Espouses an instrumentalist account of epistemic evaluation.
- Alston, William (2005). Beyond Justification: Dimensions of Epistemic Evaluation. Ithaca, NY: Cornell University Press.
- Abandons the concept of epistemic justification as too simplistic; embraces the pluralist idea that there are many valuable ways to evaluate beliefs. Continues to endorse the instrumentalist approach to epistemic evaluations.
- Bergmann, Michael (2006). Justification without Awareness. Oxford: Oxford University Press.
- Bondy, Patrick (2015). Review of Allan Hazlett, A Luxury of the Understanding: On the Value of True Belief. Dialogue. 54: 1, 202-204.
- BonJour, Laurence (1985). The Structure of Empirical Knowledge. Cambridge, Mass: Harvard University Press.
- Develops a coherentist internalist account of justification and knowledge. Gives a widely-cited explanation of the connection between epistemic justification and the epistemic goal.
- Brogaard, Berit (2006). Can Virtue Reliabilism Explain the Value of Knowledge? Canadian Journal of Philosophy. 36: 3, 335-354.
- Defends generic reliabilism from the Primary Value Problem; proposes an internalist response to the Secondary Value Problem.
- Carter, J. Adam, Benjamin Jarvis, and Katherine Rubin (2013). Knowledge: Value on the Cheap. Australasian Journal of Philosophy. 91: 2, 249-263.
- Presents the promising proposal that because knowledge is a continuing state rather than something that is achieved and then set aside, there are easy solutions to the Primary, Secondary, and even Tertiary Value Problems for knowledge.
- Craig, Edward (1990). Knowledge and the State of Nature. Oxford: Oxford University Press.
- David, Marian (2001). Truth as the Epistemic Goal. In Matthias Steup, ed., Knowledge, Truth, and Duty: Essays on Epistemic Justification. New York and Oxford: Oxford University Press. 151-169.
- A thorough discussion of how instrumentalists about epistemic rationality or justification ought to formulate the epistemic goal.
- David, Marian (2005). Truth as the Primary Epistemic Goal: A Working Hypothesis. In Matthias Steup and Ernest Sosa, eds. Contemporary Debates in Epistemology. Malden, MA: Blackwell. 296-312.
- Dogramaci, Sinan (2012). Reverse Engineering Epistemic Evaluations. Philosophy and Phenomenological Research. 84: 3, 513-530.
- Accepts the widely-endorsed thought that justification or rationality is only instrumentally valuable for getting us true beliefs. The paper inquires into what function our epistemic practices could serve in cases where what's rational to believe is false, or what's irrational to believe is true.
- Elgin, Catherine (2007). Understanding and the Facts. Philosophical Studies. 132, 33-42.
- Elgin, Catherine (2009). Is Understanding Factive? In A. Haddock, A. Millar, and D. Pritchard, eds. Epistemic Value. Oxford: Oxford University Press.
- Field, Hartry (2001). Truth and the Absence of Fact. Oxford: Oxford University Press.
- Among other things, argues that there are no objectively correct epistemic goals which can ground objective judgments of epistemic reasonableness.
- Foley, Richard (1987). The Theory of Epistemic Rationality. Cambridge, Mass: Harvard University Press.
- A very thorough development of an instrumentalist and egocentric account of epistemic rationality.
- Foley, Richard (1993). Working Without a Net: A Study of Egocentric Rationality. New York and Oxford: Oxford University Press.
- Further develops and defends the instrumental approach to rationality generally and to epistemic rationality in particular.
- Foley, Richard (2008). An Epistemology that Matters. In P. Weithman, ed. Liberal Faith: Essays in Honor of Philip Quinn. Notre Dame, Indiana: University of Notre Dame Press. 43-55.
- Clear and succinct statement of Foley’s instrumentalism.
- Godfrey-Smith, Peter (1998). Complexity and the Function of Mind in Nature. Cambridge: Cambridge University Press.
- Goldman, Alvin (1979). What Is Justification? In George Pappas, ed. Justification and Knowledge. Dordrecht: D. Reidel Publishing Company, 1-23.
- Goldman, Alvin and Olsson, Erik (2009). Reliabilism and the Value of Knowledge. In A. Haddock, A. Millar, and D. Pritchard, eds. Epistemic Value. Oxford: Oxford University Press. 19-41.
- Presents two reliabilist responses to the Primary Value Problem.
- Graham, Peter (2011). Epistemic Entitlement. Noûs. 46: 3, 449-482.
- Greco, John (2003). Knowledge as Credit for True Belief. In Michael DePaul and Linda Zagzebski, eds. Intellectual Virtue: Perspectives from Ethics and Epistemology. Oxford: Oxford University Press. 111-134.
- Sets out the view that attributions of knowledge are attributions of praiseworthiness, when a subject gets credit for getting to the truth as a result of the exercise of intellectual virtues. Discusses praise, blame, and the pragmatics of causal explanations.
- Greco, John (2008). Knowledge and Success from Ability. Philosophical Studies. 142, 17-26.
- Further elaboration of ideas in Greco (2003).
- Grimm, Stephen (2006). Is Understanding a Species of Knowledge? British Journal for the Philosophy of Science. 57, 515–35.
- Grimm, Stephen (2012). The Value of Understanding. Philosophy Compass. 7: 2, 103-117.
- Good survey article of work on the value of understanding up to 2012.
- Haddock, Adrian (2010). Part III: Knowledge and Action. In Duncan Pritchard, Allan Millar, and Adrian Haddock, The Nature and Value of Knowledge: Three Investigations. Oxford: Oxford University Press.
- Hazlett, Allan (2013). A Luxury of the Understanding: On the Value of True Belief. Oxford: Oxford University Press.
- An extended discussion of whether true belief is valuable. Presents a conventionalist account of epistemic normativity.
- Hills, Alison (2009). Moral Testimony and Moral Epistemology. Ethics. 120: 1, 94-127.
- James, William (1949). The Will to Believe. In his Essays in Pragmatism. New York: Hafner. 88-109. Originally published in 1896.
- Jones, Ward (1997). Why Do We Value Knowledge? American Philosophical Quarterly. 34: 4, 423-439.
- Argues that reliabilists and other instrumentalists cannot handle the Primary Value Problem. Proposes that we solve the problem by appealing to the value of contingent features of knowledge.
- Kaplan, Mark (1985). It’s Not What You Know that Counts. The Journal of Philosophy. 82: 7, 350-363.
- Denies that knowledge is any more important than justified true belief.
- Kelly, Thomas (2003). Epistemic Rationality as Instrumental Rationality: A Critique. Philosophy and Phenomenological Research. 66: 3, 612-640.
- Criticizes the instrumental conception of epistemic rationality, largely on the grounds that beliefs can be epistemically rational or irrational in cases where there is no epistemic goal which the subject desires to achieve.
- Kornblith, Hilary (2002). Knowledge and its Place in Nature. Oxford: Clarendon Press.
- Develops the idea that knowledge is a natural kind which ought to be studied empirically rather than through conceptual analysis. Grounds epistemic norms, including the truth-goal, in the fact that we desire anything at all.
- Kvanvig, Jonathan (2003). The Value of Knowledge and the Pursuit of Understanding. Cambridge: Cambridge University Press.
- Considers and rejects various arguments for the value of knowledge. Argues that understanding rather than knowledge is the primary epistemic value.
- Kvanvig, Jonathan (2005). Truth is Not the Primary Epistemic Goal. In Matthias Steup and Ernest Sosa, eds. Contemporary Debates in Epistemology. Malden, MA: Blackwell. 285-296.
- Criticizes epistemic value monism.
- Lackey, Jennifer (2007). Why We Don’t Deserve Credit for Everything We Know. Synthese. 158: 3, 345-361.
- Lackey, Jennifer (2009). Knowledge and Credit. Philosophical Studies. 142: 1, 27-42.
- Lackey argues against the virtue-theoretic idea that when S knows that p, S’s getting a true belief is always creditable to S.
- Leplin, Jarrett (2009). A Theory of Epistemic Justification. Dordrecht: Springer.
- Littlejohn, Clayton (2012). Justification and the Truth-Connection. Cambridge: Cambridge University Press.
- Contains an extended discussion of internalism and externalism, and argues against the instrumental conception of epistemic justification. Also argues that there are no false justified beliefs.
- Lynch, Michael (2004). True to Life: Why Truth Matters. Cambridge, Mass.: MIT Press.
- Argues for the objective value of true beliefs.
- Lynch, Michael (2009). Truth, Value and Epistemic Expressivism. Philosophy and Phenomenological Research. 79: 1, 76-97.
- Argues against expressivism and anti-realism about the value of true beliefs.
- Maitzen, Stephen (1995). Our Errant Epistemic Aim. Philosophy and Phenomenological Research. 55: 4, 869-876.
- Argues that if we take the epistemic goal to be achieving true beliefs and avoiding false ones, then all and only true beliefs will count as justified. Suggests that we need to adopt a different formulation of the goal.
- Millikan, Ruth (1984). Language, Thought, and other Biological Categories. Cambridge, Mass.: MIT Press.
- Develops and applies the selected-effect view of the proper functions of organs and traits.
- Piller, Christian (2009). Valuing Knowledge: A Deontological Approach. Ethical Theory and Moral Practice. 12, 413-428.
- Plantinga, Alvin (1993). Warrant and Proper Function. New York: Oxford University Press.
- Develops a proper function analysis of knowledge.
- Plato. Meno. Trans. G. M. A. Grube. In Plato, Complete Works. J. M. Cooper and D. S. Hutchinson, eds. Indianapolis and Cambridge: Hackett, 1997. 870-897.
- Pritchard, Duncan. (2007). Recent Work on Epistemic Value. American Philosophical Quarterly. 44: 2, 85-110.
- Survey article on problems of epistemic value. Distinguishes Primary, Secondary, and Tertiary value problems.
- Pritchard, Duncan (2008). Knowing the Answer, Understanding, and Epistemic Value. Grazer Philosophische Studien. 77, 325-339.
- Pritchard, Duncan (2009). Knowledge, Understanding, and Epistemic Value. In Anthony O’Hear, ed. Epistemology (Royal Institute of Philosophy Lectures). New York: Cambridge University Press. 19-43.
- Pritchard, Duncan (2010). Part I: Knowledge and Understanding. In Duncan Pritchard, Alan Millar, and Adrian Haddock, The Nature and Value of Knowledge: Three Investigations. Oxford: Oxford University Press.
- Riggs, Wayne (2002). Reliability and the Value of Knowledge. Philosophy and Phenomenological Research. 64, 79-96.
- Riggs, Wayne (2003). Understanding Virtue and the Virtue of Understanding. In Michael DePaul and Linda Zagzebski, eds. Intellectual Virtue: Perspectives from Ethics and Epistemology. Oxford: Oxford University Press.
- Riggs, Wayne (2008). Epistemic Risk and Relativism. Acta Analytica. 23: 1, 1-8.
- Sartwell, Crispin (1991). Knowledge is Merely True Belief. American Philosophical Quarterly. 28: 2, 157-165.
- Sartwell, Crispin (1992). Why Knowledge is Merely True Belief. The Journal of Philosophy. 89: 4, 167-180.
- These two articles by Sartwell are the only places in contemporary epistemology where the view that knowledge is just true belief is seriously defended.
- Sosa, Ernest (2003). The Place of Truth in Epistemology. In Michael DePaul and Linda Zagzebski, eds. Intellectual Virtue: Perspectives from Ethics and Epistemology. Oxford: Clarendon Press; New York: Oxford University Press.
- Sosa, Ernest (2007). A Virtue Epistemology: Apt Belief and Reflective Knowledge, Volume 1. Oxford: Clarendon Press; New York: Oxford University Press.
- Sets out a virtue-theoretic analysis of knowledge. Distinguishes animal knowledge from reflective knowledge. Responds to dream-skepticism. Argues that true belief is the fundamental epistemic value.
- Treanor, Nick (2014). Trivial Truths and the Aim of Inquiry. Philosophy and Phenomenological Research. 89: 3, 552-559.
- Argues against an argument for the popular claim that some truths are more interesting than others. Points out that the standard comparisons between what are apparently more and less interesting true sentences are unfair, because the sentences might not involve or express the same number of true propositions.
- Vahid, Hamid (2003). Truth and the Aim of Epistemic Justification. Teorema. 22: 3, 83-91.
- Discusses justification and the epistemic goal. Proposes that accepting a diachronic formulation of the epistemic goal solves the problem raised by Stephen Maitzen (1995).
- Weiner, Matthew (2009). Practical Reasoning and the Concept of Knowledge. In A. Haddock, A. Millar, and D. Pritchard, eds. Epistemic Value. Oxford: Oxford University Press. 163-182.
- Argues that knowledge is valuable in the same way as a Swiss Army Knife is valuable. A Swiss Army Knife contains many different blades which are useful in different situations; they’re not always all valuable to have, but it’s valuable to have them all collected in one easy-to-carry package. Similarly, the concept of knowledge has a number of parts which are useful in different situations; they’re not always all valuable in all cases, but it’s useful to have them collected together in one easy-to-use concept.
- White, R. (2007). Epistemic Subjectivism. Episteme: A Journal of Social Epistemology. 4: 1, 115-129.
- Whiting, Daniel (2012). Epistemic Value and Achievement. Ratio. 25, 216-230.
- Argues against the view that the value of epistemic states in general should be thought of in terms of achievement (or success because of ability). Also argues against Pritchard’s achievement-account of the value of understanding in particular.
- Williamson, Timothy (2000). Knowledge and its Limits. Oxford: Oxford University Press.
- Among many other things, Williamson sets out and defends knowledge-first epistemology, adopts a stability-based solution to the Primary Value Problem, and suggests that his view of knowledge as the most general factive mental state solves the Secondary Value Problem.
- Zagzebski, Linda (2001). Recovering Understanding. In Matthias Steup, ed. Knowledge, Truth, and Duty: Essays on Epistemic Justification, Responsibility, and Virtue. New York: Oxford University Press. 235-256.
- Zagzebski, Linda (2003). The Search for the Source of Epistemic Good. Metaphilosophy. 34, 12-28.
- Gives a virtue-theoretic explanation of knowledge and the value of knowledge. Claims that it is morally important to have true beliefs, when we are performing morally important actions. Claims that knowledge is motivated by a love of the truth, and explains the value of knowledge in terms of that love and the value of that love.
- Zagzebski, Linda (2009). On Epistemology. Belmont, CA: Wadsworth.
- Accessible introduction to contemporary epistemology and to Zagzebski’s preferred views in epistemology. Useful for students and professional philosophers.