Scientific Realism and Antirealism

Debates about scientific realism concern the extent to which we are entitled to hope or believe that science will tell us what the world is really like. Realists tend to be optimistic; antirealists do not. To a first approximation, scientific realism is the view that well-confirmed scientific theories are approximately true; the entities they postulate do exist; and we have good reason to believe their main tenets. Realists often add that, given the spectacular predictive, engineering, and theoretical successes of our best scientific theories, it would be miraculous were they not to be approximately correct. This natural line of thought has an honorable pedigree yet has been subject to philosophical dispute since modern science began.

In the 1970s, a particularly strong form of scientific realism was advocated by Putnam, Boyd, and others. When scientific realism is mentioned in the literature, usually some version of this is intended. It is often characterized in terms of these commitments:

  • Science aims to give a literally true account of the world.
  • To accept a theory is to believe it is (approximately) true.
  • There is a determinate mind-independent and language-independent world.
  • Theories are literally true (when they are) partly because their concepts “latch on to” or correspond to real properties (natural kinds, and the like) that causally underpin successful usage of the concepts.
  • The progress of science asymptotically converges on a true account.


Table of Contents

  1. Brief History before the 19th Century
  2. The 19th Century Debate
    a. Poincaré’s Conventionalism
    b. The Reality of Forces and Atoms
    c. The Aim of Science: Causal Explanation or Abstract Representation?
  3. Logical Positivism
    a. General Background
    b. The Logical Part of Logical Positivism
    c. The Positivism Part of Logical Positivism
  4. Quine’s Immanent Realism
  5. Scientific Realism
    a. Criticisms of the Observational-Theoretical Distinction
    b. Putnam’s Critique of Positivistic Theory of Meaning
    c. Putnam’s Positive Account of Meaning
    d. Putnam’s and Boyd’s Critique of Positivistic Philosophy of Science
    e. Inference to the Best Explanation
  6. Constructive Empiricism
    a. The Semantic View of Theories and Empirical Adequacy
    b. The Observable-Unobservable Distinction
    c. The Argument from Empirically Equivalent Theories
    d. Constructive Empiricism, IBE, and Explanation
  7. Historical Challenges to Scientific Realism
    a. Kuhn’s Challenge
    b. Laudan’s Challenge: The Pessimistic Induction
  8. Semantic Challenges to Scientific Realism
    a. Semantic Deflationism
    b. Pragmatist Truth Surrogates
    c. Putnam’s Internal Realism
  9. Law-Antirealism and Entity-Realism
  10. NOA: The Natural Ontological Attitude
  11. The 21st Century Debates
    a. Structuralism
    b. Stanford’s New Induction
    c. Selective Realism
  12. References and Further Reading

1. Brief History before the 19th Century

The debate begins with modern science. Bellarmine advocated an antirealist interpretation of Copernicus’s heliocentrism—as a useful instrument that saved the phenomena—whereas Galileo advocated a realist interpretation—the planets really do orbit the sun. More generally, 17th century protagonists of the new sciences advocated a metaphysical picture: nature is not what it appears to our senses—it is a world of objects (Descartes’ matter-extension, Boyle’s corpuscles, Huygens’ atoms, and so forth) whose primary properties (Cartesian extension, or the sizes, shapes, and hardness of atoms and corpuscles, or forces of attraction or repulsion, and so forth) are causally responsible for the phenomena we observe. The task of science is “to strip reality of the appearances covering it like a veil, in order to see the bare reality itself” (Duhem 1991).

This metaphysical picture quickly led to empiricist scruples, voiced by Berkeley and Hume. If all knowledge must be traced to the senses, how can we have reason to believe scientific theories, given that reality lies behind the appearances (hidden by a veil of perception)? Indeed, if all content must be traced to the senses, how can we even understand such theories? The new science seems to postulate “hidden” causal powers without a legitimate epistemological or semantic grounding. A central problem for empiricists becomes that of drawing a line between objectionable metaphysics and legitimate science (portions of which seem to be as removed from experience as metaphysics seems to be). Kant attempted to circumvent this problem and find a philosophical home for Newtonian physics. He rejected both a veil of perception and the possibility of our representing the noumenal reality lying behind it. The possibility of making judgments depends on our having structured what is given: experience of x qua object requires that x be represented in space and time, and judgments about x require that x be located in a framework of concepts. What is real and judgable is just what is empirically real—what fits our system of representation in the right way—and there is no need for, and no possibility of, problematic inferences to noumenal goings-on. In pursuing this project Kant committed himself to several claims about space and time—in particular that space must be Euclidean, which he regarded as both a priori (because a condition of the possibility of our experience of objects) and synthetic (because not derivable from analytical equivalences)—which became increasingly problematic as 19th century science and mathematics advanced.

2. The 19th Century Debate

Many features of the contemporary debates were fashioned in 19th century disputes about the nature of space and the reality of forces and atoms. The principals of these debates—Duhem, Helmholtz, Hertz, Kelvin, Mach, Maxwell, Planck, and Poincaré—were primarily philosopher-physicists. Their separation into realists and antirealists is complicated, but Helmholtz, Hertz, Kelvin, Maxwell, and Planck had realist sympathies and Duhem, Mach, and Poincaré had antirealist doubts.

a. Poincaré’s Conventionalism

By the late 19th century several consistent non-Euclidean geometries, mathematically distinct from Euclidean geometry, had been developed. Euclidean geometry has a unique parallels axiom, and the angle sum of a triangle equals 180º, whereas, for example, spherical geometry has a zero-parallel axiom, and the angle sum of a triangle is greater than 180º. These geometries raise the possibility that physical space could be non-Euclidean. Empiricists think we can determine whether physical space is Euclidean through experiments. For example, Gauss allegedly attempted to measure the angles of a triangle between three mountaintops to test whether physical space is Euclidean. Realists think physical space has some determinate geometrical character even if we cannot discover what character it has. Kantians think that physical space must be Euclidean because only Euclidean geometry is consistent with the form of our sensibility.
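The contrast between the two geometries can be made quantitative. By Girard’s theorem, a triangle drawn on a sphere of radius R has an angle sum exceeding 180º by an amount proportional to its area:

```latex
\alpha + \beta + \gamma \;=\; \pi + \frac{A}{R^{2}}
```

where α, β, γ are the interior angles in radians and A is the triangle’s area. As R grows (or the triangle shrinks), the excess A/R² becomes negligible, which is why only measurements over very large triangles, such as Gauss’s mountaintop triangle, could hope to detect a departure from the Euclidean value.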

Poincaré (1913) argued that empiricists, realists, and Kantians are wrong: the geometry of physical space is not empirically determinable, factual, or synthetic a priori. Suppose Gauss’s experiment gave the angle-sum of a triangle as 180º. This would support the hypothesis that physical space is Euclidean only under certain presuppositions about the coordination of optics with geometry: that the shortest path of an undisturbed light ray is a Euclidean straight line. The 180º measurement could, for example, equally be accommodated by presupposing that light rays traverse shortest paths in spherical space but are disturbed by a force, so that physical space is “really” non-Euclidean: the true angle-sum of the triangle is greater than 180º, but the disturbing force makes it “appear” that space is Euclidean and the angle-sum of the triangle is 180º.

Arguing that there is no fact of the matter about the geometry of physical space, Poincaré proposed conventionalism: we decide conventionally that geometry is Euclidean, forces are Newtonian, light travels in Euclidean straight lines, and we see if experimental results will fit those conventions. Conventionalism is not an “anything-goes” doctrine—not all stipulations will accommodate the evidence—it is the claim that the physical meaning of measurements and evidence is determined by conventionally adopted frameworks. Measurements of lines and angles typically rely on the hypothesis that light travels shortest paths. But this lacks physical meaning unless we decide whether shortest paths are Euclidean or non-Euclidean. These conventions cannot be experimentally refuted or confirmed since experiments only have physical meaning relative to them. Which group of conventions we adopt depends on pragmatic factors: other things being equal, we choose conventions that make physics simpler, more tractable, more familiar, and so forth. Poincaré, for example, held that, because of its simplicity, we would never give up Euclidean geometry.

b. The Reality of Forces and Atoms

Ever since Newton, a certain realist ideal of science was influential: a theory that would explain all phenomena as the effects of moving atoms subject to forces. By the 1880s many physicists came to doubt the attainability of this ideal since classical mechanics lacked the tools to describe a host of terrestrial phenomena: “visualizable” atoms that are subject to position-dependent central forces (so successful for representing celestial phenomena) were ill-suited for representing electromagnetic phenomena, “dissipative” phenomena in heat engines and chemical reactions, and so forth. The concepts of atom and force became questionable. The kinetic theory of gases lent support to atomism, yet no consistent models could be found (for example, spectroscopic phenomena required atoms to vibrate while specific heat phenomena required them to be rigid). Moreover, intermolecular forces allowing for internal vibration and deformation could not be easily conceptualized as Newtonian central forces. Newtonian action-at-a-distance forces also came under pressure with the increasing acceptance of Maxwell’s theory of electromagnetism, which attributed electromagnetic phenomena to polarizations in a dielectric medium propagated by contiguous action. Many thought that physics had become a disorganized patchwork of poorly understood theories, lacking coherence, unity, empirical determinacy, and adequate foundations. As a result, physicists became increasingly preoccupied with foundational efforts to put their house in order. The most promising physics required general analytical principles (for example, conservation of energy and action, Hamilton’s principle) that could not be derived from Newtonian laws governing systems of classical atoms. The abstract concepts (action, energy, generalized potential, entropy, absolute temperature) needed in order to construct these principles could not be built from the ordinary intuitive concepts of classical mechanics. 
They could, however, be developed without recourse to “hidden mechanisms” and independently of specific hypotheses about the reality underlying the phenomena. Most physicists continued to be realists: they believed in a deeper reality underlying the phenomena that physics can meaningfully investigate; for them, the pressing foundational problem was to articulate the concepts and develop the laws that applied to that reality. But some physicists became antirealists. Some espoused local antirealism (antirealist about some kinds of entities, as Hertz (1956) was about forces, while not espousing antirealism about physics generally).

c. The Aim of Science: Causal Explanation or Abstract Representation?

Others espoused global antirealism. Like contemporary antirealists, they questioned the relationship among physics, common sense and metaphysics, the aims and methods of science, and the extent to which science, qua attempt to fathom the depth and extent of the universe, is bankrupt. While their realist colleagues hoped for a unified, explanatorily complete, fundamental theory as the proper aim of science, these global antirealists argued on historical grounds that physics had evolved into its current disorganized mess because it had been driven by the unattainable metaphysical goal of causal explanation. Instead, they proposed freeing physics from metaphysics, and they pursued phenomenological theories, like thermodynamics and energetics, which promised to provide abstract, mathematical organizations of the phenomena without inquiring into their causes. To justify this pursuit philosophically, they proposed a re-conceptualization of the aim and scope of physics that would bring order and clarity to science and be attainable. Their proposed aims of science varied: economy of thought (science is a useful instrument without literal significance (Mach 1893)); the discovery of real relations between hidden entities underlying the phenomena (Poincaré 1913); and the discovery of a “natural classification” of the phenomena (a mathematical organization of the phenomena that is the reflection of a hidden ontological order (Duhem 1991)). These affinities, between 19th century global antirealism and 20th century antirealism, mask fundamental differences. The former is driven by methodological considerations concerning the proper way to do physics whereas the latter is driven by traditional metaphysical or epistemological concerns (about the meaningfulness and credibility of claims about goings-on behind the veil of appearances).

3. Logical Positivism

Logical positivism began in Vienna and Berlin in the 1910s and 1920s and migrated to America after 1933, when many of its proponents fled Nazism. The entire post-1960 conversation about scientific realism can be viewed as a response to logical positivism. More a movement than a position, the positivists adopted a set of philosophical stances: pro-science (including pro-verification and pro-observation) and anti-metaphysics (including anti-cause, anti-explanation, anti-theoretical entities). They are positivists because of their pro-science stance; they are logical positivists because they embraced and used the formal logic techniques developed by Frege, Russell, and Wittgenstein to clarify scientific and philosophical language.

a. General Background

As physics developed in the early 20th century, many of the 19th century methodological worries sorted themselves out: Perrin’s experiments with Brownian motion persuaded most physicists of the reality of atoms; special relativity unified mechanics and electromagnetism and signaled the demise of traditional mechanism; general relativity further unified gravity with special relativity; quantum mechanics produced an account of the microscopic world that allowed atoms to vibrate and was spectacularly supported empirically. Moreover, scientific developments undermined several theses formerly taken as necessarily true. Einstein’s famous analysis of absolute simultaneity showed that Newtonian absolute space and time were incorrect and had to be replaced by the space-time structure of Special Relativity. His Theory of General Relativity introduced an even stranger notion of space-time: a space-time with a non-Euclidean structure of variable curvature. This undermined Kant’s claims that space has to be Euclidean and that there is synthetic a priori knowledge. Moreover, quantum mechanics, despite its empirical success, led to its own problems, since quantum particles have strange properties—they cannot have both determinate position and momentum at a given time, for example—and the quantum world has no unproblematic interpretation. So, though everyone was converted to atomism, no one understood what atoms were.

Logical positivism developed within this scientific context. Nowadays the positivists are often depicted as reactionaries who developed a crude, ahistorical philosophical viewpoint with pernicious consequences (Kuhn 1970, Kitcher 1993). In their day, however, they were revolutionaries, attempting to come to grips with the profound changes that Einstein’s relativity and Bohr’s quantum mechanics had wrought on the worldview of classical physics and to provide firm logical foundations for all science.

Logical positivism’s philosophical ancestry used to be traced to Hume’s empiricism (Putnam 1962, Quine 1969). On this interpretation, the positivist project provides epistemological foundations for problematic sentences of science that purport to describe unobservable realities, such as electrons, by reducing sentences employing these concepts to unproblematic sentences that describe only observable realities. Friedman (1999) offers a different Kantian interpretation: this project provides objective content for science, as Kant had attempted, by showing how it organizes our experience into a structured world of objects, but without commitment to scientifically outdated aspects of Kant’s apparatus, such as synthetic a priori truths or the necessity of Euclidean geometry. Whichever interpretation is correct, the logical positivists clearly began with traditional veil-of-perception worries (§1) and insisted on a distinction that both Hume and Kant advocated—between meaningful science and meaningless metaphysics.

b. The Logical Part of Logical Positivism

This distinction rests on their verificationist theory of meaning, according to which the meaning of a sentence is its verification conditions; and understanding a sentence is knowing its verification conditions. For example, knowing the meaning of “This is blue” is being able to pick out the object referred to by “this” and to check that it is blue. While this works only for simple sentences built from terms that directly pick out their referents and built from predicates with directly verifiable content, it can be extended to other sentences. To understand “No emerald is blue” one need only know the verification conditions for “This is an emerald”, “This is blue” and the logical relations of such sentences to “No emerald is blue” (for example, that “no emerald is blue” implies “if this is an emerald, then this is not blue”, and so forth). Simple verification conditions plus some logical knowledge buys a lot. But it does not buy enough. For example, what are the verification conditions expressed by “This is an electron”,  where “this” does not pick out an ostendible object and where “is an electron” does not have directly verifiable content?
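The “logical relations” appealed to here are ordinary first-order entailments. For instance, rendering “This is an emerald” as Ea and “This is blue” as Ba, the general sentence entails the directly verifiable conditional:

```latex
\neg \exists x\,(Ex \wedge Bx) \;\models\; (Ea \rightarrow \neg Ba)
```

Whoever can verify the simple sentences and knows this entailment thereby knows what would count as verifying, or falsifying, the general sentence.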

To deal with this, the positivists, especially Carnap, hit upon an ingenious program. First, they distinguished two kinds of linguistic terms: observational terms (O-terms), like “is blue”, which have relatively unproblematic, directly verifiable content, and theoretical terms (T-terms), like “is an electron”, which have more problematic content that is not directly verifiable. Second, they proposed to indirectly interpret the T-terms, using logical techniques inherited from Frege and Russell, by deductively connecting them within a theory to the directly interpreted O-terms. If each T-term could be explicitly defined using only O-terms, just as “x is a bachelor” can be defined as “x is an unmarried male human”, then one would understand the verification conditions for a T-term just by understanding the directly verifiable content of the O-terms used to define it, and a theory’s theoretical content would be just its observational content.

Unfortunately, the content of “is an electron” is open-ended and outstrips observational content so that no explicit definition of it in terms of a finite list of O-terms can be given in first-order logic. From the 1930s to the 1950s, Carnap (1936, 1937, 1939, 1950, 1956) struggled with this problem by using ever more elaborate logical techniques. He eventually settled for a less ambitious account: the meaning of a T-term is given by the logical role it plays in a theory (Carnap 1939). Although T-terms cannot be explicitly defined in first-order logic, the totality of their logical connections within the theory to other T-terms and O-terms specifies their meaning. Intuitively, the meaning of a theoretical term like “electron” is specified by: “electron” means “the thing x that plays the Θ-role”, where Θ is the theory of electrons. (This idea can be rendered precisely in second-order logic by a “Ramseyified” definition: “electron” means “the thing x such that Θ(x)”, where “Θ(x)” is the result of taking the theory of electrons Θ (understood as the conjunction of a set of sentences) and replacing all occurrences of “is an electron” with the (second-order) variable “x” (Lewis 1970).)
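Schematically, for a single theoretical term (a simplified sketch; Lewis’s construction handles several T-terms at once with a tuple of variables), write Θ(x) for the theory with the second-order variable x in place of “is an electron”. Then the Ramsey sentence and the Lewis-style definition take the forms:

```latex
% Ramsey sentence: the theory's commitments with the T-term quantified away
\exists x\, \Theta(x)

% Definition of the T-term as the unique realizer of the theoretical role
\text{electron} \;=_{\mathrm{df}}\; \iota x\, \Theta(x)
```

The first sentence carries the theory’s content without using the T-term; the second fixes the T-term’s meaning as whatever uniquely plays the Θ-role.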

Two features of this theory of meaning lay groundwork for later discussion. First, the meaning of any T-term is theory-relative since it is determined by the term’s deductive connections within a theory. Second, the positivists distinguished analytic truths (sentences true in virtue of meaning) and synthetic truths (sentences true in virtue of fact). “All bachelors are unmarried” and “All electrons have the property of being the x such that Θ(x)” are analytic truths, whereas “Kant was a bachelor” and “Electrons exist” are synthetic truths. The positivists inherited this distinction from Kant, but, unlike Kant, they rejected synthetic a priori truths. For them, there are only analytic a priori truths (all pure mathematics, for example) and synthetic a posteriori truths (all statements to the effect that a given claim is verified).

c. The Positivism Part of Logical Positivism

The positivists distinguished legitimate positive science, whose aim is to organize and predict observable phenomena, from illegitimate metaphysics, whose aim is to causally explain those phenomena in terms of underlying unobservable processes. We should restrict scientific attention to the phenomena we can know and banish unintelligible speculation about what lies behind the veil of appearances. This distinction rests on the observational-theoretical distinction (§3b): scientific sentences (even theoretical ones like “Electrons exist”) have meaningful verifiable content; sentences of metaphysics (like “God exists”) have no verifiable content and are meaningless.

Because of their hostility to metaphysics, the positivists “diluted” various concepts that have a metaphysical ring. For example, they replaced explanations in terms of causal powers with explanations in terms of law-like regularities so that “causal” explanations become arguments. According to the deductive-nomological (DN) model of explanation, pioneered by Hempel (1965), “Event b occurred because event a occurred” is elliptical for an argument like: “a is an event of kind A, b is an event of kind B, and if any A-event occurs, a B-event will occur; a occurred; therefore b occurred”. The explanandum logically follows from the explanantia, one of which is a law-like regularity.
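In schematic form, a DN explanation is a valid argument from law-like regularities and antecedent conditions to the explanandum:

```latex
\begin{array}{ll}
\forall x\,(Ax \rightarrow Bx) & \text{(law-like regularity: every $A$-event is followed by a $B$-event)}\\
Aa & \text{(antecedent condition: event $a$ is of kind $A$)}\\
\hline
Ba & \text{(explanandum: the $B$-event occurs)}
\end{array}
```

No causal powers appear anywhere in the schema; the explanatory work is done entirely by logical derivation from a regularity.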

Because they advocated a non-literal interpretation of theories, the positivists are considered to be antirealists. Nevertheless, they do not deny the existence or reality of electrons: for them, to say that electrons exist or are real is merely to say that the concept electron stands in a definite logical relationship to observable conditions in a structured system of representations. What they deny is a certain metaphysical interpretation of such claims—that electrons exist underlying and causing but completely transcending our experience. It is not that physical objects are fictions; rather, all there is to being a real physical object is its empirical reality—its system of relations to verifiable experience.

4. Quine’s Immanent Realism

Quine, an early critic of logical positivism, nevertheless shared their rejection of transcendental questions such as “Do electrons really exist (as opposed to being just useful fictions)?” Our evidence for molecules is similar to our evidence for everyday bodies, he argued; in each case we have a theory that posits an arrangement of objects that organizes our experience in a way that is simple, familiar, predictive, covering, and fecund. This is just what it is to have evidence for something. So, if we have such an organizing theory for molecules, then we can no more doubt the existence of molecules than we can doubt the existence of ordinary physical bodies (Quine 1955). Quine thus arrived at a realism not unlike the empirical realism of the logical positivists.

However, Quine rejected their theory of meaning and its central analytic-synthetic distinction, arguing that theoretical content cannot be analytically welded to observational content. The positivists, he argued, confuse the event of positing with the object posited. Yes, scientists conventionally introduce posits (an event), as Stoney introduced the term “electron” in 1894: “the fundamental unit of electric charge that permanently attaches to atoms”. But no, scientists do not treat the conventions as analytic truths that cannot be revised without a change of meaning. Scientists did not treat Stoney’s definition as binding analytic truth and “Electrons exist” as a synthetic hypothesis whose truth must be verified. More generally, Quine argued, once the explicit definitional route failed and Carnap allowed the meaning of “electron” to be a function of the totality of its logical connections within a theory, Carnap had already adopted meaning holism, according to which one cannot separate the analytic sentences, whose truth-values are determined by the contribution of language, from the synthetic sentences, whose truth-values are determined by the contribution of fact.

Quine accepted meaning holism together with another thesis, epistemological holism, a doctrine often called “the Quine-Duhem Thesis” because Duhem used it to argue against Poincaré’s conventionalism. The Quine-Duhem thesis says that only a group of hypotheses can be falsified because only a group of hypotheses has observational consequences. If a single hypothesis, H, implies an observational consequence O and we get evidence for not-O, then we can deduce not-H. But a single hypothesis will typically not imply any observational consequence on its own. Take, for example, Gauss’s supposed mountaintop triangulation experiment to test whether space is Euclidean (§2a). Let H = “Space is Euclidean” and O = “The measured angle-sum of the triangle equals 180º”. Clearly H does not entail O without auxiliary assumptions: for example, A1 = “Light travels the shortest Euclidean paths”, A2 = “No physical force appreciably disturbs the light”, A3 = “The triangle is large enough for deviations from rectilinear paths to be experimentally detectable”, and so forth. Consequently, if the experiment yields not-O = “The measured angle-sum of the triangle is not equal to 180º”, we cannot deduce not-H = “Space is not Euclidean”. We can only deduce not-(H and A1 and A2 and A3 and so forth); that is, we can only deduce that one or more of the hypothesis and the auxiliary assumptions is false—perhaps space is Euclidean but some force is distorting the light paths to make it look non-Euclidean. Poincaré and the positivists reply that it is conventional or analytic that space is Euclidean; there is no fact of the matter. In rejecting conventionalism, Duhem and Quine claim that we may keep H and reject one of the Ai to accommodate not-O: any statement may be held true in light of disconfirming experience. [It is misleading, however, to call epistemological holism “the Quine-Duhem thesis”. 
For Duhem, epistemological holism holds only for physical theories, for rather special reasons; it does not extend to mathematics or logic and is not connected with theses about meaning. Quine extends epistemological holism from physics to all knowledge, including all knowledge traditionally regarded as a priori, even allegedly analytic statements.] Quine, but not Duhem, believed that our reluctance to revise mathematics and logic (because of their centrality to our belief-systems) does not entail their apriority (irrevisability in the light of evidence).
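The logical situation Duhem and Quine describe is modus tollens applied to a conjunction of the hypothesis with its auxiliaries:

```latex
\frac{\;(H \wedge A_1 \wedge A_2 \wedge A_3 \wedge \cdots) \rightarrow O \qquad \neg O\;}
     {\neg\,(H \wedge A_1 \wedge A_2 \wedge A_3 \wedge \cdots)}
```

The conclusion says only that at least one conjunct is false; logic alone does not say which, and that is the opening through which, on Quine’s view, any statement can be saved or surrendered.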

Moreover, if the analytic-synthetic distinction collapses, so too does the positivist separation of metaphysics from science. For Quine, metaphysical questions are just the most general and abstract questions we ask and are decided on the grounds we use to decide whether electrons exist. All questions are “internal” in the sense that they must be formulated in our home language and answered with our standard procedures for gathering and weighing evidence. In particular, questions about the reality of some putative objects are to be answered in terms of whether they contribute to a useful organization of experience and whether they withstand the test of experience.

5. Scientific Realism

In the 1970s, a particularly strong form of scientific realism (SR) was advocated by Putnam, Boyd, and others (Boyd 1973, 1983; Putnam 1962, 1975a, 1975b). When scientific realism is mentioned in the literature, usually some version of SR is intended. SR is often characterized in terms of two commitments (van Fraassen 1980):

SR1     Science aims to give a literally true account of the world.

SR2     To accept a theory is to believe it is (approximately) true.

However, scientific realists’ arguments and their interpretation of SR1 and SR2 often presuppose further commitments:

SR3     There is a determinate mind-independent and language-independent world.

SR4     Theories are literally true (when they are) partly because their concepts “latch on to” or correspond to real properties (natural kinds, and the like) that causally underpin successful usage of the concepts.

SR5     The progress of science asymptotically converges on a true account.

a. Criticisms of the Observational-Theoretical Distinction

Critics of positivism argued that there is no workable, well-motivated distinction between observational and theoretical vocabulary that would make the former unproblematic and the latter problematic (for example, Putnam 1962; Maxwell 1962; van Fraassen 1980). First, O-terms apply to apparently theoretical entities (for example, red corpuscle) and T-terms apply to apparently observable entities (for example, the moon is a satellite). Second, if T-terms were epistemologically or semantically problematic, that would have to be due to the unobservable nature of their referents. But in the continuous gradation between seeing with the unaided eye, with binoculars, with an optical microscope, with an electron microscope, and so on, there is no sharp cut-off between being observable and being unobservable where we could non-arbitrarily say: beyond this we cannot trust the evidence of our senses or apply terms with confidence. Third, the “able” in “observable” cannot be specified in a way that motivates a plausible distinction. Most “theoretical” entities can be detected (like electrons) with scientific instruments or theoretically calculated (like lunar gravity). The positivist may respond that they cannot be directly sensed, and are thus unobservable, but why should being directly sensed be the criterion for epistemological or semantic confidence? Fourth, observation is theory-infected: what we can both observe and employ as evidence is a function of the language, concepts, and theories we possess. A primitive Amazonian may observe a tennis ball (he notices it), but without the relevant concepts he cannot use it as evidence for any claims about tennis. Such arguments undermine a central distinction of the positivist program.

b. Putnam’s Critique of Positivistic Theory of Meaning

Putnam (1975a, 1975b) provides a general argument against all theories of meaning (Frege, Russell, Carnap, Kuhn), including positivist theories, which are classical in the relevant sense. Classical concepts have two characteristics: they determine their extensions in the world, and we can “grasp” them. To know the meaning of a directly interpretable O-term is to associate it with a concept (verification condition) which determines the term’s extension. In turn, to know the meaning of an indirectly interpretable T-term is to know its logical connections to directly interpretable terms. These two features of the classical view are:

(1)  To know the meaning of F is to be in a certain psychological state (of grasping F’s associated concept and knowing it is the meaning of “F”);

(2)  The meaning of F determines the extension of F in the sense that, if two terms have the same meaning, they must have the same extension.

If the meaning of “water” is the concept the clear, tasteless, potable, nourishing liquid found in lakes and rivers, then by (1) I must associate that concept with “water” if I’m to know its meaning and by (2) something will be water just in case it satisfies that concept.

Putnam’s famous Twin Earth argument (Putnam 1975b) is intended to show that all classical theories fail because (1) and (2) are not co-tenable. Suppose the year is 1740, when speakers did not know that water is H2O. Suppose too that another planet, Twin-Earth, is just like Earth except that a different liquid, whose chemical nature is XYZ, is the clear, tasteless, potable, nourishing liquid found in lakes and rivers. Suppose finally that Earthling Oscar and Twin-Earthling Twin-Oscar are duplicates and share the very same internal psychological states, so that Oscar thinks “water is the clear, tasteless, potable, nourishing liquid found in lakes and rivers” if and only if Twin-Oscar thinks “water is the clear, tasteless, potable, nourishing liquid found in lakes and rivers”. In other words, they grasp the same meaning and associate it with the word “water”; (1) is satisfied. But then (2) cannot be satisfied: meaning does not determine extension, because the extension of “water” (in English) = H2O yet the extension of “water” (in Twin-English) = XYZ. If (1), then not-(2). Conversely, if meaning does determine extension, then since the extension of “water” (on Earth) is not the extension of “water” (on Twin-Earth), Oscar and Twin-Oscar must associate different meanings with the term. Consequently, either (1) or (2) must go. Putnam keeps (2) and revises (1).
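Putnam’s moral here (same internal state, different extension) can be made vivid with a small computational toy. The dictionaries, kind labels, and function below are invented stand-ins for illustration; nothing in them comes from Putnam’s texts:

```python
# Toy model of the Twin Earth moral: duplicate internal states,
# different extensions. The environments and kind names are invented.

CONCEPT = frozenset({"clear", "tasteless", "potable", "found in lakes and rivers"})

# Each environment records which kind the term "water" was historically
# "locked onto" at its introduction there.
EARTH = {"baptized_kind": "H2O"}
TWIN_EARTH = {"baptized_kind": "XYZ"}

def extension(environment):
    # Externalist semantics: extension is fixed by the introducing
    # event in the speaker's environment, not by the grasped concept.
    return environment["baptized_kind"]

# Oscar and Twin-Oscar are psychological duplicates ...
oscar_state, twin_oscar_state = CONCEPT, CONCEPT
assert oscar_state == twin_oscar_state
# ... yet their words pick out different kinds, so the shared internal
# state cannot be what determines extension: (1) and (2) come apart.
assert extension(EARTH) != extension(TWIN_EARTH)
```

The point of the sketch is only structural: the speakers’ shared state is held fixed while extension varies with environment, so nothing “in the head” can be doing the extension-fixing work.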

c. Putnam’s Positive Account of Meaning

How is extension determined, if not classically? Putnam develops a causal-historical account of reference for natural kind terms (“water”) and physical magnitude terms (“temperature”). Think of these terms as being introduced into the language via an introducing event or baptism. The introducer points to an object (or phenomenon) and intones: “let ‘t’ apply to all and only objects that are relevantly similar (same kind, same magnitude) to this sample (or to whatever is the cause of this phenomenon)”. Later t-users learn conditions that normally pick out the referent of t, use these conditions to triangulate their usage with that of others and with extra-linguistic conditions, and intend their t-utterances to conform to the t-practices initiated in the introducing event. The term passes through the community so that reference is preserved. Then, on Putnam’s view, the extension of the term, the kind or magnitude that the term “locked on to” in the course of its introduction and historical development, is part of the meaning of the term. So H2O is part of the English meaning of “water” and (2) is satisfied: meaning determines extension since extension is part of the meaning. This gives an intuitively plausible reading of the Twin-Earth scenario: Oscar is talking about water (H2O) and Twin-Oscar is talking about Twin-water (XYZ).

On classical accounts, a speaker S correctly uses a term “t” to refer to an object x only if x uniquely satisfies a concept, description or verification procedure or theory that S associates with t. In the 1740s English-speakers lacked such uniquely identifying knowledge, though we would naturally say they were using “water” as we do — to refer to H2O. On Putnam’s account, S correctly uses t to refer to x only if S is a member of a linguistic community whose t-usage (via their linguistic and extra-linguistic interactions) is causally or historically tied to the things or stuff that are of the same kind as x. Realistic semantics ties correct usage to things in the world using causal relations. Because truth is defined in terms of reference (for example, “a is F” is true if and only if the referent of “a” has the property expressed by “F”), truth on Putnam’s account is also a causal notion.

We now see why SR is committed to SR3 and SR4 above. Clearly SR1 requires SR3: science can aim at a literally true account of the world only if the world is some determinate way that an account can be literally true of. But Putnam’s semantics requires more: that there be natural kinds and magnitudes that our terms lock onto, which is SR4. Note SR5 also seems to require SR3 and SR4. To many realists who accept SR3, SR4 seems extravagant and mysterious. Natural kinds seem to be an unnecessary traditional philosophical apparatus imposed on realism without the support of, and indeed undermined by, science. Our best science suggests that natural kinds do not exist: water, for example, is not a simple natural kind, H2O, but a more complicated structure of constantly changing polymeric variations, and biological species are anything but simple kinds. And even if there were natural kinds, it seems unreasonable to expect that language could neatly lock onto them: why should our accidental encounters with various samples in our limited part of the universe put us in a position to lock onto universal kinds? Continuity of reference of the kind advocated by Putnam may be too crude. More fine-grained accounts have been proposed (Kitcher 1993; Wilson 1982, 2006) which acknowledge the complicated evolution of science and language yet avoid metaphysical extravagance.

d. Putnam’s and Boyd’s Critique of Positivistic Philosophy of Science

A common argument for SR is the following:

  1. An acceptable philosophy of science should be able to explain standard scientific practice and its instrumental success.
  2. Only SR can explain standard scientific practice and its instrumental success.
  3. Thus SR is the only acceptable philosophy of science.

This is an instance of inference to the best explanation (§5e). Here we look at premise 2, which follows logically from:

2a. There are only two contending explanations: SR and Idealism.

2b. Idealism fails to explain the practice and its success, while SR succeeds.

Premise 2a: For Putnam the distinction between realism and idealism is fundamentally semantic. In realist (or externalist) semantics the world leads and content follows: content is determined causally and historically by the way the world is; the content of “water” is H2O. In idealist (or internalist) semantics content leads and the world follows: the world is whatever satisfies the descriptive content of our thoughts; the content of “water” is the clear, tasteless, potable, nourishing liquid found in lakes and rivers. Idealism is a blanket category covering any account of meaning (including positivist, Kuhnian, and pragmatist accounts (§§7-8)) in the family of classical theories (§5b).

Premise 2b: Idealism fails to explain scientific practice and success in several ways:

(i) For the positivist, “Electrons exist” means “Θi implies ‘electrons exist’ and Θi is observationally correct” and “‘electron’ refers to x” means “x is a member of the kind X such that Θi(X)” (§3b). Existence, reference, and truth are all theory-relative. Take “electron” in Thomson’s 1898 theory, in Bohr’s 1913 theory, and in full quantum theory (late 1920s). Since the meaning of “electron” changes from theory to theory and meaning determines reference, the referent of “electron” changes from theory to theory. So, Thomson, early Bohr, later Bohr, Heisenberg, and Schrödinger were (a) talking about a different entity and (b) changing the meaning of “electron”. Putnam argues that this is a bizarre re-description of what we would normally say: they were (a) talking about the same entity and (b) making new discoveries about it. By contrast, realist truth and reference are trans-theoretic: once “electron” was introduced into the language by Stoney, it causally “locked onto” the property being an electron; then the various theorists were talking about that entity and making new discoveries about it. So realism, unlike positivism, saves our ordinary ways of talking and acting.

(ii) The conjunction objection: in practice we conjoin theories we accept. Realist truth has the right kind of properties, such as closure under the logical operation of conjunction (if T1 is true and T2 is true, then (T1 and T2) is true), to underwrite this conjunction practice. But positivist surrogates for truth, reference, and acceptance cannot underwrite this practice. From “T1 is observationally correct” and “T2 is observationally correct”, it does not follow that (T1 and T2) is observationally correct—their theoretical parts could contradict each other, for example, so that their conjunction would imply all observational sentences, both true and false. Again realism, but not positivism, succeeds. Similarly, the practice of conjoining auxiliary hypotheses with a theory to extend and test the theory cannot be accounted for by positivism. In {Newton’s theory of gravitation + there is no transneptunian planet}, “gravitation” has one meaning; in {Newton’s theory of gravitation + there are transneptunian planets}, it has another meaning. But the discovery that the latter was true and the former false should not be described as a change of meaning or reference of the word “gravitation”. Again realism succeeds where positivism fails.
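The conjunction objection can be put in a few lines of code. The mini-formalism is invented for illustration (sentences are bare strings, a theory is a pair of claim-sets, and inconsistency is just the presence of a literal and its negation):

```python
# Toy illustration of why observational correctness, unlike truth,
# is not closed under conjunction. "~p" is the negation of "p".

TRUE_OBSERVATIONS = {"o1", "o2"}

def inconsistent(sentences):
    # A set of literals is inconsistent if it contains both p and ~p.
    return any("~" + s in sentences for s in sentences)

# Each theory is observationally correct on its own ...
T1 = {"obs": {"o1"}, "theory": {"electrons are particles"}}
T2 = {"obs": {"o2"}, "theory": {"~electrons are particles"}}
assert T1["obs"] <= TRUE_OBSERVATIONS and T2["obs"] <= TRUE_OBSERVATIONS

# ... but their conjunction has a contradictory theoretical part, so
# (classically) it entails every observational sentence, true or false:
conjunction = T1["theory"] | T2["theory"]
assert inconsistent(conjunction)
```

Truth has no analogous failure: if both claim-sets were true, their union would be true, and hence consistent, as well.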

(iii) The No-Miracles Argument (NMA): everyone agrees that science is instrumentally successful and increasingly so. Scientists believe that newly proposed theories stand a better chance of success if they resemble current successful theories or if they are tested by methods informed by such theories, and they construct scientific instruments, experiments, and applications relying on current theories. Moreover, scientists are getting better at doing this—consider improvements in microscopy over the past three centuries. Their actions are successful and rely on their beliefs that current theories can be depended upon to produce a likelihood of success. These successes are a miracle on positivist principles. Why should reliance on observationally correct theories be expected to produce success, unless we believe what they say about unobservables? In contrast, SR explains these successes: scientists’ actions rely upon their belief that the theories they use are approximately true; those actions have a high degree of success; the best explanation of their success is that the theories relied upon are approximately true.

e. Inference to the Best Explanation

Argument 1-3 (§5d) is an instance of inference to the best explanation (IBE), an inferential principle that realists endorse and antirealists reject. IBE is the rule that we should infer the truth of the theory (if there is one) that best explains the phenomena. Thus we should infer SR because it best explains scientific practice and its instrumental success.

First, a few clarifications of IBE are in order. If IBE is to be non-trivial, “best” must not simply mean “antecedently most likely”, since of course we should infer the truth of the most likely explanation. Rather the best explanation must be characterized in terms of properties like “loveliest” or “most explainey” (Lipton 2004). Traditional examples of such properties are: it has wide scope and precision; it appeals to plausible mechanisms; it is simple, smooth, elegant, and non-ad hoc; and it underwrites contrasts (why this rather than that). Then IBE says we should accept the theory that optimizes such explanatory virtues when explaining the phenomena. The caveat “if there is one” blocks inferences to the best of a bad lot: the best explanation may not reach a minimally acceptable threshold. Finally, like any inferential principle that amplifies our knowledge, conclusions inferred by IBE are fallible: while they are more likely to be true, they could be false. Second, the “justification” for IBE is twofold. (1) It is needed for science. Simple enumerative induction (which entitles us to move probabilistically from “All observed As are Bs” to “All As are Bs”) cannot handle inferences from observed phenomena to their “hidden” causes. For example, we cannot inductively infer “Galaxy X is receding” from “Light from Galaxy X is red-shifted”, but we can infer by IBE that Galaxy X is receding because that is the best explanation of why its light is red-shifted. More strongly, Harman (1965) argues that IBE is needed to warrant straight enumerative induction: we are entitled to make the induction from “All observed As are Bs” to “All As are Bs” only if “All As are Bs” provides the best explanation of our total evidence. (2) Scientific uses of IBE are grounded in, and are just sophisticated applications of, a principle we use in everyday inferential practice.
If I see nibbled cheese and little black deposits in my kitchen and hear scratching noises in the walls, I reasonably infer that I have mice, because that best explains my evidence. IBE thus needs no more justification than does modus ponens—each is part of the very practices that constitute what rational inference is.
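The shape of the rule, including the “if there is one” caveat, can be caricatured as a decision procedure. The hypothesis names, scores, and threshold below are invented; real uses of IBE involve weighing explanatory virtues, not adding numbers:

```python
# A toy decision rule in the spirit of IBE: rank rival hypotheses by
# explanatory "loveliness" and infer the best one only if it clears a
# minimal threshold, blocking inference to the best of a bad lot.

def infer_best_explanation(candidates, threshold=0.5):
    """candidates: dict mapping hypothesis -> explanatory score in [0, 1]."""
    best = max(candidates, key=candidates.get)
    return best if candidates[best] >= threshold else None  # decline to infer

evidence_explainers = {
    "mice in the walls": 0.9,        # plausible mechanism, wide scope
    "burglar who eats cheese": 0.2,  # ad hoc, poor contrastive power
}
assert infer_best_explanation(evidence_explainers) == "mice in the walls"

bad_lot = {"gremlins": 0.1, "house curse": 0.05}
assert infer_best_explanation(bad_lot) is None  # no minimally acceptable explanation
```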

Realists employ IBE at different levels. At the ground-level, they observe surprising regularities like the phenomenological gas laws relating pressure, temperature, and volume. These cannot be just cosmic coincidences. Realists argue that observed gas behavior is as it is because of underlying molecular behavior; we have reason to believe the molecular hypothesis (by IBE) because it best explains the observed gas behavior. At this level, antirealist rejections of IBE seem stretched: it seems unsatisfactory to say either that we do not need an explanation (since it appears to be a guiding aim of inquiry to explain regularities where possible) or that observed gas behavior is as it is because gases behave as if they are composed of molecules (since ordinary and scientific practice distinguishes genuine explanations from just-so stories).

Realists also employ IBE at a meta-level (§5d): we should be realists about our current theories because only realism can explain how our methodological reliance on them leads to the construction of empirically successful theories (Boyd) or only realism can explain the way in which scientific theories succeed each other and the methodological constraints scientists impose on themselves when constructing new theories (Putnam). Relativity theorists felt bound to have Newton’s theory derivable in the limit from Einstein’s theory. Why? The realist answer is: “because a partially correct account of a theoretical object (as the gravitational field) must be replaced by a better account of the same theory-independent object (as the metric structure of spacetime)”. Similarly, realists claim that scientific progress is best explained by SR5, the thesis that science is converging on a true account of the world. As Putnam says, realism is the only hypothesis that does not make the success of science a miracle. At the meta-level, the alleged phenomenon is that our best scientific traditions and theories are instrumentally and methodologically successful; SR is alleged to be the best (or only) explanation of that phenomenon; thus we should infer SR. As we will see (§§6d, 7, 11b), it is not clear that these uses of IBE are legitimate, because the alleged phenomenon itself is questionable, or the SR-“explanation” does not explain, or no explanation may be needed, or alternative antirealist explanations may be better.

6. Constructive Empiricism

Van Fraassen (1980) proposed constructive empiricism (CE), arguing that we can preserve the epistemological spirit of positivism without subscribing to its letter. Van Fraassen’s is an antirealism concerning unobservable entities. Recognizing the difficulties of basing antirealism on a “broken-backed” linguistic distinction between O-terms and T-terms, he allows our judgments about unobservables to be literally construed but, he argues, our evidence can never entitle us to our beliefs about unobservables. CE is consistent with SR3 and SR4 (though it does not commit to them, it has no quarrels with realist objectivity or semantics) but replaces SR1, SR2, and SR5 respectively with:

CE1     Science aims to provide empirically adequate theories of the phenomena.

CE2     To accept a theory is to believe it is empirically adequate, but acceptance has further non-epistemic/pragmatic features.

CE5     The progress of science produces increasing empirical adequacy.

A theory T is empirically adequate if and only if what T says about all actual observable things and events is true (that is, T saves all the phenomena, or T has a model that all actual phenomena fit in). Empirical adequacy is logically weaker than truth: T’s truth entails its empirical adequacy but not conversely. But it is still quite strong: an empirically adequate theory must correctly represent all the phenomena, both observed and unobserved. CE2 distinguishes epistemic and pragmatic aspects of acceptance. Epistemic acceptance is belief; beliefs are either true or false. Pragmatic acceptance involves non-epistemic commitments to use the theory in certain ways (basing research, experiments, and explanations on it, for example); commitments are neither true nor false; they are either vindicated or not. CE5 acknowledges that there is instrumental progress without trying to explain it. CE concedes a realist semantics (“electron”-talk is not highly derived talk about observables) but preserves the spirit of positivism by recommending agnosticism about a theory’s literal claims about unobservables.

a. The Semantic View of Theories and Empirical Adequacy

On the positivist view, a theory T is a syntactic object: T is the set of theorems in a language generated from a set of axioms (the laws of T) and derivation rules. The empirical content (the entire literal content) of T is T/O, the theorems expressible in the observational vocabulary. A theory T is empirically (observationally) adequate if T/O is the class of all true observational sentences.  Two theories, T and T’, are empirically (observationally) equivalent if T/O = T’/O. Since such theory pairs have the same literal content and differ only in their non-literal, theoretical content, they are merely inter-definable variants of a common observational basis: they say the same thing but express it differently. There is no fact of the matter whether T or T’ is true (both are or neither are), and whether we work with T or T’ is purely a pragmatic matter concerning which is simpler, more convenient, and so forth. For SR and CE there is a fact of the matter: at most one of T, T’ can be true. For SR there may be reasons to believe one of T, T’. For CE there can be no epistemic reason to believe one over the other, though there may be pragmatic reasons to accept (commit to using) one over the other. Van Fraassen needs a different account of theories if he is to agree with realists about literal content and there being a fact of the matter about empirically equivalent theories.

For him, a theory T is a semantic object, the class of models, A = <D, R1, R2, …, Rn>, that satisfy its laws (where D is a set of objects and Ri are properties and relations defined on them). For example, D might contain billiards and molecules; the property is elastic in A might be instantiated by both billiards and molecules, is a molecule by some members of D, and is a billiard ball by others. Now let A’ = <D’, R’1, R’2, …, R’m> (where m < n, D’ is a proper subset of D, and R’i = Ri/D’ (Ri restricted to D’)). Intuitively A’ is obtained from A by removing all unobservables, so D’ would contain billiard balls but not molecules, is elastic would now be restricted to billiard balls, is a molecule would not be instantiated, and so forth. Then A’ is an empirical substructure of A, the result of restricting the original domain to observables and its properties and relations accordingly. T is empirically adequate if and only if T has an empirical substructure that all observables fit in. Two theories, T and T’, are empirically equivalent if all the observables in a model of T are isomorphic to the observables in a model of T’. Such theory pairs agree in what they say about observables but may disagree in what they say about unobservables. Thus CE can agree with SR that at most one of T, T’ can be true and to be a realist about that theory is to believe it is true (SR2). Yet CE can preserve the spirit of positivism by holding that we can never have reason to believe a theory; at most we have reason to believe it is empirically adequate. Friedman (1982) questions whether van Fraassen achieves this.
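Van Fraassen’s construction can be sketched concretely. The following toy, with an invented three-object domain, represents a model as a domain plus named relations and computes the empirical substructure by restriction, as in the definition above:

```python
# Minimal sketch of a model <D, R1, ..., Rn> and its empirical
# substructure. The domain and relations are invented toy values.

def empirical_substructure(domain, relations, observables):
    """Restrict a model to its observable objects."""
    d_prime = domain & observables
    r_prime = {name: {t for t in tuples if set(t) <= d_prime}
               for name, tuples in relations.items()}
    return d_prime, r_prime

domain = {"billiard1", "billiard2", "molecule1"}
relations = {
    "is_elastic": {("billiard1",), ("molecule1",)},
    "is_a_molecule": {("molecule1",)},
}
observables = {"billiard1", "billiard2"}

d_prime, r_prime = empirical_substructure(domain, relations, observables)
assert d_prime == {"billiard1", "billiard2"}
assert r_prime["is_a_molecule"] == set()          # unobservables removed
assert r_prime["is_elastic"] == {("billiard1",)}  # relation restricted to D'
```

Empirical adequacy then asks only that all actual observable phenomena fit into such a restricted structure, leaving the full model’s unobservable part epistemically open.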

b. The Observable-Unobservable Distinction

Since CE recommends agnosticism about unobservables but permits belief about observables, the policy requires an epistemologically principled distinction between the two. Though rejecting the positivists’ distinction between T-terms and O-terms, van Fraassen defends a distinction between observable and unobservable objects and properties, a distinction that grounds his policy of agnosticism concerning what science tells us about unobservables. There is a fact of the matter about what is observable-for-humans: given the nature of the world and of the human sensory apparatus, some objects/events/properties possess the property is observable-for-humans; others lack that property; the former are observables, the latter unobservables. For example, Jupiter’s moons are observable because a human could travel close enough to see them unaided, but electrons are unobservable because a human could never see one (that is just the nature of humans and electrons). Van Fraassen also claims that the limits of observation are disclosed by empirical science and not by philosophical analysis—what is observable is simply a fact disclosed by science. It should be noted that the distinction, as he draws it, has no a priori ontological implications: flying horses are observable but do not exist; electrons may exist but are unobservable.

Critics (Churchland 1985; Musgrave 1985; Fine 1986; Wilson 1985) complain that this distinction cannot ground a sensible epistemological policy. First, van Fraassen runs together different notions, none of which has special epistemological relevance. What is observable is variously taken as: what is detectable by human senses without instruments (Jupiter’s moons); what can be “directly” measured as opposed to “indirectly” calculated; what is detectable by humans-qua-natural-measuring-instruments (as thermometers measure temperature, humans “measure” observables). Critics ask why any of these should divide the safe from the risky epistemic bet. Why is it legitimate to infer the presence of mice from casual observation of their tell-tale signs but illegitimate to infer the presence of electrons from careful and meticulous observation of their tell-tale ionized cloud-chamber tracks?

Second, many critics find van Fraassen’s agnosticism about unobservables unwarrantedly selective. CE claims that we ought to believe what science tells us about all observables (both observed and unobserved) but not about unobservables. In each case there is a gap between our evidence (what has been observed) and what science arrives at (claims about all observables (CE) or claims about all observables and unobservables (SR)). Why is it legitimate to infer from what we have observed in our spatiotemporally limited surroundings to everything observable but not to what is unobservable (though detectable with reliable instruments or calculable with reliable theories)? Our experience is limited in many ways, including lacking direct access to: medium-sized events in spatiotemporally remote regions, events involving very small or very large dimensions, very small or very large mass-energy, and so forth. Why should inductions to claims about the first be legitimate but not to claims about the others?

Third, CE’s epistemic policy is pragmatically self-defeating or incoherent. Suppose a scientific theory T tells us “A is unobservable by humans”. In order to use T to set our epistemic policy we must accept T; that is, believe what T tells us about observables, but we should be agnostic about what T tells us about unobservables, including whether A is observable or unobservable. But if we should be agnostic about A’s observability, then we do not know whether or not we should believe in As. A consistent constructive empiricist will have trouble letting science determine what is unobservable and using that determination to guide her epistemic policy—often she will not know what not to believe.

Finally, if we interpret the language of science literally (as van Fraassen does), then we ought to accept that we see tables if and only if we see collections of molecules subject to various kinds of forces. But then if we are willing to assert there are tables we should be willing to assert that there are collections of molecules (Friedman 1980; Wilson 1985).

c. The Argument from Empirically Equivalent Theories

As realists rely on IBE, antirealists rely on EET:

  1. If T and T’ are empirically equivalent, then any evidence E confirms/infirms T to degree n if and only if E confirms/infirms T’ to degree n.
  2. If (E confirms/infirms T to degree n if and only if E confirms/infirms T’ to degree n), then we have no reason to believe T rather than T’ or vice versa.
  3. For any T, there exists a distinct empirically equivalent T’.
  4. Thus, for any theory T, we have no reason to believe it rather than its empirically equivalent rivals.

The argument appears to be valid, but each of its premises can be challenged (Boyd 1973; Laudan and Leplin 1991). Premise 1 is under-specified. Any abstract, sufficiently general theory (for example, Newton’s theory of gravitation) has no empirical consequences on its own. Trivially, two such theories are empirically equivalent since each has no empirical consequences; so any evidence equally confirms/infirms each. But no realist will worry about this. In order to give Premise 1 bite, the theories must have empirical consequences, which they will have only with the help of auxiliary hypotheses, A (§4). But then Premise 1 becomes:

1A. If (T and A) and (T’ and A) are empirically equivalent, then any evidence E confirms/infirms T to degree n if and only if E confirms/infirms T’ to degree n.

Whether 1A is plausible depends on what A is. If A is any hypothesis which has been accepted to date, then 1A is false because current empirical indistinguishability does not entail perpetual empirical indistinguishability, since evidence and auxiliary hypotheses change over time as we discover new instruments, methods, and knowledge. But if A is any hypothesis whatsoever, then there is no reason to think that the antecedent of Premise 1A is true, and thus 1A is again a trivial, vacuous truth. Moreover, the connection between empirical equivalence (agreement about observables in the sense of §6a) and evidential support is questionable (Laudan and Leplin 1991). Premise 1 presupposes that all and only what a theory says or implies about observables is evidentially relevant to that theory. But this is false: Brownian motion, though not an empirical consequence of atomic theory, supported it. Thus T and T’ could be empirically equivalent, yet one could have better evidential support than the other; for example, T, but not T’, might be derivable from a more comprehensive theory that entails evidentially well-supported hypotheses.

Some IBE-realists resist Premise 2: T and T’ may be equally confirmed by the evidence, yet one of them may possess superior explanatory virtues (§5e) that make it the best explanation of the evidence and thus, by IBE, more entitled to our assent—especially if the other is a less natural, ad hoc variant of the “nice” theory. The success of this response depends on whether explanatorily attractive theories are more likely to be true—why should nature care that we prefer simpler, more coherent, more unified theories?—and on whether a convincing case can be made for the claim that we are evolutionarily equipped with cognitive abilities that tend to select theories that are more likely to be true because their explanatory virtues appeal to us (Churchland 1985).

The very strong, very general conclusion of EET, however, depends on the very strong, very general Premise 3, which, critics argue, is typically supported either by “toy” examples of theory-pairs from the history of physics, by contrived examples of theories, one of which is transformed from the other by a general algorithm (Kukla 1998), or by some tricks of formal logic or mathematics. None is likely to convince any realist (Musgrave 1985; Stanford 2001).
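The flavor of such algorithmic constructions can be conveyed in a few lines. The representation of a theory as two claim-sets, and the claims themselves, are invented for illustration; this is a caricature of Kukla-style recipes, not any particular one:

```python
# Toy recipe (invented) for manufacturing an "empirically equivalent
# rival": keep every observable claim, flip every unobservable one.

def manufacture_rival(theory):
    """theory: dict with 'observable' and 'unobservable' claim sets."""
    return {
        "observable": set(theory["observable"]),                    # same empirical content
        "unobservable": {"~" + c for c in theory["unobservable"]},  # opposite posits
    }

T = {"observable": {"tracks appear in cloud chambers"},
     "unobservable": {"electrons cause the tracks"}}
T_rival = manufacture_rival(T)

assert T_rival["observable"] == T["observable"]      # agree on all observables
assert T_rival["unobservable"] != T["unobservable"]  # yet distinct theories
```

As the critics note, nothing about such a contrived rival gives a realist reason to take it seriously as a live alternative.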

d. Constructive Empiricism, IBE, and Explanation

For van Fraassen, a theory’s explanatory virtues (simplicity, unity, convenience of expression, power) are pragmatic—a function of its relationship to its users. This implies that explanatory power is not a rock bottom virtue like consistency (Newton could decline to explain gravity, but he could not decline to be consistent) and does not confer likelihood of truth or empirical adequacy (Newton’s theory explained lots of phenomena but is neither true nor empirically adequate). The fact that a theory satisfies our pragmatic desiderata has no implications for its being true or empirically adequate, contrary to what IBE-realists maintain.

IBE is a rule guiding rational choice among rival hypotheses. But there is always the option of declining to choose, of remaining agnostic. To undercut this general option, van Fraassen argues, the realist must commit to some claim like: every regularity and coincidence must be explained. Van Fraassen challenges this alleged requirement. First, the quest for explanation has to stop somewhere; even “realist” explanations must bottom out in brute fundamental laws; so, why cannot an antirealist bottom out in brute phenomenological laws? Second, scientists do not consider themselves bound by a principle that demands that every correlation be explained. In quantum mechanics, for example, spin states of entangled particles are perfectly correlated, yet every reasonable explanation-candidate has failed, and scientists no longer insist that they must be explained, contrary to what realists allegedly require (Fine 1986). However, these arguments may be directed at a straw man, since no realist is likely to require that every regularity be explained. Musgrave (1985), for example, suggests that these arguments confuse realism (the view that science aims to explain the phenomena where possible) with essentialism (the view that science aims to find theories that are fundamentally self-explanatory): it is not antirealist to claim that Newton explained a host of phenomena in terms of gravity but declined to explain gravity itself.

Van Fraassen also denies that only realism can explain the phenomena. There are rival explanations that are compatible with CE, and some of them are more plausible than realism. In §5e we distinguished ground-level and meta-level uses of IBE and suggested that this strategy might be more promising for the latter than the former. Recall the realists’ reasoning: there is a surprising phenomenon—our current scientific theories, scientific methodology, and the history of modern science, are surprisingly successful—which cries out for explanation; the only explanation is that the theories are approximately true; thus, by IBE, realism. But there is a more mundane explanation: many very smart people construct our scientific theories and methods, throwing out the unsuccessful ones (which we tend to ignore (Magnus and Callender 2004)) and refining and keeping only the successful ones. A variant of this success-by-design-and-trial-and-error is explanation of success in Darwinian terms: just as the mouse’s running away from its enemy the cat is better explained in Darwinian terms (only flight-successful mice survive and pass their genes along) than in representational terms (the mouse “sees” that the cat is his enemy and therefore runs), so too the instrumental success of science is better explained in Darwinian terms (only the successful theories survive) than in realist terms (they are successful because they are approximately true). These rival antirealist explanations of success are controversial, however (Musgrave 1985). The success-by-design explanation does not seem right, since scientists often construct theories that make completely unexpected, novel predictions.

7. Historical Challenges to Scientific Realism

A range of arguments attempts to show that scientific realism rests on an implausible reading of the history of science. (In what follows, T* and T are successor and predecessor theories in a sequence of theories; for example, think of the sequence <Aristotelian physics, Medieval physics, Cartesian physics, Newtonian physics, (Newtonian + Maxwellian physics), Special Theory of Relativity (STR), (General Theory of Relativity (GTR) + Quantum Mechanics (QM)), …> as ordered under the relation T* succeeds T.) Both realists and empiricists think of science as being cumulative and progressive. For empiricists, cumulativeness requires at least that T* have more true (and perhaps fewer false) observational consequences than T. Since the content of a theory on logical positivists’ views is exhausted by its observational consequences, if T* has more true observational consequences than T, then T* is “more true than” T. However, SR-realists require more. Because of SR5, they are committed to a historical thesis: that science asymptotically converges on the truth. Because of their externalist semantics, they are committed to theses about reference: theoretical terms genuinely refer, reference is trans-theoretic, and reference is preserved in T-T* transitions (so that “electron” in Bohr’s earlier and later theories refers to the same object and the later theory provides a more adequate conception of that object). Finally, because of their meta-level appeals to IBE, they are committed to SR5 because it best explains the instrumental success of our best theories and the increasing instrumental success of sequences of theories (where T* is more successful than T because T* is closer to the truth than T), and so forth.

a. Kuhn’s Challenge

According to Kuhn (1970), the standard view of science as steadily cumulative (presupposed by both positivism and realism) rests on a myth that is inculcated by science education and fostered by Whiggish historiography of science. When the myth is deconstructed, we see science as historically unfolding through stable cycles of cumulativeness, punctuated by periods of crisis and revolution.

During periods of normal science, practitioners subscribe to a paradigm. They have the same background beliefs about: the world, its fundamental ontology, processes, and laws (statements that are not to be given up); correct mathematical and linguistic expression; scientific values, goals, and methods; scientifically relevant questions and problems; and experimental and mathematical techniques. Within a given paradigm P—for example, Newtonian physics—there is a relatively stable background: a world of Newtonian particles moving in space and time subject to Newtonian forces (like gravity) and obeying Newton’s laws. There are exemplary methods and techniques—for example, to solve a problem of motion, bring it under the equation, F = ma, which manifests itself across the board and is treated as counterexample-free. And there are shared values—for example, unified mathematical representation of phenomena—and problems (for example, the solution of the arbitrary n-body problem for a system of gravitationally attracting bodies or the resolution of the anomaly in the orbit of Uranus) that require further articulation of the theory. In normal science, cumulativeness occurs: the theory becomes extended to answer its own questions and cover its phenomena. (Kuhn thinks that clean views of history come from focusing too much on normal science.) But sooner or later anomalies crop up that the paradigm cannot handle (for example, the failure to bring electromagnetism, black body radiation, and Mercury’s orbit under the Newtonian scheme). There is a crisis that only a revolutionary new paradigm (for example, STR, QM, and GTR) can handle. Once in place, the new paradigm P* provides a radically new way of looking at the world.

Kuhn (1970) was interpreted (wrongly, but with some justice given his sometimes incautious language) as arguing for an extremely radical constructivist/relativist position: P and P* are incommensurable in the sense that they are so radically distinct that they cannot be compared; the P and P* scientists work “in different worlds”, “see different things”, use different maps (theories and conceptual schemes) and also have different rules for map-making (methods), different languages, and different goals and values. As a result, during the transition, scientists have to learn a new way of seeing and understanding phenomena—Kuhn likens the experience to a “gestalt switch” or “religious conversion”. There is no commonality—in ontology, methodology, observational base, or goals/values—that P and P* scientists can use to rationally adjudicate their disagreements. There is no paradigm-independent reason for preferring P* over P, since such reasons would have to appeal to something common (common observations, methods, or norms), and they share no commonality. Even more strongly, there is no paradigm-independent, objective fact of the matter concerning which of them is correct. If this were true, then all standard theses about progress would be undermined. There is no referential or meaning continuity across paradigms; no sense can attach to theses like T* is more true than T, T is a limiting case of T*; or T* preserves all T’s true observational consequences, since such theses presuppose T-T* commensurability.

Critics have pointed out that this view is too extreme (McMullin 1991). The history of science shows more continuity and fewer radical revolutions than this account attributes to it. Scientists make rational choices between “paradigms” (for example, most scientists who were skeptical of atoms came to reasonably believe in them as a result of Perrin’s experiments). Many scientists work within two traditions without experiencing gestalt shifts (for example, 19th century energetics and molecular theories). T and T* advocates often argue, criticize each other, and rationally persuade each other that one of the two is incorrect. How could this be, if the radical interpretation of Kuhn were correct?

Kuhn clearly did not intend the radical reading, and in later writings (1970 Postscript, 1977) he distinguishes his views from such radical, subjectivist, and relativist interpretations. Paradigm transitions and incommensurability, he argues, are never as total as the radical interpretation assumes: enough background (history, instrumentation, and every-day and scientific language) is shared by P- and P*-adherents to underwrite good reasons they can employ to mount persuasive arguments. Moreover, he lists several properties any theory should have—accuracy (of description of experimental data), consistency (internal and with accepted background theories), scope (T should apply beyond original intended applications), fecundity (T should suggest new research strategies, questions, problems), and simplicity (T should organize complex phenomena in a simple tractable structure). Application of these criteria accounts for progress and theory choice. However, these are “soft” values that guide choices rather than “hard” rules that determine choices. Unlike rules, (i) they are individually imprecise and incomplete, and (ii) they can collectively conflict (and there is no a priori method to break ties or resolve conflicts). Moreover, Kuhn argues, an individual’s choice is guided by a mixture of objective (accuracy, and so forth) and subjective (individual preferences like cautiousness and risk-taking, and so forth) factors, the latter influencing her interpretation and weighing of the criteria. A cautious scientist may be unwilling to risk a high probability of being wrong for a small probability of being informative in novel ways, and vice versa for the risk-taker. In this way Kuhn (1977) offers a middle ground between theory choices being completely subjective and being objective (qua being determined by rules applied to evidence). 
This “softer” view of science, he argues, enables new theories to get off the ground: progress can be made only if there are values to allow rational discussion and argument but not hard rules that would pre-determine an answer (because then everyone would conform to the rule and not risk proposing new alternatives).

Kuhn has shown that evidence and reasons are sometimes incapable of deciding between P and P*. But a realist may concede that hard choices occur: at most one of P or P* is correct, and we may have to wait and see which, if either, pans out. Temporary gridlock need not amount to permanent undecidability: the lack of decisive reasons at a time does not imply that there will be no decisive reasons forever; when more evidence is acquired and its relevance better understood, convincing reasons usually emerge. Realists should concede these points; many in the 21st century do. But no SR-realist can accept the thesis, never abandoned by Kuhn, that there is no fact of the matter whether P or P* is correct.

b. Laudan’s Challenge: The Pessimistic Induction

Although it is widely agreed that our best theories are instrumentally successful and many T-T* sequences show increasing success, Laudan (1981) disputes that success and progress are to be explained in realist SR5-terms of increasing approach to the truth. The history of science, Laudan argues, shows that referential success is neither necessary nor sufficient for empirical success: not necessary because the central terms of many successful theories did not refer (19th century ether, caloric, and phlogiston theories, for example); not sufficient because the central terms of many failing theories did, by our lights, refer (18th century chemical atomism, Prout’s hypothesis for most of the 19th century, Wegener’s theory of continental drift in the first half of the 20th century, and so forth).

Moreover, realist notions of approximate truth and convergence-to-the-truth are problematic. Despite best efforts, no satisfactory metric has emerged that would characterize distance from the truth or the truth-distance between T and T* (Laudan 1981; Miller 1974; Niiniluoto 1987). For some T-T* sequences in mathematical physics, there are limit theorems whereby T can be derived as a special case of T* under appropriate limiting conditions. For example, special relativity passes asymptotically into Newtonian mechanics as (v/c)² approaches 0. Such theorems suggest that Newtonian mechanics yields close to correct answers for applications close to the relativistic limits (not too fast). In this way realists can appeal to them to argue that T* extends and improves upon T. However, for many T-T* sequences there are no analogous limit theorems: Lavoisier’s oxygen theory is a progressive successor of Priestley’s phlogiston theory, yet there is no neat mathematical relationship indicating that phlogiston theory is a limiting case of oxygen theory. Moreover, even for cases where T* approaches T as some parameter approaches a limit, it is controversial what to conclude. If reference is determined by meaning (§5b), then “massnewton” and “masseinstein” refer to different things, and the fact that there is a derivation of classical mass-facts from relativistic mass-facts under certain conditions does nothing to show that T* provides a more global, more accurate description of mass-facts than T (since they are talking about different things); the limit theorems show at most that some structure of abstract relations but not semantic content gets preserved in the T-T* transition (§11a).
But if reference is determined by causal-historical relations (§5c), then the references of some key terms of T get lost in the transition to T*—“ether” was a key referring term of classical physics, but there is no ether in special relativity; so how can classical physics capture part of the same facts that special relativity captures when all its claims about the ether are either plainly false or truth valueless?
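The limiting behavior that such theorems describe can be checked numerically. The following Python sketch (not from the original text; the speeds and tolerances are illustrative choices) compares relativistic and Newtonian kinetic energy: the two agree closely when (v/c)² is small and diverge sharply when it is not.

```python
import math

C = 299792458.0  # speed of light, m/s

def relativistic_ke(m, v):
    """Relativistic kinetic energy: (gamma - 1) * m * c^2."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return (gamma - 1.0) * m * C ** 2

def classical_ke(m, v):
    """Newtonian kinetic energy: (1/2) * m * v^2."""
    return 0.5 * m * v ** 2

m = 1.0  # kg

# At v = 0.01c the two formulas agree to better than one part in 10^4 ...
v_slow = 0.01 * C
print(abs(relativistic_ke(m, v_slow) - classical_ke(m, v_slow)) / classical_ke(m, v_slow))

# ... but at v = 0.5c the Newtonian formula underestimates the energy by roughly 24%.
v_fast = 0.5 * C
print(relativistic_ke(m, v_fast) / classical_ke(m, v_fast))
```

This is the sense in which Newtonian mechanics "yields close to correct answers" in the low-speed regime while being, strictly read, false of the high-speed regime.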

These are serious challenges to SR. On one hand, it is hard to shake the idea that theories are successful because they are “onto something”. Yes, we build them to be successful, but their scope and novel predictions generally greatly outstrip our initial intentions. Realists tend to see the history of science as supporting an optimistic meta-induction: since past theories were successful because they were approximately true and their core terms referred, so too current successful theories must be approximately true and their central terms refer. On the other hand, skeptics see the history of science as supporting a pessimistic meta-induction: since some (many, most) past successful theories turned out to be false and their core terms not to refer, so too current successful theories may (are likely to) turn out to be false and their key terms not to refer. Realists must be careful not to interpret history of science blindly (ignoring the successes of ether theories and the failures of early atomic theories, for example) or Whiggishly (begging questions by wrongly attributing to our predecessors our referential intentions—by assuming, for example, that Newton’s “gravity” referred to properties of the space-time metric).

8. Semantic Challenges to Scientific Realism

Realist truth and reference are word-world/thought-world correspondences (SR4), an intuitively plausible view with a respectable pedigree going back to Aristotle. Moreover, some IBE-realists argue that real correspondences are needed to explain the successful working of language and science: we use representations of our environment to perform tasks; our success depends on the representations causally “tracking” environmental information; truth is a causal-explanatory notion. Several philosophical positions challenge this idea.

a. Semantic Deflationism

Tarski showed how to define the concept is true-in-L (where L is a placeholder for some particular language). Treating “is true” as predicated of sentences in a formal language, he provided a definition of the concept that builds it up recursively from a primitive reference relation that is specified by a list correlating linguistic items syntactically categorized with extra-linguistic items semantically categorized. Thus, for example, a clause like “‘electron’ refers to electrons” would be on this list if the language were English. Although Tarski’s definition is technically sophisticated, the main points for our purposes are these. First, it satisfies an adequacy condition (referred to as Convention T): for every sentence P (of L), when P is run through the procedure specified by the definition, “P” is true (in L) if and only if P. Thus, for example, “Electrons exist” is true-in-English if and only if electrons exist, and so forth. Second, truth and reference are disquotational devices: because of the T-equivalences, to assert that “snow is white” is true (in English) is just to assert that snow is white; similarly, to assert that “snow” refers (in English) to some stuff is just to assert that the stuff is snow.
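Tarski's recursive strategy can be exhibited in miniature. The following Python sketch is a toy illustration (the two-predicate "language" and its stipulated extensions are stand-ins invented for the example, not anything in Tarski): truth-in-L is built up compositionally from a list-like base clause, and the T-equivalences fall out.

```python
# Toy Tarski-style truth definition for a tiny fragment of English.
# The "reference list" pairs predicate words with their extensions;
# the vocabulary and the "facts" recorded here are illustrative stand-ins.
extension = {
    "snow": {"snow"},           # "snow" refers to the stuff snow
    "white": {"snow", "milk"},  # things counted as white in the toy world
}

def true_in_L(sentence):
    """Recursive truth definition: the base clause consults the
    reference list; 'not' and 'and' clauses build truth up compositionally."""
    if sentence.startswith("not "):
        return not true_in_L(sentence[4:])
    if " and " in sentence:
        left, right = sentence.split(" and ", 1)
        return true_in_L(left) and true_in_L(right)
    subject, predicate = sentence.split(" is ")
    return subject in extension[predicate]

# Convention T in action: "snow is white" is true-in-L iff snow is white.
print(true_in_L("snow is white"))
print(true_in_L("not snow is white"))
```

Nothing in the definition appeals to anything beyond the list and the recursive clauses, which is exactly the feature deflationists (discussed next) seize upon.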

Semantic deflationists (Fine 1996; Horwich 1990; Leeds 1995, 2007) argue that Tarski’s theory provides a complete account of truth and reference: truth and reference are not causal explanatory notions; they are merely disquotational devices that are uninformative though expressively indispensable—useful predicates that enable us to express certain claims (like “Everything Putnam said is true”) that would be otherwise inexpressible. So long as a truth theory satisfies Convention T, these things will be expressible, and a trivial list-like definition of reference (“P” refers to x iff x is P) will suffice to generate the T-sentences. As native speakers, we know, without empirical investigation, that “electron” refers to electrons just by having mastered the word “refers” in our language. Our beliefs about electrons could be mistaken, but not our belief that “electron” applies to electrons. In particular, we cannot coherently suppose that “electron” does not refer to electrons because this is but a step away from a formal contradiction—some electrons are not electrons. Deflationists argue that such “thin” concepts and trivial relations cannot bear the explanatory burdens that scientific realists expect of them.

Deflationism is a controversial position. Field, before he endorsed deflationism, argued that Tarski merely reduced truth to a list-like definition of reference, but such a definition is physicalistically unacceptable (Field 1972). Chemical valence was originally defined by a list pairing chemical elements with their valence numbers, but later this definition was unified in terms of the number of outer shell electrons in the element’s atoms. Field argued that reference should be similarly reduced to physical notions. While this seems an implausibly strong requirement, many philosophers think it obvious that the success of action depends on the truth of the actors’ beliefs: John’s success in finding rabbits in the upper field, they argue, depends on his rabbit-beliefs corresponding to the local rabbits (Liston 2005). Deflationists respond that John’s success is explained by there being rabbits there (no need to mention ‘true’), but deflated explanations become strained when John is not an English thinker, because the sentences Jean holds true (‘Des lapins habitent le champ supérieur’, “Rabbits inhabit the upper field”) must first be translated into sentences we hold true and then disquoted—a strategy known as extended disquotationalism—and it is difficult to see why Jean’s success has anything to do with his sentences translating into ours.

Deflationists reject SR4 and SR5, but this does not mean they cannot believe what our best scientific theories tell us: deflationists can and typically do accept SR3 as well as all the object-level inferences that science uses, including object-level IBE (Leeds 1995, 2007). It means only that deflationists reject the meta-level IBE deployed by realists (§5e)—such inferences must be rejected if truth is not an explanatory notion.

b. Pragmatist Truth Surrogates

Pragmatists question metaphysical realism (SR3): it presupposes a relation between our representations (to which we have access) and a mind-independent world (to which we lack access), and there cannot be such a relation, because mind-independent objects are in principle beyond our cognitive reach. Thus SR3 (and correspondence truth) is either vacuous or unintelligible. For them, word-world relations are between words and objects-as-conceived by us. If we cannot reach out to mind-independent objects, we must bring them into our linguistic and conceptual range.

Pragmatists also tend to supplement Tarski’s understanding of truth, like philosophers in a broadly idealist tradition (including Hume, Kant, the positivists, and Kuhn) who employ truth-surrogates that structure the “world” side of the correspondence relation in some way (impressions, sense data, phenomena, a structured given) that would render the correspondence intelligible. Depending on the kind of idealism adopted, “p is true” might be rendered “p is warrantedly assertible”, “p is derivable from theory Θ”, or “p is accepted in paradigm P”, all of the form “p is E” where E is some epistemic surrogate for “true”. We have already seen (§5d) how realists object to this move: it assigns to the concepts truth and reference the wrong properties (it makes them intra-theoretic rather than trans-theoretic) and thus cannot properly capture key features of practice. More generally, Putnam argues, truth cannot be identified with any epistemic notion E: take any revisable proposition p that satisfies E; we already know that p might not be true; so being E does not amount to being true. For example, that Venus has CO2 in its atmosphere is currently warrantedly assertible, but future investigation could lead us to discover that it is not true. Thus, Putnam thinks, truth is epistemically transcendent: it cannot be captured by any epistemic surrogate (Putnam 1978).

c. Putnam’s Internal Realism

In his SR period, Putnam held that only real word-world correspondences could capture the epistemic transcendence and causal explanatory features of truth. In the late 1970s Putnam came to doubt SR3, reversed his position, and proposed a new program, internal realism (Putnam 1981). IR has negative and positive components.

The main negative component rejects metaphysical realism (SR3) and the associated thesis that truth and reference are word-world correspondences (SR4). The primary argument for this rejection is Putnam’s model-theoretic argument (Merrill 1980; Putnam 1978, 1981). Take our language and total theory of the world. Suppose the intended reference scheme (which correlates our word uses with objects in the world) is that which satisfies all the constraints our best theory imposes. This supposition is problematic because those constraints would fix at best the truth conditions of every sentence of our language; they would not determine a unique assignment of referents for our terms. Proof: Assume there are n individuals in the world W, and our theory T is consistent. Model theory tells us that since T is consistent it has a model M of cardinality n; that is, all the sentences of T will be true-in-M. Now define a 1-1 mapping f from the domain of M, D(M), to the domain of W, D(W), and use f to define a reference relation R* between L(T) (the language of our theory) and objects in D(W) as follows: if x is an object in D(W) and P is a predicate of L(T), then P refers* to x if and only if P refers-in-M to f⁻¹(x). Then any sentence S will be true* (of W) if and only if S is true-in-M. Intuitively, truth* and reference* are not truth and reference but gerrymandered relations that mimic truth-in-M and refers-in-M, where M can be entirely arbitrary, provided it has enough objects in its domain. Unfortunately, anything we do to specify the correct reference scheme for our language and incorporate it into our total theory is subject to this permutation argument. One might object, for example, that a necessary condition for (real) reference is that P refer to x only if x causes P, and that P is not causally related to the objects it refers* to (Lewis 1984).
But if we add this condition to our theory, then we can redeploy a permutation whereby “x causes* P (in W)” will mimic “f⁻¹(x) causes P (in M)”; and instead of failing to fix the real reference relation we will be failing to fix the real causal relations. This formal result is the basis of Putnam’s argument that even our best theory must fail to single out its intended model (reference scheme). The permutation move is so global that no matter what trick X one uses to distinguish reference from reference*, the argument will be redeployed so that if X relates to cats in a way that it does not to cats*, then X* (a permutation of X) will relate to cats* in the same sort of way, and there will be no way of singling out whether we are referring to X or X*.
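The permutation at the heart of the argument is easy to exhibit in miniature. The following Python sketch is a deliberate simplification (a three-object domain serving as both model and “world”, monadic predicates only, toy extensions invented for the example): pushing every extension through an arbitrary bijection yields a gerrymandered reference* scheme under which every sentence keeps its truth value.

```python
from itertools import permutations

# Toy version of the permutation argument: an interpretation over a
# three-object domain, and its image under an arbitrary bijection f.
domain = {0, 1, 2}
interp = {"Cat": {0, 1}, "Black": {1, 2}}   # illustrative extensions

def permuted(interp, f):
    """The gerrymandered reference* scheme: push each extension through f."""
    return {pred: {f[x] for x in ext} for pred, ext in interp.items()}

# Sentences of a tiny monadic language, each evaluated against an interpretation I.
def exists(pred, conj=None):
    def truth(I):
        return any(x in I[pred] and (conj is None or x in I[conj]) for x in domain)
    return truth

def forall_implies(p, q):
    def truth(I):
        return all(x not in I[p] or x in I[q] for x in domain)
    return truth

sentences = [exists("Cat"), exists("Cat", "Black"), forall_implies("Black", "Cat")]

# Every sentence keeps its truth value under every permutation of the domain.
for perm in permutations(sorted(domain)):
    f = dict(zip(sorted(domain), perm))
    assert all(s(interp) == s(permuted(interp, f)) for s in sentences)
print("truth values preserved under all 6 permutations")
```

The theory's constraints (the truth values of its sentences) are satisfied equally well by the intended extensions and by all their permuted rivals, which is the toy analog of Putnam's claim that theoretical constraints underdetermine reference.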

The positive component of internal realism replaces SR3 and SR4 with IR3 and IR4:

IR3 We can understand a determinate world only as a world containing objects and properties as it would be described in the ideal theory we would reach at the limit of human inquiry;

IR4 Theories are true, when they are, partly because their concepts correspond to objects and properties that the ideal theory carves out.

Internal realism then reinterprets references to truth in SR1, SR2, and SR5 in terms of IR3 and IR4.

IR3 replaces allegedly problematic, inaccessible mind-independent objects with unproblematic, accessible objects that would be produced by the conceptual scheme we would reach in the ideal theory, and IR4 relates our words to the world as it would be carved up according to the ideal theory. When truth, reference, objects, and properties are thus relativized to the ideal theory, then IR1, IR2, and IR5 are just IR counterparts of their SR analogs: we aim to give accounts that would be endorsed in the ideal theory; to accept a theory is to believe it approximates the ideal theory; science (trivially) progresses toward the ideal theory. Putnam believes he can avoid unintelligible correspondences to an inaccessible, God-eye view of the world yet still have a concept of truth that is explanatory and epistemically transcendent. While truth-in-the-ideal-limit is an epistemic concept—it is relativized to what humans can know—it transcends any particular epistemic context; so we can have the best reasons to believe that Venus has CO2 in its atmosphere though it may be false (for it may turn out not to be assertible in the ideal theory).

Objects and properties, according to IR3, are as much made as discovered. To many realists, this seems to be an extravagant solution to a non-problem (Field 1982): extravagant to claim we have a hand in making stars or dinosaurs; a non-problem, because many realists think the content of metaphysical realism (SR3) is just that there is a mind-independent world in the sense that stars and dinosaurs exist independently of what humans say, do, or think. The problem is not how to extend our epistemic and semantic grasp to objects separated from us by a metaphysical chasm; it is the more ordinary, scientific problem of how to extend our grasp from nearby middle-sized objects with moderate energies to objects that are very large, very small, very distant from us spatiotemporally, and so forth. (Kitcher 2001; Liston 1985). Moreover, realists point out, true-in-the-ideal-theory falls short of true. We know that either string theory is true and the material universe is composed of tiny strings or this is not the case. But it is conceivable that no amount of human inquiry, even taken to the ideal limit, will decide which; so though one disjunct is true, neither may be assertible in the ideal limit. Consequently, internalist truth lacks the properties of truth. (It is noteworthy that Putnam recanted internalist truth in his last writing on these matters (Putnam 2015)).

Rorty is another pragmatist who rejects, in a far more radical manner than Putnam, the fundamental presuppositions of the realist-antirealist debate (Rorty 1980).

9. Law-Antirealism and Entity-Realism

Cartwright (1983) and Hacking (1983) represent this mix of theoretical law antirealism and theoretical entity realism. The kind of account that Cartwright rejects has three main components. First is the facticity view of fundamental physical laws: adequate fundamental laws must be (approximately) true. The basic equations of Newton, Maxwell, Einstein (STR/GTR), quantum mechanics, relativistic quantum mechanics, and so forth, are typical examples of such laws. Second is the covering law (or DN) model of explanation (Hempel 1965, §3c): a correct explanation of a phenomenon or phenomenological law is a sound deduction of the explanandum from fundamental laws together with statements describing, for example, compositional details of the system, boundary and initial conditions, and so forth. The deduction renders the explanandum intelligible by showing it to be a special case of the general laws. Thus, for example, Galileo’s law of free fall is explained as a special case of Newtonian fundamental laws by its derivation from Newton’s gravitational theory plus background conditions close to the earth’s surface. Third is IBE: the success of DN-explanations in rendering large classes of phenomena intelligible can justify our inferring the truth of the covering laws. The fact that Galileo’s law, Kepler’s laws, the ideal gas laws, tidal phenomena, the behavior of macroscopic solids, liquids, and gases all find a deductive home under Newton’s laws provides warrant for belief in the facticity of Newton’s laws.

Cartwright rejects all three components. She begins by challenging the first two components: there is a trade-off between facticity and explanatory power. Newton’s law of gravitation, FG = Gm1m2/r12², tells us what the gravitational force between two massive bodies is. Coulomb’s law, FC = kq1q2/r12², tells us what the electrostatic force between two charged bodies is. Each law gives the total force only for bodies where no other forces are acting. But most actual bodies are charged and massive and have other forces acting on them; thus the laws either are not factive (if read literally) or do not cover (if read as subject to the ceteris paribus modifier “provided no other forces are acting”). In physics, we explain by combining the forces: the actual force acting on a charged massive body is FA = FG + FC, the vector-sum of the Newton and Coulomb forces, which determines the actual acceleration and path. Cartwright objects that (a) we lack general laws of interaction allowing us to add causal influences in this way, (b) there is no reason to think that we can get super-laws that will be true and cover, (c) in nature there is only the actual cause and resultant trajectory. But if the facticity and explanatory components clash in this way, the third component is in trouble also. Realists cannot appeal to IBE to justify belief in factive fundamental covering laws because good explanations that cover a host of phenomena rarely proceed from true (factive) laws. Consequently, the explanatory success of fundamental laws cannot be cited as evidence for their truth.
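The force-composition at issue can be made concrete. The following Python sketch uses illustrative values; the one-dimensional sign convention (negative for attraction, positive for repulsion) is an assumption of the sketch, not anything in Cartwright. Neither law alone describes a body that is both charged and massive; the actual force is their sum.

```python
# Combined force on two bodies that are both massive and charged (SI units).
G = 6.674e-11   # gravitational constant, N m^2 / kg^2
k = 8.988e9     # Coulomb constant,      N m^2 / C^2

def gravity(m1, m2, r):
    """FG = G m1 m2 / r^2; always attractive (negative by our 1-D convention)."""
    return -G * m1 * m2 / r ** 2

def coulomb(q1, q2, r):
    """FC = k q1 q2 / r^2; like charges repel (positive), unlike attract."""
    return k * q1 * q2 / r ** 2

def total_force(m1, m2, q1, q2, r):
    """Neither component law alone covers the mixed case; the actual force
    is the (here one-dimensional) vector sum FA = FG + FC."""
    return gravity(m1, m2, r) + coulomb(q1, q2, r)

# Two 1 kg bodies carrying +1 microcoulomb each, 1 m apart:
F = total_force(1.0, 1.0, 1e-6, 1e-6, 1.0)
print(F)   # electrostatic repulsion dwarfs gravitational attraction here
```

Cartwright's point is precisely that this summation is a piece of physics practice for which, she argues, we have no general factive covering law.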

Cartwright’s own account has three corresponding components. First, fundamental laws are non-factive: they describe idealized objects in abstract mathematical models, not natural systems. In nature there are no purely Newtonian gravitational systems or purely electromagnetic systems. These are mathematical idealizations. Only messy phenomenological laws (describing empirical regularities and fairly directly supported by experiment) truly describe natural systems. Second, we should replace the DN model of explanation with a simulacrum account: explanations confer intelligibility by fitting staged mathematical descriptions of the phenomena to an idealized mathematical model provided by the theory by means of modeling techniques that are generally “rigged” and typically ignore (as negligible) disturbing forces or mathematically incorporate them (often inconsistently). To explain a phenomenon is to fit it in a theory so that we can derive fairly simple analogs of the messy phenomenological laws that are true of it. Intelligibility, not truth, is the goal of theoretical explanation. Third, although we should reject IBE, we should embrace inference to the most likely cause (ILC). Whereas theoretical explanations allow acceptable alternatives and need not be true, causal explanations prohibit acceptable alternatives and require the cause’s existence. ILC, on Cartwright’s view, can justify belief in unobservables that are experimentally detectable as the causes of phenomena. Thus, for example, Perrin’s experiments showed that the most likely cause of Brownian motion was molecular collisions with the Brownian particles; Rutherford’s experiments showed that the most likely cause of the backward scattering of α-particles fired at gold foil was collisions with the nuclei of the gold atoms.

The laws of physics lie, Cartwright claims, and the hope of a true, unified, explanatory theory of physics is either based on a misunderstanding of physics practice or a vestige of 17th century metaphysical hankering for a neatly designed mechanical universe. The practice of physicists, she argues, indicates that we ought to be antirealists about fundamental laws and points instead to a messy, untidy universe that physicists cope with by constructing unified abstract stories (Cartwright 1999). Thus Cartwright is anti-realist about fundamental laws: contrary to realists, they are not (even approximately) true; contrary to van Fraassen, she is not recommending agnosticism—we now know they are non-factive. On the other hand, also contrary to van Fraassen, scientific practice indicates that we should be realists about “unobservable” entities that are the most likely causes of the phenomena we investigate.

Critics complain that Cartwright confuses metaphysics and epistemology: even if we lack general laws of interaction, it does not follow that there are none. Cartwright replies that the unifying ideal of such super-laws is merely a dogma. However, the historical record cuts both ways here: the history of modern physics is one of disunity leading to unity leading to disunity, and so forth. Each time distinct fundamental laws resist combination, a new unifying theory emerges that combines them: electrodynamics and eventually Einstein’s theories succeeded in combining Newtonian and Coulomb forces. The quest for unity is a powerful force guiding progress in physics, and, while the ideal of a unified “theory of everything” continues to elude us, Cartwright’s examples hardly show that it is a vain quest. Moreover, Cartwright arguably conflates different kinds of laws: in classical settings, the fundamental laws are Newton’s laws of motion, and his F = ma is the super-law that combines Newton’s gravitational and Coulomb’s electrostatic laws (Wilson 1998).

Cartwright’s distinction between “theoretical” and “causal” explanations has also been criticized. Nothing about successful theoretical explanations, she claims, requires their truth, whereas successful causal explanations require the existence of the cause. To many this move seems fallacious—if “successful” means correct, then the truth of the former follows as much as the existence of the latter; if “successful” does not mean correct, then neither follows. Presumably, in the IBE context, “successful” does not entail truth, but similarly in the ILC context, “successful” does not entail existence: the most likely cause could turn out not to exist (for example, caloric flow or phlogiston escape) just as the best explanation could turn out to be false (caloric or phlogiston theory).

10. NOA: The Natural Ontological Attitude

Fine (1986, 1996) presented NOA, an influential response to the debates amounting to a complete rejection of their presuppositions. We generally trust what our senses tell us and take our everyday beliefs as true. We should similarly trust what scientists tell us: they can check what is going on behind the appearances using instruments that extend our senses and methods that extend ordinary methods. This is NOA: we should accept the certified results of science on a par with homely truths. Both realists and antirealists accept this core position, but each adds an unnecessary and flawed philosophical interpretation to it.

Realists add to the core position the redundant word “REALLY”: “electrons REALLY exist”. SR realists add substantive word-world correspondences, a policy that serves no useful purpose. The only correct notion of correspondence is the disquotational one: “P” refers to (or is true of) x if and only if x is P. Realist appeals to IBE are problematic for two reasons. First, they beg the question against antirealists, who ab initio question any connection between explanatory success and approximate truth. Moreover, there is no inferential principle that realists could employ and antirealists would accept. Straight induction will not work: we can induce from the observed to the unobserved, because the unobserved can be later observed to check the induction; but we cannot induce to unobservables, because there can be no such independent check (according to the antirealist). Second, IBE does not work without some logical connection between success and (approximate) truth. But the inference from success to (approximate) truth is either invalid if read as a deductive move (because many successful theories turned out to be false (§7b)), weak if read as an inductive move (because nearly all successful past theories turned out to be false), or circular if read as a primitive IBE move. The antirealist, by contrast, has a ready answer: if a scientific theory or method worked well in the past, tinker with it, and try it again. Finally, Fine argues, contrary to what realists often claim, realism blocks rather than promotes scientific progress. In the Einstein-Bohr methodological debates about the completeness of quantum mechanics, the realist Einstein saw QM as a degenerate theory, while the instrumentalist Bohr saw QM as a progressive theory. Subsequent history favored Bohr over Einstein.

However, antirealism is no better off. Empiricists attempt to set limits: we should believe only what science tells us about observables. Fine criticizes these limits for reasons given in §5a and §6b—the observable-unobservable distinction cannot be drawn in a manner that would motivate skepticism or agnosticism about unobservables but not about observables. We have standard ways of cross-checking to ensure that what we are “seeing” with an instrument or calculating with a theory is reliable even if not “directly” observable. Fine concludes that the checks that science itself uses should be the ones we appeal to when in doubt. Pragmatists and constructivists react to the inaccessible, unintelligible word-world correspondences posited by realists by pulling back and trying to reformulate the correspondences in terms of some accessible surrogate for truth and reference (§8). Fine reiterates the criticisms of §5d and §8: truth has properties that any epistemic truth-surrogate lacks.

Both realists and antirealists view science as a practice in need of a philosophical interpretation. In fact, science is a self-interpreting practice that needs no philosophical interpretation. It has local aims and goals, which are reconfigured as science progresses. Asking about the (global) aim of science is like asking about the meaning of life: it has no answer and needs none. NOA takes science on its own terms, as a practice whose history and methods are rooted in, and extend from, everyday thinking (Miller 1987). NOA accepts ordinary scientific practices but rejects apriorist philosophical ideas like the realist’s God’s-Eye view and the antirealist’s truth-surrogates.

Critics see NOA as a flight from, rather than a response to, the scientific realism question (Musgrave 1989). The core position, they argue, is difficult to characterize in a philosophically neutral manner that does not invite a natural line of philosophical questioning. Once one accepts that science delivers truths and explanations, it is natural to ask what that means, and realist and antirealist replies will naturally ensue—as they always have, since these interpretations are as old as philosophy itself. Moreover, it may be difficult to characterize NOA non-tendentiously: ground-level IBE and correspondence truth, for example, are arguably rooted in common sense and ought to be included in NOA; but then any antirealism that rejects them is incompatible with NOA.

11. The 21st Century Debates

Between 1990 and 2016, new versions of the debates emerged, many of them focusing on Laudan’s PI (§7b).

a. Structuralism

Structural Realism claims that: science aims to provide a literally true account only of the structure of the world (StR1); to accept a theory is to believe it approximates such an account (StR2); the world has a determinate and mind-independent structure (StR3); theories are literally true only if they correctly represent that structure (StR4); and the progress of science asymptotically approaches a correct representation of the world’s structure (StR5). (Here we replace each SR thesis in §5 with an analogous StR thesis.)

Structuralism comes from philosophy of mathematics. Consider the abstract structure <ω, o, ξ>, where ω is an infinite sequence of objects, o an initial object, and ξ a relation that well-orders the sequence. This structure is distinct from its many exemplifications: for example, the natural numbers ordered under successor, <0, 1, 2, 3, …>; the even natural numbers in their natural order, <0, 2, 4, 6, …>; and so forth. We can similarly consider the offices of the U.S. President, Vice-President, Speaker of the House, and so forth, as a collection of objects defined by the structure of relations given in the U.S. Constitution, distinct from its particular exemplars at a given time: Bush, Cheney, Pelosi (January 2007), and Obama, Biden, Boehner (January 2011). Similarly, structuralists suggest, the structure of relations that obtain between scientific objects is distinct from the nature of those objects themselves. The structure of relations is typically expressed (at least in physics) by the mathematical equations of the theory (Frigg and Votsis 2011). For example, Hooke’s law, F = -ks, describes a structure, the set of all pairs of reals <x, y> such that y = -kx in R², which is distinct from any of its concrete exemplifications, like the direct proportionality between the restoring force F for a stretched spring and its elongation s. If the world is a structured collection of objects (StR3), then StR1 says that science aims to describe only the structure of the objects but not their intrinsic natures.
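The structure-exemplification distinction at work here can be put in executable form. The following Python sketch (illustrative only; the function name and data sets are invented for this example) checks whether different concrete collections of pairs exemplify the one abstract structure {<x, y> : y = -kx} picked out by Hooke’s law:

```python
# Illustrative sketch: one abstract structure, many exemplifications.
# The abstract structure is the relation {(x, y) : y = -k*x}; the data
# sets below are two distinct concrete exemplifications of it.

def exemplifies_hooke_structure(pairs, k, tol=1e-9):
    """True if every (x, y) pair satisfies y = -k*x (within tolerance)."""
    return all(abs(y + k * x) <= tol for x, y in pairs)

k = 2.0
# Exemplification 1: spring elongations paired with restoring forces.
spring_data = [(0.0, 0.0), (0.1, -0.2), (0.25, -0.5)]
# Exemplification 2: the same abstract relation realized by entirely
# different magnitudes in some other system.
other_data = [(1.0, -2.0), (3.0, -6.0), (-0.5, 1.0)]

print(exemplifies_hooke_structure(spring_data, k))  # True
print(exemplifies_hooke_structure(other_data, k))   # True
```

Nothing in the check depends on what the paired quantities are, which is the structuralist’s point: the equation captures the relation, not the nature of its relata.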

Structuralism is not new: precursors include Poincaré and Duhem in the 19th century (§2c), Russell (1927), Ramseyfied-theory versions of logical positivism (§3b), Quine (§4), and Maxwell (1970). Russell claimed that we can directly know (by acquaintance) only our percepts, but we can indirectly know (by structural description) the mind-independent objects that give rise to them. This approach presupposes a problematic distinction between acquaintance and description and a problematic isomorphism between the percept and causal-entity structures. Worse, it runs afoul of a devastating critique by the mathematician M.H.A. Newman (1928), closely related to Putnam’s model-theoretic argument (§8c), and never satisfactorily answered by Russell. Newman argues that a fixed structure of percepts can be mapped 1-1 onto a host of different causal-entity structures provided there are enough objects in the latter; thus the structural knowledge that science allegedly delivers is trivial—it merely amounts to a claim that the world has a certain cardinality, the size of the percept-structure. (The Ramseyfied-theory approach encounters similar problems (Psillos 2001).)
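Newman’s cardinality point can be made concrete with a toy sketch (illustrative only; the names and the miniature relation are invented for this example): any binary relation on a set of percepts transfers, via any bijection, onto any equinumerous domain, yielding an isomorphic copy there.

```python
# Newman's point in miniature: a relation on n "percepts" can be carried
# over to ANY other n-element domain by a bijection, so the bare claim
# that the world carries a structure isomorphic to the percept-structure
# constrains only the world's cardinality.
from itertools import permutations

percepts = ("p1", "p2", "p3")
percept_structure = {("p1", "p2"), ("p2", "p3")}  # a toy relation on percepts

world = ("a", "b", "c")  # any equinumerous collection of worldly objects

def transfer(relation, mapping):
    """Image of a binary relation under a percept -> world mapping."""
    return {(mapping[x], mapping[y]) for x, y in relation}

# Every bijection from percepts to world yields an isomorphic copy of
# the percept-structure on the worldly domain.
copies = [transfer(percept_structure, dict(zip(percepts, perm)))
          for perm in permutations(world)]
print(all(len(c) == len(percept_structure) for c in copies))  # True
```

Since such a copy exists on every domain of the right size, the claim that the world exemplifies the percept-structure says no more than that the world has at least that many objects.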

Contemporary proponents, beginning with Worrall (1989), hold that structuralism steers a middle path between standard versions of scientific realism and antirealism. StR, they argue, provides the best of both worlds by acknowledging and reconciling the pull of both pessimistic and optimistic inductions on the history of science. Pessimistic inductions (PI) argue against SR (§7b): the ontology of our current best theories (quarks, for example) will likely be discarded just like that of past best theories (for example, ether). Optimistic inductions (like the NMA) argue for SR (§5d): because past successful theories must have been approximately true, current more successful theories must be closer to the truth. Structuralists respond that, though ontologies come and go, our grip on the underlying structure of the world steadily improves. Underlying ontology need not be (and is not) preserved in theory change, but the mathematical structure is both preserved and improved upon: Fresnel’s correct claims about the structure of light (as a wave phenomenon) were retained in later theories, while his incorrect claims about the nature of light (as a mechanical vibration in a mechanical medium, the ether) were later discarded. Structuralists can also resist the argument from empirically equivalent theories (§6c)—to the extent that the theories are structurally equivalent they would capture the same structural facts, which is all a theory needs to capture—and do so without embracing a particular realist ontology occupying the nodes of the structure.

But can the needed distinction between structure and nature be drawn and can structures be rendered intelligible without the ontology that gives them flesh (Psillos 1995, 1999, 2001)? Two possible StR answers are suggested.

First, there is epistemological structural realism (EStR), endorsed by Poincaré, Worrall, and logical positivists in the Ramseyfied-theory tradition: electrons are objects as Obama is an object, but, unlike Obama, science can never discover anything about electrons’ natures other than their structural relations. For EStR to be a realist position, it will not suffice to say: we can know only observable objects (like Obama) and their (observable) structural relations; we must be agnostic about unobservable objects and their relations. This is merely a CE version of structuralism, as van Fraassen points out (2006, 2008), and inherits many problems of CE (§6). To be a realist position, EStR has to presuppose that, in addition to the structure of the phenomena whose objects are knowable, there is a mind-independent, knowable “underlying” structure, whose objects are unknowable. But now one must distinguish Obama from electrons so that Obama’s nature is knowable but electrons’ natures are not; the problematic observable-unobservable distinction (§§5a, 6b) has returned.

Critics argue that there is no sharp, epistemologically significant distinction between form (structure) and content (nature) of the kind needed for EStR. First, our knowledge of the nature of electrons is bound up with our knowledge of their structural relations so that we come to know them together: saying what an electron is includes saying how it is structured; our knowledge of its nature forms a continuum with our knowledge of its structure. Second, EStR requires a variant of the NMA (restricted to retention of structure) to uphold StR5. But this requires that, in progressive theory-change, structure (retained and improved) is what explains increased empirical success. But structure alone (without auxiliary hypotheses describing non-structural features of the world) never suffices to derive new empirical content. Finally, critics object to structuralists’ interpretations of the history. Worrall, for example, argues that Fresnel’s structural claims about light (the mathematics) were retained, but not his commitments to a mechanical ether; his critics question whether Fresnel could have been “just” right about the structure of light-propagation and completely wrong about the nature of light.

Second, there is ontological structural realism (OStR), advocated by Ladyman and others (Ladyman and Ross 2007) and similar to Quine’s realism (§4). OStR bites the bullet: we can know only structure because only structure exists. Obama is no more an object than electrons are; each is itself a structure; more strongly, everything is structure. Some of the attraction of this strange metaphysical position comes from its promise to handle problems in quantum mechanics that are orthogonal to our debates. Its proponents argue that it can account, for example, for apparently indistinguishable particles in entangled quantum states. In the context of our debates, OStR is supposed to avoid the epistemological problems of EStR: qua objects understood as structural nodes, electrons are in principle no more unknowable (or knowable) than Obama or ordinary physical objects. However, it runs into its own metaphysical problems, since it threatens to lose touch with concrete reality altogether. Even if God created nothing concrete, it would still be a structural (mathematical) fact that neutrons and protons, if they exist, form an isospin doublet related by SU(2) symmetry. For this to be a concrete (physical) fact, God would have had to create some objects—nucleons with symmetrically related isospin states or some more fundamental objects that compose nucleons—to occupy the neutron- and proton-nodes of the SU(2) group-structure. Even if those objects had only structural properties, they would have to have one non-structural property—existence (van Fraassen 2006, 2008). So, not everything is structure; there is a distinction between empty mathematical structures and realized physical structures; OStR cannot capture that distinction.

b. Stanford’s New Induction

Kyle Stanford’s new induction provides the latest historical challenge to SR (Stanford 2001, 2006, 2016). Following Duhem (1991), Stanford poses what he calls the Problem of Unconceived Alternatives (PUA): for any fundamental domain of inquiry at any given time t, there are alternative scientific hypotheses not entertained at t but which are consistent with (and even equally confirmed by) all the actual evidence available at t. PUA, were it true, would seem to create a serious underdetermination problem for SR: we opt for our current best confirmed theory, but there is a distinct alternative that is equally supported by all the evidence we possess, but which we currently lack the imagination to think of. (Two things about PUA are worth noting. First, it concerns the actual evidence we have at a time; it is not that the theory and the alternatives are underdetermined by all possible evidence; the underdetermination may be transient; future evidence may show that the theory we have selected is not correct. Second, the unconceived alternative hypotheses are ordinary scientific hypotheses, not recherché philosophical hypotheses involving brains-in-vats, and so forth.)

Stanford argues that PUA is our general predicament. His New Induction on the history of science, he argues, shows that our epistemic situation is one of recurrent, transient underdetermination. Virtually all T-T* transitions in the past were affected by PUA: the earlier T-theorists selected T as the best supported theory of the available alternatives; they did not conceive of T* as an alternative; T* was conceived only later yet T* is typically better supported than T. At any given time, we could only conceive a limited set of hypotheses that were confirmed by all the evidence then available, yet subsequent inquiry revealed distinct alternatives that turned out to be equally or better confirmed by that evidence. We thus have good inductive reasons to believe we are now in the same predicament—our current best theories will be replaced by incompatible and currently unconceived successors that account for all the currently available evidence.

Stanford proposes a new instrumentalism. Like van Fraassen’s (§6), his instrumentalism is epistemic: it distinguishes claims we ought literally to believe from claims we ought only to accept as instrumentally reliable and argues that instrumental acceptance suffices to account for scientific practice. Unlike van Fraassen, Stanford bases his distinction, not on an observable-unobservable dichotomy, but on whether our access to a domain is based primarily on eliminative inference subject to PUA challenges: if it is, then we should adopt an instrumentalist stance; if it is not (as, for example, our access to the common sense world is not), then we may literally believe.

c. Selective Realism

Many debates in the early 21st century focus on historical inductions, especially on what representative basis would warrant an inductive extrapolation. Putnam and Boyd were aware that care was needed with the NMA and sometimes restricted their claims to mature theories so that we discount ab initio some theories on Laudan’s troublesome list—like the theory of crystalline spheres or of humoral medicine. Mature theories (with the credentials to warrant optimistic induction) must have passed a “take-off” point: there must be background beliefs that indicate their application boundaries and guide their theoretical development; their successes must be supported by converging but independent lines of inquiry; and so forth. Moreover, many realists argue, a theory is suitable for optimistic induction only if it has yielded novel predictions; otherwise it could just have been rigged to fit the phenomena. Roughly, a prediction P (whether previously known or not) is novel with respect to a theory T if no P-information is needed for the construction of T and no other available theory predicts P. Thus, for example, Newton’s prediction of tidal phenomena was novel because those phenomena were not used in (and not needed for) Newton’s construction of his theory and no other theory predicted the tides (Leplin 1997; Psillos 1999). Nevertheless, even thus restricted, the induction will not meet Laudan’s challenge, for that challenge includes an undermining argument (Stanford 2003a): many discarded yet empirically successful theories were mature and yielded novel predictions—for example, Newton’s theory, caloric theory, and Fresnel’s theory of light—so, if our current theories are correct, these theories were false.

More recent responses to these counterexamples attempt to steer a middle course between optimistic inductions like Putnam’s NMA (§5d) and pessimistic inductions like Laudan’s and Stanford’s (§§7b, 11b). These responses typically have a two-part normal form: (1) they concede to the pessimists that some parts of past empirically successful theories are discarded, yet (2) they argue with the optimists that some parts of past successful theories are retained, improved upon, and explain the successes of the old theories. Advocates of this “divide and conquer” strategy (Psillos 1999) try to have their cake and eat it too.

Variants of the strategy depend on how one separates those “good” features of past theories that are preserved, that explain empirical success, and that warrant optimistic induction from those “bad” features that are discarded. Structuralists, we saw, argue that structure (form), but not nature (content), is what is both preserved and responsible for success. Kitcher (1993) distinguishes a theory’s working and presuppositional posits. The term “light-wave” in Fresnel’s usage referred to light, no matter what its constitution is, in some contexts and to what satisfies the description “the oscillations of ethereal molecules” in other contexts. In the former contexts, “light-wave” referred to high frequency electromagnetic waves, a mode of reference that was doing explanatory and inferential work and was retained in later theories. In the latter contexts, “light-wave” referred to the ether (that is, nothing), a mode of reference that was presupposed yet empty, idle, and not retained in later theories.

Other variants rely on the causal theory of reference. Hardin and Rosenberg (1982) exploit the idea that one can successfully refer to X (by being suitably causally linked to X) while having (largely) false beliefs about X. Thus, Fresnel and Maxwell were referring to the electromagnetic field when they used the term “ether”, and, though they had many false beliefs about it (that it was a mechanical medium, for example), the electromagnetic field was causally responsible for their theories’ success and was retained in later theories. A big problem with this response is that referential continuity does not suffice for partial or approximate truth (Laudan 1984; Psillos 1999). Psillos (1999) employs causal descriptivism to deal with this problem: “ether” in 19th century theories refers to the electromagnetic field, since that (and only that) object has the properties (medium of light-propagation that is the repository of energy and transmits it locally) that are causally responsible for the relations between measurements we get when we perform optical experiments. By contrast, “phlogiston” does not refer since nothing has the properties that the phlogiston theorists mistakenly believed to be responsible for the body of information they had about oxidation of metals, and so forth. During theory change, the causal-theoretical descriptions of some terms are retained and thereby their references also; these are the essential parts of the theory that contribute to its success; but this is consistent with less central parts being completely wrong.

The latest twist to these divide and conquer strategies is Chakravartty’s doctrine of semirealism (Chakravartty 1998, 2007). Taking his cue from Hacking-Cartwright (§9), Chakravartty distinguishes detection and auxiliary properties. The former are causal properties of objects (and the structure of real relations between them) that are well-confirmed by experimental manipulation because they underwrite the causal interactions we and our instruments exploit in experimental set-ups; the latter are merely theoretical and inferential aids. The former are retained in later theories; the latter are not. Past theories that were on the right track were so because they mathematically coded in systematic ways the detection properties (as opposed to the idle auxiliary properties).

Any of these strategies must meet two further challenges, emphasized in Stanford (2003a, 2003b). First, they must answer the undermining challenge (above) in a way that is not ad hoc, question-begging, or transparently Whiggish. Simply arguing (with Hardin and Rosenberg) for preservation of reference via preservation of causal role is too easy: do Aristotle’s natural place, Newton’s gravitational action, and Einstein’s space-time curvature all play the same causal role in explaining free-fall phenomena? And if we tighten the account by claiming that continuity requires retention of core causal descriptions (Psillos) or detection property clusters (Chakravartty), are we engaged in a self-serving enterprise? Are we using our own best theories to determine the core causal properties/descriptions and then “reading” those back into the past discarded theories?

Second, they must respond to the trust argument. Divide and conquer strategies argue that successful past theories were right about some things but wrong about others. But then we should expect our own theories to be right about some things and wrong about others. Though perhaps an advance, this does not provide us with a good reason to trust any particular part of our own theories, especially any particular assessment we make (from our vantage point) of the features of a past discarded theory that were responsible for its empirical success. We judge that X-s in a past theory were working posits (Kitcher), essentially contributing causes of success (Psillos), or detection properties (Chakravartty), while Y-s in that theory were merely presuppositional posits, idle, or auxiliary properties. But the past theorists were generally unable to make these discriminations, so why should we think we can now make them in a reliable manner? Stanford argues that realists can avoid this problem only if they can provide prospectively applicable criteria of selective confirmation—criteria that past theorists could have used to distinguish the good from the bad in advance of future developments and that we could now use—but they did not have such criteria, nor do we.

12. References and Further Reading

  • Boyd, R. (1973), “Realism, Underdetermination and the Causal Theory of Evidence”, Nous 7, 1-12.
  • Boyd, R. (1983), “On the Current Status of the Issue of Scientific Realism”, Erkenntnis, 19, 45–90.
  • Carnap, R. (1936), “Testability and Meaning”, Philosophy of Science 3, 419-471.
  • Carnap, R. (1937), “Testability and Meaning–Continued”, Philosophy of Science 4, 1-40.
  • Carnap, R. (1939), “Foundations of Logic and Mathematics”, International Encyclopedia of Unified Science 1(3), Chicago: The University of Chicago Press.
  • Carnap, R. (1950), “Empiricism, Semantics and Ontology”, Revue Internationale de Philosophie 4, 20-40.
  • Carnap, R. (1956), “The Methodological Character of Theoretical Concepts”, in H. Feigl and M. Scriven (eds), Minnesota Studies in the Philosophy of Science I, Minneapolis: University of Minnesota Press.
  • Cartwright, N. (1983), How the Laws of Physics Lie. Oxford: Clarendon Press.
  • Cartwright, N. (1999), The Dappled World, Cambridge: Cambridge University Press.
  • Chakravartty, A. (1998), “Semirealism”, Studies in the History and Philosophy of Science 29 (3), 391-408.
  • Chakravartty, A. (2007), A Metaphysics for Scientific Realism: Knowing the Unobservable. Cambridge: Cambridge University Press.
  • Churchland, P. (1985), “The Ontological Status of Observables: In Praise of the Superempirical Virtues”, in Churchland and Hooker 1985.
  • Churchland, P. and C. Hooker (eds) (1985), Images of Science: Essays on Realism and Empiricism, (with a reply from Bas van Fraassen). Chicago: University of Chicago Press.
  • Duhem, P. (1991/1954/1906), The Aim and Structure of Physical Theory, trans. P. Wiener, intro. Jules Vuillemin, Princeton: Princeton University Press.
  • Field, H. (1972), “Tarski’s Theory of Truth”, Journal of Philosophy 69 (13), 347-375.
  • Field, H. (1982), “Realism and Relativism”, Journal of Philosophy 79 (10), 553-567.
  • Fine, A. (1996/1986), The Shaky Game. Chicago: University of Chicago Press.
  • Friedman, M. (1982), “Review of The Scientific Image”, Journal of Philosophy 79 (5), 274-283.
  • Friedman, M. (1999), Reconsidering Logical Positivism. Cambridge: Cambridge University Press.
  • Frigg, R. and I. Votsis. (2011), “Everything You Always Wanted to Know about Structuralism but Were Afraid to Ask”, European Journal for the Philosophy of Science 1, 227-276.
  • Hacking, I. (1983), Representing and Intervening. Cambridge: Cambridge University Press.
  • Hardin, C. and A. Rosenberg. (1982), “In Defense of Convergent Realism”, Philosophy of Science 49, 604-615.
  • Harman, G. (1965), “The Inference to the Best Explanation”, The Philosophical Review 74, 88–95.
  • Hempel, C. G. (1965), Aspects of Scientific Explanation. New York: Free Press.
  • Hertz, H. (1956), The Principles of Mechanics. New York: Dover.
  • Horwich, P. (1990), Truth. Oxford: Blackwell.
  • Kitcher, P. (1993), The Advancement of Science. Oxford: Oxford University Press.
  • Kitcher, P. (2001), “Real Realism: The Galilean Strategy”, The Philosophical Review 110 (2), 151-197.
  • Kuhn, T.S. (1970/1962), The Structure of Scientific Revolutions. Chicago: University of Chicago Press.
  • Kuhn, T.S. (1977/1974), The Essential Tension. Chicago: University of Chicago Press.
  • Kukla, A. (1998), Studies in Scientific Realism. Oxford: Oxford University Press.
  • Ladyman, J. and D. Ross. (2007), Every Thing Must Go: Metaphysics Naturalized. Oxford: Oxford University Press.
  • Laudan, L. (1981), “A Confutation of Convergent Realism”, Philosophy of Science, 48, 19–48.
  • Laudan, L. (1984), “Realism without the Real”, Philosophy of Science, 51, 156-162.
  • Laudan, L. and J. Leplin. (1991), “Empirical Equivalence and Underdetermination”, Journal of Philosophy 88 (9), 449-472.
  • Leeds, S. (1995), “Truth, Correspondence, and Success”, Philosophical Studies 79 (1), 1-36.
  • Leeds, S. (2007), “Correspondence Truth and Scientific Realism”, Synthese 159, 1–21.
  • Leplin, J. (1997), A Novel Defence of Scientific Realism. Oxford: Oxford University Press.
  • Lewis, D. (1970), “How to Define Theoretical Terms”, Journal of Philosophy 67, 427-446.
  • Lewis, D. (1984), “Putnam’s Paradox”, Australasian Journal of Philosophy 62, 221-236.
  • Lipton, P. (2004/1991), Inference to the Best Explanation. London: Routledge.
  • Liston, M. (1985), “Is a God’s-Eye-View an Ideal Theory?”, Pacific Philosophical Quarterly 66 (3-4), 355-376.
  • Liston, M. (2005), “Does ‘Rabbit’ refer to Rabbits?”, European Journal of Analytic Philosophy 1, 39-56.
  • Mach, E. (1893), The Science of Mechanics, trans. T. J. McCormack, 6th edition, La Salle: Open Court.
  • Magnus, P.D. and C. Callender. (2004), “Realist Ennui and the Base Rate Fallacy”, Philosophy of Science 71, 320–338.
  • Maxwell, G. (1962), “On the Ontological Status of Theoretical Entities”, in H. Feigl and G. Maxwell (eds.), Minnesota Studies in the Philosophy of Science III, Minneapolis: University of Minnesota Press.
  • Maxwell, G. (1970), “Structural Realism and the Meaning of Theoretical Terms”, in S. Winoker and M. Radner (eds.), Minnesota Studies in the Philosophy of Science IV, Minneapolis: University of Minnesota Press.
  • McMullin, E. (1991), “Rationality and Theory Change in Science”, in P. Horwich (ed.), Thomas Kuhn and the Nature of Science, Cambridge: MIT Press.
  • Merrill, G. H. (1980), “The Model-Theoretic Argument Against Realism”, Philosophy of Science 47, 69-81.
  • Miller, D. (1974), “Popper's Qualitative Theory of Verisimilitude”, British Journal for the Philosophy of Science 25, 166–177.
  • Miller, R. (1987), Fact and Method. Princeton: Princeton University Press.
  • Musgrave, A. (1985), “Realism vs Constructive Empiricism”, in Churchland and Hooker 1985.
  • Musgrave, A. (1989), “NOA’s Ark – Fine for Realism”, Philosophical Quarterly 39, 383-398.
  • Newman, M. H. A. (1928), “Mr. Russell’s ‘Causal Theory of Perception’”, Mind 37, 137-148.
  • Niiniluoto, I. (1987), Truthlikeness. Dordrecht: Reidel.
  • Poincaré, H. (1913), The Foundations of Science. New York: The Science Press.
  • Psillos, S. (1995), “Is Structural Realism the Best of Both Worlds?”, Dialectica 49, 15-46.
  • Psillos, S. (1999), Scientific Realism: How Science Tracks Truth. London: Routledge.
  • Psillos, S. (2001), “Is Structural Realism Possible?”, Philosophy of Science 68, S13–S24.
  • Putnam, H. (1962), “What Theories Are Not”, in Putnam 1975c.
  • Putnam, H. (1975a), “Explanation and Reference”, in Putnam 1975d.
  • Putnam, H. (1975b), “The Meaning of ‘Meaning’”, in Putnam 1975d.
  • Putnam, H. (1975c), Philosophical Papers 1: Mathematics, Matter and Method. Cambridge: Cambridge University Press.
  • Putnam, H. (1975d), Philosophical Papers 2: Mind, Language and Reality. Cambridge: Cambridge University Press.
  • Putnam, H. (1978), Meaning and the Moral Sciences. London: Routledge.
  • Putnam, H. (1981), Reason, Truth and History. Cambridge: Cambridge University Press.
  • Putnam, H. (2015), “Naturalism, Realism, and Normativity”, Journal of the American Philosophical Association 1(2), 312-328.
  • Quine, W.V. (1955), “Posits and Reality”, in W. V. Quine, The Ways of Paradox and Other Essays. Cambridge: Harvard University Press (1976), 246-254.
  • Quine, W.V. (1969), “Epistemology Naturalized”, in W. V. Quine, Ontological Relativity and Other Essays. New York: Columbia University Press (1969): 69-90.
  • Rorty, R. (1980), Philosophy and the Mirror of Nature. Princeton: Princeton University Press.
  • Russell, B. (1927), The Analysis of Matter. London: Routledge and Kegan Paul.
  • Stanford, P.K. (2001), “Refusing the Devil's Bargain: What Kind of Underdetermination Should We Take Seriously?”, Philosophy of Science 68 (3), S1-S12.
  • Stanford, P.K. (2003a), “Pyrrhic Victories for Scientific Realism”, Journal of Philosophy 100 (11), 553-572.
  • Stanford, P.K. (2003b), “No Refuge for Realism: Selective Confirmation and the History of Science”, Philosophy of Science 70, 917-925.
  • Stanford, P.K. (2006), Exceeding our Grasp. Oxford: Oxford University Press.
  • van Fraassen, B. (1980), The Scientific Image. Oxford: Clarendon Press.
  • van Fraassen, B. (2006), “Structure: its Shadow and Substance”, British Journal for Philosophy of Science 57, 275-307.
  • van Fraassen, B. (2008), Scientific Representation. Oxford: Clarendon Press.
  • Wilson, M. (1982), “Predicate Meets Property”, Philosophical Review 91(4), 549-589.
  • Wilson, M. (1985), “What can Theory Tell us about Observation?”, in Churchland and Hooker 1985.
  • Wilson, M. (1998), “Mechanics, Classical”, in Edward Craig (ed.), The Routledge Encyclopedia of Philosophy Vol. 6, 251-259, London: Routledge.
  • Wilson, M. (2006), Wandering Significance. Oxford: Oxford University Press.
  • Worrall, J. (1989), “Structural Realism: The Best of Both Worlds?”, Dialectica 43, 99–124.


Author Information

Michael Liston
University of Wisconsin-Milwaukee
U.S.A.