Karl Popper: Philosophy of Science

Karl Popper (1902-1994) was one of the most influential philosophers of science of the 20th century. He made significant contributions to debates concerning general scientific methodology and theory choice, the demarcation of science from non-science, the nature of probability and quantum mechanics, and the methodology of the social sciences. His work is notable for its wide influence within the philosophy of science, within science itself, and in a broader social context.

Popper’s early work attempts to solve the problem of demarcation and offer a clear criterion that distinguishes scientific theories from metaphysical or mythological claims. Popper’s falsificationist methodology holds that scientific theories are characterized by entailing predictions that future observations might reveal to be false. When theories are falsified by such observations, scientists can respond by revising the theory, by rejecting the theory in favor of a rival, or by maintaining the theory as is and changing an auxiliary hypothesis. In any of these cases, however, the process must aim at the production of new, falsifiable predictions. Popper recognizes that scientists can and do hold onto theories in the face of failed predictions when there are no predictively superior rivals to turn to, but he holds that scientific practice is characterized by its continual effort to test theories against experience and to revise them in light of the outcomes of these tests. By contrast, theories that are permanently immunized from falsification by the introduction of untestable ad hoc hypotheses can no longer be classified as scientific. Among other things, Popper argues that his falsificationist proposal allows for a solution of the problem of induction, since inductive reasoning plays no role in his account of theory choice.

Along with his general proposals regarding falsification and scientific methodology, Popper is notable for his work on probability and quantum mechanics and on the methodology of the social sciences. Popper defends a propensity theory of probability, according to which probabilities are interpreted as objective, mind-independent properties of experimental setups. Popper then uses this theory to provide a realist interpretation of quantum mechanics, though its applicability goes beyond this specific case. With respect to the social sciences, Popper argued against the historicist attempt to formulate universal laws covering the whole of human history and instead argued in favor of methodological individualism and situational logic.

Table of Contents

  1. Background
  2. Falsification and the Criterion of Demarcation
    a. Popper on Physics and Psychoanalysis
    b. Auxiliary and Ad Hoc Hypotheses
    c. Basic Sentences and the Role of Convention
    d. Induction, Corroboration, and Verisimilitude
  3. Criticisms of Falsificationism
  4. Realism, Quantum Mechanics, and Probability
  5. Methodology in the Social Sciences
  6. Popper’s Legacy
  7. References and Further Reading
    a. Primary Sources
    b. Secondary Sources

1. Background

Popper began his academic studies at the University of Vienna in 1918, and he focused on both mathematics and theoretical physics. In 1928, he received a PhD in Philosophy. His dissertation, On the Problem of Method in the Psychology of Thinking, dealt primarily with the psychology of thought and discovery. Popper later reported that it was while writing this dissertation that he came to recognize “the priority of the study of logic over the study of subjective thought processes” (1976, p. 86), a theme that would be central to his more mature work in the philosophy of science.

In 1935, Popper published Logik der Forschung (The Logic of Research), his first major work in the philosophy of science. Popper later translated the book into English and published it under the title The Logic of Scientific Discovery (1959). In the book, Popper offered his first detailed account of scientific methodology and of the importance of falsification. Many of the arguments in this book, as well as throughout his early work, are directed against members of the so-called “Vienna Circle,” such as Moritz Schlick, Otto Neurath, Rudolf Carnap, Hans Reichenbach, Carl Hempel, and Herbert Feigl, among others. Popper shared these thinkers’ concern with general issues of scientific methodology, and he sympathized with their distrust of traditional philosophical methodology. His proposed solutions to the problems arising from these concerns, however, were significantly different from those favored by the Vienna Circle.

Popper stayed in Vienna until 1937, when he took a teaching position at Canterbury University College in Christchurch, New Zealand, and he stayed there throughout World War II. His major works on the philosophy of science from this period include the articles that would eventually make up The Poverty of Historicism (1957). In these articles, he offered a highly critical analysis of the methodology of the social sciences, in particular, of attempts by social scientists to formulate predictive, explanatory laws.

In 1946, Popper took a teaching position at the London School of Economics, where he stayed until he retired in 1969. While there, he continued to work on a variety of issues relating to the philosophy of science, including quantum mechanics, entropy, evolution, and the realism vs. anti-realism debate, along with the issues already mentioned. His major works from this period include “The Propensity Interpretation of Probability” (1959) and Conjectures and Refutations (1963). He continued to publish until shortly before his death in 1994. In The Philosophy of Karl Popper (1974), Popper offers responses to many of his most important critics and provides clarifications of his mature views. His intellectual autobiography Unended Quest (1976) gives a detailed account of Popper’s evolving views, especially as they relate to the philosophy of science.

2. Falsification and the Criterion of Demarcation

Much of Popper’s early work in the philosophy of science focuses on what he calls the problem of demarcation, or the problem of distinguishing scientific (or empirical) theories from non-scientific theories. In particular, Popper aims to capture the logical or methodological differences between scientific disciplines, such as physics, and non-scientific disciplines, such as myth-making, philosophical metaphysics, Freudian psychoanalysis, and Marxist social criticism.

Popper’s proposals concerning demarcation can be usefully seen as a response to the verifiability criterion of demarcation proposed by logical empiricists, such as Carnap and Schlick. According to this criterion, a statement is cognitively meaningful if and only if it is, in principle, possible to verify. This criterion is intended to, among other things, capture the idea that the claims of empirical science are meaningful in a way that the claims of traditional philosophical metaphysics are not. For example, this criterion entails that claims about the locations of mid-sized objects are meaningful, since one can, in principle, verify them by going to the appropriate location. By contrast, claims about the fundamental nature of causation are not meaningful.

While Popper shares the belief that there is a qualitative difference between science and philosophical metaphysics, he rejects the verifiability criterion for several reasons. First, it counts existential statements (like “unicorns exist”) as scientific, even though there is no way of definitively showing that they are false. After all, the mere fact that one has failed to see a unicorn in a particular place does not establish that unicorns could not be observed in some other place. Second, it inappropriately counts universal statements (like “all swans are white”) as meaningless simply because they can never be conclusively verified. These sorts of universal claims, though, are common within science, and certain observations (like the observation of a black swan) can clearly show them to be false. Finally, the verifiability criterion is by its own lights not meaningful, since it cannot be verified.

Partially in response to worries such as these, the logical empiricists’ later work abandons the verifiability criterion of meaning and instead emphasizes the importance of the empirical confirmation of scientific theories. Popper, however, argues that verification and confirmation can play no role in formulating a satisfactory criterion of demarcation. Instead, Popper proposes that scientific theories are characterized by being bold in two related ways. First, scientific theories regularly disagree with accepted views of the world based on common sense or previous theoretical commitments. To an uneducated observer, for example, it may seem obvious that Earth is stationary, while the sun moves rapidly around it. However, Copernicus posited that Earth in fact revolves around the sun. In a similar way, it does not seem as though a tree and a human share a common ancestor, but this is what Darwin’s theory of evolution by natural selection claims. As Popper notes, however, this sort of boldness is not unique to scientific theories, since most mythological and metaphysical theories also make bold, counterintuitive claims about the nature of reality. For example, the accounts of world creation provided by various religions would count as bold in this sense, but this does not mean that they thereby count as scientific theories.

With this in mind, he goes on to argue that scientific theories are distinguished from non-scientific theories by a second sort of boldness: they make testable claims that future observations might reveal to be false. This boldness thus amounts to a willingness to take a risk of being wrong. On Popper’s view, scientists investigating a theory make repeated, honest attempts to falsify the theory, whereas adherents of pseudoscientific or metaphysical theories routinely take measures to make the observed reality fit the predictions of the theory. Popper describes his proposal as follows:

Thus my proposal was, and is, that it is this second boldness, together with the readiness to look for tests and refutations, which distinguished “empirical” science from non-science, and especially from pre-scientific myths and metaphysics (1974, pp. 980-981).

In other places, Popper calls attention to the fact that scientific theories are characterized by possessing potential falsifiers—that is, that they make claims about the world that might be discovered to be false. If these claims are, in fact, found to be false, then the theory as a whole is said to be falsified. Non-scientific theories, by contrast, do not have any such potential falsifiers—there is literally no possible observation that could serve to falsify these theories.

Popper’s falsificationist proposal differs from the verifiability criterion in several important ways. First, Popper does not hold that non-scientific claims are meaningless. Instead, he argues that such unfalsifiable claims can often serve important roles in both scientific and philosophical contexts, even if we are incapable of ascertaining their truth or falsity. Second, while Popper is a realist who holds that scientific theories aim at the truth (see Section 4), he does not think that empirical evidence can ever provide us with grounds for believing that a theory is either true or likely to be true. In this sense, Popper is a fallibilist who holds that while the particular unfalsified theory we have adopted might be true, we could never know this to be the case. For these same reasons, Popper holds that it is impossible to provide justification for one’s belief that a particular scientific theory is true. Finally, where others see science progressing by confirming the truth of various particular claims, Popper describes science as progressing on an evolutionary model, with observations selecting against unfit theories by falsifying them.

a. Popper on Physics and Psychoanalysis

In order to see how falsificationism works in practice, it will help to consider one of Popper’s most memorable examples: the contrast between Einstein’s theory of general relativity and the theories of psychoanalysis defended by Sigmund Freud and Alfred Adler. We might roughly summarize the theories as follows:

General relativity (GR): Einstein’s theory of special relativity posits that the observed speed of light in a vacuum will be the same for all observers, regardless of which direction or at what velocity these observers are themselves moving. GR allows this theory to be applied to cases where acceleration or gravity plays a role, specifically by treating gravity as a sort of distortion or bend in space-time created by massive objects.

Psychoanalysis: The theory of psychoanalysis holds that human behavior is driven at least in part by unconscious desires and motives. For example, Freud posited the existence of the id, an unconscious part of the human psyche that aims toward gratifying instinctive desires, regardless of whether this is rational. However, the desires of the id might be mediated or superseded in certain circumstances by its interaction with both the self-interested ego and the moral superego.

As we can see, both theories make bold, counter-intuitive claims about the fundamental nature of reality. Moreover, both theories can account for previously observed phenomena; for example, GR allows for an accurate description of the observed advance of the perihelion of Mercury, while psychoanalysis entails that it is possible for people to consistently act in ways that are against their own long-term best interest. Finally, both of these theories enjoyed significant support among academic researchers when Popper was first writing about these issues.

Popper argues, however, that GR is scientific while psychoanalysis is not. The reason for this has to do with the testability of Einstein’s theory. As a young man, Popper was especially impressed by Arthur Eddington’s 1919 test of GR, which involved observing during a solar eclipse the degree to which the light from distant stars was shifted when passing by the sun. Importantly, the predictions of GR regarding the magnitude of this shift disagreed with those of the then-dominant theory of Newtonian mechanics. Eddington’s observation thus served as a crucial experiment for deciding between the theories, since it was impossible for both theories to give accurate predictions. Of necessity, at least one theory would be falsified by the experiment, which would provide strong reason for scientists to accept its unfalsified rival. On Popper’s view, the continual effort by scientists to design and carry out these sorts of potentially falsifying experiments played a central role in theory choice and clearly distinguished scientific theorizing from other sorts of activities. Popper also takes care to note that insofar as GR was not a unified field theory, there was no question of GR’s being the complete truth, as Einstein himself repeatedly emphasized. The scientific status of GR, then, had nothing to do with either (1) the truth of GR as a general theory of physics (the theory was already known to be false) or (2) the confirmation of GR by evidence (one cannot confirm a false theory).

In contrast to such paradigmatically scientific theories as GR, Popper argues that non-scientific theories such as Freudian psychoanalysis do not make any predictions that might allow them to be falsified. The reason for this is that these theories are compatible with every possible observation. On Popper’s view, psychoanalysis simply does not provide us with adequate details to rule out any possible human behavior. Absent these sorts of precise predictions, the theory can be made to fit with, and to provide a purported explanation of, any observed behavior whatsoever.

To illustrate this point, Popper offers the example of two men, one who pushes a child into the water with the intent of drowning it, and another who dives into the water in order to save the child. Popper notes that psychoanalysis can explain both of these seemingly contradictory actions. In the first case, the psychoanalyst can claim that the action was driven by a repressed component of the (unconscious) id, and in the second case, that the action resulted from a successful sublimation of this exact same sort of desire by the ego and superego. The point generalizes: regardless of how a person actually behaves, psychoanalysis can be used to explain the behavior. This, in turn, prevents us from formulating any crucial experiments that might serve to falsify psychoanalysis. Popper writes:

The point is very clear. Neither Freud nor Adler excludes any particular person’s acting in any particular way, whatever the outward circumstances. Whether a man sacrificed his life to rescue a drowning child (a case of sublimation) or whether he murdered the child by drowning (a case of repression) could not possibly be predicted or excluded by Freud’s theory (1974, p. 985).

Popper allows that there are often legitimate purposes for positing non-scientific theories, and he argues that theories which start out as non-scientific can later become scientific, as we determine methods for generating and testing specific predictions based on these theories. Popper offers the example of Copernicus’s theory of a sun-centered universe, which initially yielded no potentially falsifying predictions, and so would not have counted as scientific by Popper’s criteria. However, later astronomers determined ways of testing Copernicus’s hypothesis, thus rendering it scientific. For Popper, then, the demarcation between scientific and non-scientific theories is not grounded in the nature of the entities posited by theories, in the truth or usefulness of theories, or even in the degree to which we are justified in believing in such theories. Instead, falsification provides a methodological distinction based on the unique role that observation and evidence play in scientific practice.

b. Auxiliary and Ad Hoc Hypotheses

While Popper consistently defends a falsification-based solution to the problem of demarcation throughout his published work, his own explications of it include a number of qualifications to ensure a better fit with the realities of scientific practice. It is in this context that Popper introduces several of his more notable contributions to the philosophy of science, including the distinction between auxiliary and ad hoc hypotheses, basic sentences, and degrees of verisimilitude.

One immediate objection to the simple proposal regarding falsification sketched in the previous section is based on the Duhem-Quine thesis, according to which it is in many cases impossible to test scientific theories in isolation. For example, suppose that a group of investigators uses GR to deduce a prediction about the perihelion of Mercury, but then discovers that this prediction disagrees with their measurements. This failure might lead them to conclude that GR is false; however, the failure of the prediction might also plausibly be blamed on the falsity of some other proposition that the scientists relied on to deduce the apparently falsifying prediction. There are generally a large number of such propositions, concerning everything from the absence of human error to the accuracy of the scientific theories underlying the construction and application of the measuring equipment.
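The logical structure of this point can be put schematically (a standard reconstruction of the Duhem-Quine thesis, not a formula given by Popper himself). If a theory T together with a set of auxiliary assumptions A entails a prediction P,

    (T \wedge A) \vdash P,

then the observation that P is false licenses only the conclusion

    \neg (T \wedge A), \quad \text{that is,} \quad \neg T \vee \neg A,

which leaves open whether the theory itself or one of the auxiliary assumptions is to blame for the failed prediction.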

Popper recognizes that scientists routinely attribute the failure of experiments to factors such as these, and further grants that there is in many cases nothing objectionable about their doing so. On Popper’s view, the distinctive mark of scientific inquiry concerns the investigators’ responses to failed predictions in cases where they do not abandon the falsified theory altogether. In particular, Popper argues that a scientific theory can be legitimately saved from falsification by the introduction of an auxiliary hypothesis that allows for the generation of new, falsifiable predictions. Popper offers an example taken from the early 19th century, when astronomers noticed that the orbit of Uranus deviated significantly from what Newtonian mechanics seemed to predict. In this case, the scientists did not treat Newton’s laws as being falsified by such an observation. Instead, they considered the auxiliary hypothesis that there existed an additional and so far unobserved planet that was influencing the orbit of Uranus. They then used this auxiliary hypothesis, together with the equations of Newtonian mechanics, to predict where this planet must be located. Their predictions turned out to be successful, and Neptune was discovered in 1846.

Popper contrasts this legitimate, scientific method of theory revision with the illegitimate, non-scientific use of ad hoc hypotheses to rescue theories from falsification. Here, an ad hoc hypothesis is one that does not allow for the generation of new, falsifiable predictions. Popper gives the example of Marxism, which he argues had originally made definite predictions about the evolution of society: the capitalist, free-market system would self-destruct and be replaced by joint ownership of the means of production, and this would happen first in the most highly developed economies. By the time Popper was writing in the mid-20th century, however, it seemed clear to him that these predictions were false: free-market economies had not self-destructed, and the first communist revolutions had happened in relatively undeveloped economies. The proponents of Marxism, however, neither abandoned the theory as falsified nor introduced any new, falsifiable auxiliary hypotheses that might account for the failed predictions. Instead, they adopted ad hoc hypotheses that immunized Marxism against any potentially falsifying observations whatsoever. For example, the continued persistence of capitalism might be blamed on the actions of counter-revolutionaries, without any account of which specific actions these were or of what new predictions about society we should expect instead. Popper concludes that, while Marxism had originally been a scientific theory:

It broke the methodological rule that we must accept falsification, and it immunized itself against the most blatant refutations of its predictions. Ever since then, it can be described only as non-science—as a metaphysical dream, if you like, married to a cruel reality (1974, p. 985).

c. Basic Sentences and the Role of Convention

A second complication for the simple theory of falsification just described concerns the character of the observations that count as potential falsifiers of a theory. The problem here is that decisions about whether to accept an apparently falsifying observation are not always straightforward. For example, there is always the possibility that a given observation is not an accurate representation of the phenomenon but instead reflects theoretical bias or measurement error on the part of the observer(s). Examples of this sort of phenomenon are widespread and occur in a variety of contexts: students getting the “wrong” results on lab tests, a small group of researchers reporting results that disagree with those obtained by the larger research community, and so on.

In any specific case in which bias or error is suspected, Popper notes that researchers might introduce a falsifiable, auxiliary hypothesis allowing us to test this. And in many cases, this is just what they do: students redo the test until they get the expected results, or other research groups attempt to replicate the anomalous result obtained. Popper argues that this technique cannot solve the problem in general, however, since any auxiliary hypotheses researchers introduce and test will themselves be open to dispute in just the same way, and so on ad infinitum. If science is to proceed at all, then, there must be some point at which the process of attempted falsification stops.

In order to resolve this apparently vicious regress, Popper introduces the idea of a basic statement, which is an empirical claim that can be used both to determine whether a given theory is falsifiable, and thus scientific, and, where appropriate, to corroborate falsifying hypotheses. According to Popper, basic statements are “statements asserting that an observable event is occurring in a certain individual region of space and time” (1959, p. 85). More specifically, basic statements must be both singular and existential (the formal requirement) and be testable by intersubjective observation (the material requirement). On Popper’s view, “there is a raven in space-time region k” would count as a basic statement, since it makes a claim about an individual raven whose existence, or lack thereof, could be determined by appropriately located observers. By contrast, the negative existential claim “there are no ravens in space-time region k” does not do this, and thus fails to qualify as a basic statement.
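To make the formal requirement concrete, the two claims can be rendered in first-order notation (an illustrative formalization; the symbols are not Popper’s own). The basic statement corresponds to

    \exists x\, (\mathrm{Raven}(x) \wedge \mathrm{In}(x, k)),

a singular existential statement about the specific, observable region k, whereas its negation,

    \neg \exists x\, (\mathrm{Raven}(x) \wedge \mathrm{In}(x, k)), \quad \text{equivalently} \quad \forall x\, (\mathrm{In}(x, k) \rightarrow \neg \mathrm{Raven}(x)),

is a universal claim about the whole region and therefore fails the formal requirement.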

In order to avoid the infinite regress alluded to earlier, where basic statements themselves must be tested in order to justify their status as potential falsifiers, Popper appeals to the role played by convention and what he calls the “relativity of basic statements.” He writes as follows:

Every test of a theory, whether resulting in its corroboration or falsification, must stop at some basic statement or other which we decide to accept. If we do not come to any decision, and do not accept some basic statement or other, then the test will have led nowhere… This procedure has no natural end. Thus if the test is to lead us anywhere, nothing remains but to stop at some point or other and say that we are satisfied, for the time being. (1959, p. 86)

From this, Popper concludes that a given statement’s counting as a basic statement requires the consensus of the relevant scientific community—if the community decides to accept it, it will count as a basic statement; if the community does not accept it as basic, then an effort must be made to test the statement by using it together with other statements to deduce a statement that the relevant community will accept as basic. Finally, if the scientific community cannot reach a consensus on what would count as a falsifier for the disputed statement, the statement itself, despite initial appearances, may not actually be empirical or scientific in the relevant sense.

d. Induction, Corroboration, and Verisimilitude

Falsification also plays a key role in Popper’s proposed solution to David Hume’s infamous problem of induction. On Popper’s interpretation, Hume’s problem involves the impossibility of justifying belief in general laws based on evidence that concerns only particular instances. Popper agrees with Hume that inductive reasoning in this sense could not be justified, and he thus rejects the idea that empirical evidence regarding particular individuals, such as successful predictions, is in any way relevant to confirming the truth of general scientific laws or theories. This places Popper’s view in explicit contrast to logical empiricists such as Carnap and Hempel, who had developed extensive, mathematical systems of inductive logic intended to explicate the degree of confirmation of scientific theories by empirical evidence.

Popper argues that there are in fact two closely related problems of induction: the logical problem of induction and the psychological problem of induction. The first problem concerns the possibility of justifying belief in the truth or falsity of general laws based on empirical evidence that concerns only specific individuals. Popper holds that Hume’s argument concerning this problem “establishes for good that all our universal laws or theories remain forever guesses, conjectures, [and] hypotheses” (1974, p. 1019). However, Popper claims that while a successful prediction is irrelevant to confirming a law, a failed prediction can immediately falsify it. On Popper’s view, then, observing 1,000 white swans does nothing to increase our confidence that the hypothesis “all swans are white” is true; however, the observation of a single black swan can, subject to the caveats mentioned in previous sections, falsify this same hypothesis.
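The asymmetry Popper relies on here can be stated schematically (a standard logical gloss on the example, not Popper’s own notation). No finite conjunction of positive instances entails the universal generalization,

    \mathrm{Swan}(a_1) \wedge \mathrm{White}(a_1) \wedge \dots \wedge \mathrm{Swan}(a_{1000}) \wedge \mathrm{White}(a_{1000}) \nvdash \forall x\, (\mathrm{Swan}(x) \rightarrow \mathrm{White}(x)),

whereas a single counterexample deductively refutes it:

    \mathrm{Swan}(b) \wedge \neg \mathrm{White}(b) \vdash \neg \forall x\, (\mathrm{Swan}(x) \rightarrow \mathrm{White}(x)).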

In contrast to the logical problem of induction, the psychological problem of induction concerns the possibility of explaining why reasonable people nevertheless have the expectation that unobserved instances will obey the same general laws as did previously observed instances. Hume tries to resolve the psychological problem by appeal to habit or custom, but Popper rejects this solution as inadequate, since it suggests that there is a “clash between the logic and the psychology of knowledge” (1974, p. 1019) and hence that people’s beliefs in general laws are fundamentally irrational.

Popper proposes to solve these twin problems of induction by offering an account of theory preference that does not rely upon inductive inference and thus avoids Hume’s problems altogether. While the technical details of this account evolve throughout his writings, he consistently emphasizes two main points. First, he holds that a theory with greater informative content is to be preferred to one with less content. Here, informative content is a measure of how much a theory rules out; roughly speaking, a theory with more informative content makes a greater number of empirical claims, and thus has a higher degree of falsifiability. Second, Popper holds that a theory is corroborated by passing severe tests, or “by predictions which were highly improbable in the light of our previous knowledge (previous to the theory which was tested and corroborated)” (1963, p. 220).
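A common way of making the first point precise (a formalization drawn from the surrounding literature rather than from this entry) ties informative content inversely to logical probability, for instance

    \mathrm{Ct}(H) = 1 - p(H),

so that a logically stronger theory, which rules out more possible states of affairs, has both higher content and lower probability, and therefore more ways of being falsified.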

It is important to distinguish Popper’s claim that a theory is corroborated by surviving a severe test from the logical empiricist view that a theory is inductively confirmed by successfully predicting events that, were the theory to have been false, would have been highly unlikely. According to the latter view, a successful prediction of this sort, subject to certain caveats, provides evidence that the theory in question is actually true. The question of theory choice is tightly tied to that of confirmation: scientists should adopt whichever theory is most probable in light of the available evidence. On Popper’s view, by contrast, corroboration provides no evidence whatsoever that the theory in question is true, or even that the theory is preferable to a so-far-untested but still unfalsified rival. Instead, a corroborated theory has shown merely that it is the sort of theory that could be falsified and thus can be legitimately classified as scientific. While a corroborated theory should obviously be preferred to an already falsified rival (see Section 2), the real work here is being done by the falsified theory, which has taken itself out of contention.

While Popper consistently rejects the idea that we are justified in believing that non-falsified, well-corroborated scientific theories with high levels of informative content are either true or likely to be true, his work on degrees of verisimilitude explores the idea that such theories are closer to the truth than were the falsified theories that they had replaced. The basic idea is as follows:

  1. For a given statement H, let the content of H be the class of all of the logical consequences of H. If H is true, then all of the members of this class are true; if H is false, however, then only some members of this class are true, since every false statement has at least some true consequences.
  2. The content of H can be broken into two parts: the truth content, consisting of all the true consequences of H, and the falsity content, consisting of all of the false consequences of H.
  3. The verisimilitude of H is defined as the difference between the truth content of H and the falsity content of H. This is intended to capture the idea that a theory with greater verisimilitude will entail more truths and fewer falsehoods than does a theory with less verisimilitude (see the symbolic sketch below).
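In symbols, the idea can be sketched as follows (a minimal rendering of Popper’s qualitative definition; the notation varies across presentations):

    \mathrm{Vs}(H) = \mathrm{Ct}_T(H) - \mathrm{Ct}_F(H),

where Ct_T(H) is the truth content and Ct_F(H) the falsity content of H. In Popper’s comparative formulation, a theory H2 is closer to the truth than a theory H1 just in case H2 has at least as much truth content and no more falsity content than H1, with at least one of these two comparisons being strict.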

With this definition in hand, it might now seem that Popper could incorporate truth into his account of theory preference: non-falsified theories with high levels of informative content are closer to the truth than either the falsified theories they replaced or their unfalsified but less informative competitors. Unfortunately, however, this definition does not work, as arguments from Tichý (1974), Miller (1974), Harris (1974), and others show. Tichý and Miller in particular demonstrate that Popper’s proposed definition cannot be used to compare the relative verisimilitude of false theories, which was Popper’s main purpose in introducing the notion of verisimilitude. While Popper (1976) explores ways of modifying his proposal to deal with these problems, he is never able to provide a satisfactory formal definition of verisimilitude. His work in this area is nevertheless invaluable in identifying a problem that has continued to interest many contemporary researchers.

3. Criticisms of Falsificationism

While Popper’s account of scientific methodology has continued to be influential, it has also faced a number of serious objections. These objections, together with the emergence of alternative accounts of scientific reasoning, have led many philosophers of science to reject Popper’s falsificationist methodology. While a comprehensive list of these criticisms and alternatives is beyond the scope of this entry, interested readers are encouraged to consult Kuhn (1962), Salmon (1967), Lakatos (1970, 1980), Putnam (1974), Jeffrey (1975), Feyerabend (1975), Hacking (1983), and Howson and Urbach (1989).

One criticism of falsificationism involves the relationship between theory and observation. Thomas Kuhn, among others, argues that observation is itself strongly theory-laden, in the sense that what one observes is often significantly affected by one’s previously held theoretical beliefs. Because of this, those holding different theories might report radically different observations, even when both are observing the same phenomena. For example, Kuhn argues that those working within the paradigm provided by classical, Newtonian mechanics may genuinely have different observations than those working within the very different paradigm of relativistic mechanics.

Popper’s account of basic sentences suggests that he clearly recognizes both the existence of this sort of phenomenon and its potential to cause problems for attempts to falsify theories. His solution to it, however, crucially depends on the ability of the overall scientific community to reach a consensus as to which statements count as basic and thus can be used to formulate tests of the competing theories. This remedy, however, looks less attractive to the extent that advocates of different theories consistently find themselves unable to reach an agreement on what sentences count as basic. For example, it is important to Popper’s example of the Eddington experiment that both proponents of classical mechanics and those of relativistic mechanics could recognize Eddington’s reports of his observations as basic sentences in the relevant sense—that is, certain possible results would falsify the Newtonian laws of classical mechanics, while other possible results would falsify GR. If, by contrast, adherents of rival theories consistently disagreed on whether or not certain reports could be counted as basic sentences, this would prevent observations such as Eddington’s from serving any important role in theory choice. Instead, the results of any such potentially falsifying experiment would be interpreted by one part of the community as falsifying a particular theory, while a different section of the community would demand that these reports themselves be subjected to further testing.  In this way, disagreements over the status of basic sentences would effectively prevent theories from ever being falsified.

This purported failure to clearly distinguish the basic statements that formed the empirical base from other, more theoretical, statements would also have consequences for Popper’s proposed criterion of demarcation, which holds that scientific theories must allow for the deduction of basic sentences whose truth or falsity can be ascertained by appropriately located observers. If, contrary to Popper’s account, there is no distinct category of basic sentences within actual scientific practice, then his proposed method for distinguishing science from non-science fails.

A second, related criticism contends that falsificationism fails to provide an accurate picture of scientific practice. Specifically, many historians and philosophers of science have argued that scientists only rarely give up their theories in the face of failed predictions, even in cases where they are unable to identify testable auxiliary hypotheses; instead, they generally hold on to such theories unless and until a better alternative emerges. Conversely, it has been suggested that scientists routinely adopt and make use of theories that they know to be already falsified.

For example, Lakatos (1970) describes a hypothetical case where pre-Einsteinian scientists discover a new planet whose behavior apparently violates classical mechanics. Lakatos argues that, in such a case, the scientists would surely attempt to account for these observed discrepancies in the way that Popper advocates—for example, by hypothesizing the existence of a hitherto unobserved planet or dust cloud. In contrast to what he takes Popper to be arguing, however, Lakatos contends that the failure of such auxiliary hypotheses would not lead them to abandon classical mechanics, since they had no alternative theory to turn to.

In a similar vein, Putnam (1974) argues that the initial widespread acceptance of Newtonian mechanics had little or nothing to do with falsifiable predictions, since the theory made very few of these. Instead, scientists were impressed by the theory’s success in explaining previously established phenomena, such as the orbits of the planets and the behavior of the tides. Putnam argues that, on Popper’s view, accepting such an uncorroborated theory would seem to be irrational. Finally, Hacking (1983) argues that many aspects of ordinary scientific practice, including a wide variety of observations and experiments, cannot plausibly be construed as attempts to falsify or corroborate any particular theory or hypothesis. Instead, scientists regularly perform experiments that have little or no bearing on their current theories and measure quantities about which these theories do not make any specific claims.

When considering the cogency of such criticisms, it is worth noting several things. First, it is worth recalling that Popper defends falsificationism as a normative, methodological proposal for how science ought to work in certain sorts of cases and not as an empirical description intended to accurately capture all aspects of historical scientific practice. Second, Popper does not commit himself to the implausible thesis that theories yielding false predictions about a particular phenomenon must immediately be abandoned, even if it is not apparent which auxiliary hypotheses must change. This is especially true in the absence of any rival theory yielding a correct prediction. For example, Newtonian mechanics had well-known problems with predicting certain sorts of phenomena, such as the orbit of Mercury, in the years preceding Einstein’s proposals regarding special and general relativity. Popper’s proposal does not entail that these failures of prediction should have led nineteenth century scientists to abandon this theory.

This being said, Popper himself argues that the methodology of falsificationism has played an important role in the history of science and that adopting his proposal would not require a wholesale revision of existing scientific methodology. If it turns out, however, that scientists rarely, if ever, choose theories on the basis of crucial experiments that falsify one theory or another, then Popper’s methodological proposal looks considerably less appealing.

A final criticism concerns Popper’s account of corroboration and the role it plays in theory choice. Popper’s deductive account of theory testing and adoption posits that it is rational to choose highly informative, well-corroborated theories, even though we have no inductive grounds for thinking that these theories are likely to be true. For example, Popper explicitly rejects the idea that corroboration is intended as an analogue to the subjective probability or logical probability that a theory is true, given the available evidence. This idea is central to both Popper’s proposed solution to the problem of induction and to his criticisms of competing inductivist or “Bayesian” programs.

Many philosophers of science, however, including Salmon (1967, 1981), Jeffrey (1975), Howson (1984a), and Howson and Urbach (1989), have objected to this aspect of Popper’s account. One line of criticism has focused on the extent to which Popper’s falsificationism offers a legitimate alternative to the inductivist proposals that Popper criticizes. For example, Jeffrey (1975) points out that it is just as difficult to conclusively falsify a hypothesis as it is to conclusively verify one, and he argues that Bayesianism, with its emphasis on the degree to which empirical evidence supports a hypothesis, is much more closely aligned with scientific practice than Popper’s program.

A related line of objection has focused on Popper’s contention that it is rational for scientists to rely on corroborated theories, a claim that plays a central role in his proposed solution to the problem of induction. Urbach (1984) argues that, insofar as Popper is committed to the claim that every universal hypothesis has zero probability of being true, he cannot explain the rationality of adopting a corroborated theory over an already falsified one, since both have the same probability (zero) of being true. Taking a different tack, Salmon (1981) questions whether, on Popper’s account, it would be rational to use corroborated hypotheses for the purposes of prediction. After all, corroboration is entirely a matter of hypotheses’ past performance—a corroborated hypothesis is one that has survived severe empirical tests. Popper’s account, however, does not provide us with any reason for thinking that this hypothesis will make more accurate predictions about the future than any one of the infinite number of competing uncorroborated hypotheses that are also logically compatible with all of the evidence observed up to this point.

If these objections concerning corroboration are correct, it looks as though Popper’s account of theory choice either (1) is vulnerable to the same sorts of problems and puzzles that plague accounts of theory choice based on induction or (2) does not work as an account of theory choice at all.

While the sorts of objections mentioned here have led many to abandon falsificationism, David Miller (1998) provides a recent, sustained attempt to defend a Popperian-style critical rationalism. For more details on debates concerning confirmation and induction, see the entries on Confirmation and Induction and Evidence.

4. Realism, Quantum Mechanics, and Probability

While Popper holds that it is impossible for us to justify claims that particular scientific theories are true, he also defends the realist view that “what we attempt in science is to describe and (so far as possible) explain reality” (1975, p. 40). While Popper grants that realism is, according to his own criteria, an irrefutable metaphysical view about the nature of reality, he nevertheless thinks we have good reasons for accepting realism and for rejecting anti-realist views such as idealism or instrumentalism. In particular, he argues that realism is both part of common sense and entailed by our best scientific theories. By contrast, he contends that the most prominent arguments for anti-realism are based on a “mistaken quest for certainty, or for secure foundations on which to build” (1975, p. 42). Once one accepts the impossibility of securing such certain knowledge, as Popper contends we ought to do, the appeal of these sorts of arguments is considerably diminished.

Popper consistently emphasizes that scientific theories should be interpreted as attempts to describe a mind-independent reality. Because of this, he rejects the Copenhagen interpretation of quantum mechanics, in which the act of human measurement is seen as playing a fundamental role in collapsing the wave-function and randomly causing a particle to assume a determinate position or momentum. In particular, Popper opposes the idea, which he associates with the Copenhagen interpretation, that the probabilistic equations describing the results of potential measurements of quantum phenomena are about the subjective states of the human observers, rather than about mind-independently existing physical properties, such as the positions or momenta of particles.

It is in the context of this debate over quantum mechanics that Popper first introduces his propensity theory of probability. This theory’s applicability, however, extends well beyond the quantum world, and Popper argues that it can be used to interpret the sorts of claims about probability that arise both in other areas of science and in everyday life. Popper’s propensity theory holds that probabilities are objective claims about the mind-independent external world and that it is possible for there to be single-case probabilities for non-recurring events.

Popper proposes his propensity theory as a variant of the relative frequency theories of probability defended by logical positivists such as Richard von Mises and Hans Reichenbach. According to simple versions of frequency theory, the probability of an event of type e can be defined as the relative frequency of e in a large, or perhaps even infinite, reference class. For example, the claim that the “the probability of getting a six on a fair die is 1/6” can be understood as the claim that, in a long sequence of rolls with a fair die (the reference class), six would come up 1/6 of the time. The main alternatives to frequency theory that concern Popper are logical and subjective theories of probability, according to which claims about probability should be understood as claims about the strength of evidence for or degree of belief in some proposition. On these views, the claim that “the probability of getting a six on a fair die is 1/6” can be understood as a claim about our lack of evidence—if all we know is that the die is fair, then we have no reason to think that any particular number, such as a six, is more likely to come up on the next roll than any of the other five possible numbers.
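A small simulation can make the frequency reading concrete (an illustrative sketch in Python; the numbers and the setup are assumptions introduced here, not anything drawn from Popper or the frequency theorists):

    import random

    def relative_frequency(num_rolls, outcome=6):
        # Relative frequency of `outcome` in a reference class of fair-die rolls.
        hits = sum(1 for _ in range(num_rolls) if random.randint(1, 6) == outcome)
        return hits / num_rolls

    # On the frequency reading, "the probability of a six is 1/6" is a claim about
    # the relative frequency of sixes in a long reference class of rolls.
    for n in (10, 1_000, 100_000):
        print(n, relative_frequency(n))

    # The relative frequency typically settles near 1/6 as the reference class grows,
    # but the theory says nothing about the single next roll considered on its own.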

Like other defenders of frequency theories, Popper argues that logical or subjective theories incorrectly interpret scientific claims about probability as being about the scientific investigators, and the evidence they have available to them, rather than the external world they are investigating. However, Popper argues that traditional frequency theories cannot account for single-case probabilities. For example, a frequency theorist would have no problem answering questions about “the probability that it will rain on an arbitrarily chosen August day,” since August days form a reference class. By contrast, questions about the probability that it will rain on a particular, future August day raise problems, since each particular day only occurs once. At best, frequency theories allow us to say that the probability of it raining on that specific day is either 0 or 1, though we do not know which.

On Popper’s view, the failure to provide adequate treatment of single-case probabilities is a serious one, especially given what he saw as the centrality of such probabilities in quantum mechanics. To resolve this issue, Popper proposes that probabilities should be treated as the propensities of experimental setups to produce certain results, rather than as being derived from the reference class of results that were produced by running these experiments. On the propensity view, the results of experiments are important because they allow us to test hypotheses concerning the values of certain probabilities; however, the results are not themselves part of the probability. Popper argues that this solves the problem of single-case probability, since propensities can exist even for experiments that only happen once. Importantly, Popper does not require that these experiments utilize human intervention—instead, nature can itself run experiments, the results of which we can observe. For example, the propensity theory should, in theory, be able to make sense of claims about the probability that it will rain on a particular day, even though the experimental setup in this case is constituted by naturally occurring, meteorological phenomena.

Popper argues that the propensity theory of probability helps provide the grounds for a realist solution to the measurement problem within quantum mechanics. As opposed to the Copenhagen interpretation, which posits that the probabilities discussed in quantum mechanics reflect the ignorance of the observers, Popper argues that these probabilities are in fact the propensities of the experimental setups to produce certain outcomes. Interpreted this way, he argues that they raise no interesting metaphysical dilemmas beyond those raised by classical mechanics and that they are equally amenable to a realist interpretation. Popper gives the example of tossing a penny, which he argues is strictly analogous to the experiments performed in quantum mechanics: if our experimental setup consists of simply tossing the penny, then the probability of getting heads is 1/2. If the experimental setup, however, is expanded to include the results of our looking at the penny, and thus includes the outcome of the experiment itself, then the probability will be either 0 or 1. This does not, though, involve positing any collapse of the wave-function caused merely by the act of human observation. Instead, what has occurred is simply a change in the experimental setup. Once we include the measurement result in our setup, the probability of a particular outcome will trivially become 0 or 1.
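Popper’s penny example can be mimicked in a few lines of code (an illustrative sketch, not Popper’s own formalism); the point is that what changes when we “look” is merely which experimental setup we are describing, not anything about the coin:

    import random

    def toss():
        # One run of the narrow setup: toss a fair penny without looking at it.
        return random.choice(["heads", "tails"])

    # Setup 1: the toss alone. The propensity of this setup to produce heads is 1/2,
    # which the long-run frequency of repeated runs reflects.
    runs = [toss() for _ in range(100_000)]
    print(runs.count("heads") / len(runs))  # roughly 0.5

    # Setup 2: the toss plus the recorded result. Once the outcome is included in
    # the description of the setup, the probability of heads is trivially 0 or 1.
    observed = toss()
    print(1.0 if observed == "heads" else 0.0)

    # No observation-induced collapse is needed: we have simply redescribed the
    # experiment so that it contains its own outcome.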

5. Methodology in the Social Sciences

Much of Popper’s early work on the methodology of science is concerned with physics and closely related fields, especially those where experimentation plays a central role. On Popper’s view, which was discussed in detail in previous sections, these sciences make progress by formulating a theory and then carefully designing experiments and observations aimed at falsifying that theory. The ever-present possibility that a theory might be falsified by these sorts of tests is, on Popper’s view, precisely what differentiates legitimate sciences, such as physics, from non-scientific activities, such as philosophical metaphysics, Freudian psychoanalysis, or myth-making.

This picture becomes somewhat more complicated, however, when we consider methodology in social sciences such as sociology and economics, where experimentation plays a much less central role. On Popper’s view, there are significant problems with many of the methods used in these disciplines. In particular, Popper argues against what he calls historicism, which he describes as “an approach to the social sciences which assumes that historical prediction is their principal aim, and which assumes that this aim is attainable by discovering the ‘rhythms’ or ‘patterns’, the ‘laws’ or ‘trends’ that underlie the evolution of history” (1957, p. 3).

Popper’s central argument against historicism contends that, insofar as the whole of human history is a singular process that occurs only once, it is impossible to formulate and test any general laws about history. This stands in stark contrast to disciplines such as physics, where the formulation and testing of laws plays a central role in making progress. For example, potential laws of gravitation can be tested by observations of planetary motions, by controlled experiments concerning the rates of falling objects near the earth’s surface, or in numerous other ways. If the relevant theories are falsified, scientists can easily respond, for instance, by changing one or more auxiliary hypotheses, and then conducting additional experiments on the new, slightly modified theory. By contrast, a law that purports to describe the future progress of history in its entirety cannot easily be tested in this way. Even if a particular prediction about the occurrence of some particular event is incorrect, there is no way of altering the theory and retesting it—each historical event occurs only once, thus ruling out the possibility of carrying out further tests regarding it. Popper also rejects the claim that it is possible to formulate and test laws of more limited scope, such as those that purport to describe an evolutionary process that occurs in multiple societies, or that attempt to capture a trend within a given society.

Popper’s opposition to historicism is also evident in his objections to what he calls utopian social engineering, which involves attempts by governments to fundamentally restructure the whole of society based on an overall plan or blueprint. On Popper’s view, the problem again concerns the impossibility of carrying out critical tests of the effectiveness of such plans. This impossibility stems from the holism of utopian plans, which involve changing everything at the same time. When the planners’ actions fail to achieve their predicted results, as Popper thinks is inevitably the case with large-scale human interventions in society, the planners have no method for determining what in particular went wrong with their plan. This lack of testability, in turn, means that there is no way for the utopian engineers to improve their plans. This argument, among others, plays a central role in Popper’s critique of Marxism and totalitarianism in The Open Society and its Enemies (1945). More details on Popper’s political philosophy, including his critique of totalitarian societies, can be found in the related article on Popper’s political philosophy.

In place of historicism and utopian holism, Popper argues that the social sciences should embrace both methodological individualism and situational analysis. On Popper’s definition, methodological individualism is the view that the behavior of social institutions should be analyzed in terms of the behaviors of the individual humans that make them up. This individualism is motivated, in part, by Popper’s contention that many important social institutions, such as the market, are not the result of any conscious design but instead arise out of the uncoordinated actions of individuals with widely disparate motives. Scientific hypotheses about the behavior of such unplanned institutions, then, must be formulated in terms of the constituent participants. Popper’s presentation and defense of methodological individualism is closely related to that provided by the Austrian economist Friedrich von Hayek (1942, 1943, 1944), with whom Popper maintained a close personal and professional relationship throughout most of his life. For both Popper and Hayek, the defense of methodological individualism within the social sciences plays a key role in their broader argument in favor of liberal, market economies and against planned economies.

While Popper endorses methodological individualism, he rejects the doctrine of psychologism, according to which laws about social institutions must be reduced to psychological laws concerning the behavior of individuals. Popper objects to this view, which he associates with John Stuart Mill, on the grounds that it ends up collapsing into a form of historicism. The argument can be summarized as follows: once we begin trying to explain or predict the behavior of currently existing institutions in terms of individuals’ psychological motives, we quickly notice that these motives themselves cannot be understood without reference to the broader social environment within which these individuals find themselves. In order to eliminate the reference to the particular social institutions that make up this environment, we are then forced to demonstrate how these institutions were themselves a product of individual motives that had operated within some other previously existing social environment. This, though, quickly leads to an unsustainable regress, since humans always act within particular social environments, and their motives cannot be understood without reference to these environments. The only way out for the advocate of psychologism is to posit that both the origin and evolution of all human institutions can be explained purely in terms of human psychology. Popper argues that there is no historical support for the idea that there was ever such an origin of social institutions. He also argues that this is a form of historicism, insofar as it commits us to discovering laws governing the evolution of society as a whole. As such, it inherits all of the problems mentioned previously.

In place of psychologism, Popper endorses a version of methodological individualism based on situational analysis. On this method, we begin by creating abstract models of the social institutions that we wish to investigate, such as markets or political institutions. In keeping with methodological individualism, these models will contain, among other things, representations of individual agents. However, instead of stipulating that these agents will behave according to the laws governing individual human psychology, as psychologism does, we animate the model by assuming that the agents will respond appropriately according to the logic of the situation. Popper calls this constraint on model building within the social sciences the rationality principle.

Popper recognizes that both the rationality principle and the models built upon it are empirically false: after all, real humans often respond to situations in ways that are irrational and inappropriate. He also rejects, however, the idea that the rationality principle should be treated as a methodological principle that is a priori immune to testing, since part of what makes theories in the social sciences testable is the fact that they make definite claims about individual human behavior. Instead, Popper defends the use of the rationality principle in model building on the grounds that it is generally good policy to avoid blaming the falsification of a model on the inaccuracies introduced by the rationality principle, and that we can learn more if we instead blame the other assumptions of our situational analysis (1994, p. 177). On Popper’s view, the errors introduced by the rationality principle are generally small ones, since humans are for the most part rational. More importantly, holding the rationality principle fixed makes it much easier to formulate crucial tests of rival theories and to make genuine progress in the social sciences. By contrast, if the rationality principle were relaxed, he argues, there would be almost no substantive constraints on model building.

6. Popper’s Legacy

While few of Popper’s individual claims have escaped criticism, his contributions to the philosophy of science are immense. As mentioned earlier, Popper was one of the most important critics of the early logical empiricist program, and the criticisms he leveled against it helped shape the future work of both the logical empiricists and their critics. In addition, while his falsification-based approach to scientific methodology is no longer widely accepted within philosophy of science, it played a key role in laying the groundwork for later work in the field, including that of Kuhn, Lakatos, and Feyerabend, as well as contemporary Bayesianism. It is also plausible that the widespread popularity of falsificationism, both within and outside of the scientific community, has played an important role in reinforcing the image of science as an essentially empirical activity and in highlighting the ways in which genuine scientific work differs from so-called pseudoscience. Finally, Popper’s work on numerous specialized issues within the philosophy of science, including verisimilitude, quantum mechanics, the propensity theory of probability, and methodological individualism, has continued to influence contemporary researchers.

7. References and Further Reading

Popper Selections (1985) is an excellent introduction to Popper’s writings for the beginner, while The Philosophy of Karl Popper (Schilpp 1974) contains an extensive bibliography of Popper’s work published before that date, together with numerous critical essays and Popper’s responses to them. Finally, Unended Quest (1976) is an expanded version of the “Intellectual Autobiography” from Schilpp (1974), and it provides a helpful, non-technical overview of many of Popper’s main works in his own words.

a. Primary Sources

  • 1945. The Open Society and Its Enemies. 2 volumes. London: Routledge.
  • 1957. The Poverty of Historicism. London: Routledge. Originally published as a series of three articles in Economica 42, 43, and 46 (1944-1945).
  • 1959. The Logic of Scientific Discovery. London: Hutchinson. This is an English translation of Logik der Forschung, Vienna: Springer (1935).
  • 1959. “The Propensity Interpretation of Probability.” The British Journal for the Philosophy of Science 10 (37): 25–42.
  • 1963. Conjectures and Refutations: The Growth of Scientific Knowledge. London: Routledge. Fifth edition 1989.
  • 1970. “Normal Science and Its Dangers.” In Criticism and the Growth of Knowledge, edited by Imre Lakatos and Alan Musgrave, 51–58. Cambridge: Cambridge University Press.
  • 1972. Objective Knowledge: An Evolutionary Approach. Oxford: Clarendon Press. Revised edition 1979.
  • 1974. “Replies to My Critics” and “Intellectual Autobiography.” In The Philosophy of Karl Popper, edited by Paul Arthur Schilpp. 2 volumes. La Salle, Ill: Open Court.
  • 1976. Unended Quest. London: Fontana. Revised edition 1984.
  • 1976. “A Note on Verisimilitude.” The British Journal for the Philosophy of Science 27 (2): 147–59.
  • 1978. “Natural Selection and the Emergence of Mind.” Dialectica 32 (3-4): 339–55.
  • 1982. The Open Universe: An Argument for Indeterminism. Edited by W. W. Bartley III. London: Routledge.
  • 1982. Quantum Theory and the Schism in Physics. Edited by W. W. Bartley III. New York: Routledge.
  • 1983. Realism and the Aim of Science. Edited by W. W. Bartley III. New York: Routledge.
  • 1985. Popper Selections. Edited by David W. Miller. Princeton: Princeton University Press.
  • 1994. The Myth of the Framework: In Defense of Science and Rationality. Edited by Mark Amadeus Notturno. London: Routledge.
  • 1999. All Life Is Problem Solving. London: Routledge.

b. Secondary Sources

  • Ackermann, Robert John. 1976. The Philosophy of Karl Popper. Amherst: University of Massachusetts Press.
  • Agassi, Joseph. 2014. Popper and His Popular Critics: Thomas Kuhn, Paul Feyerabend and Imre Lakatos. 2014 edition. New York: Springer.
  • Blaug, Mark. 1992. The Methodology of Economics: Or, How Economists Explain. 2nd edition. New York: Cambridge University Press.
  • Caldwell, Bruce J. 1991. “Clarifying Popper.” Journal of Economic Literature 29 (1): 1–33.
  • Carnap, Rudolf. 1936. “Testability and Meaning.” Philosophy of Science 3 (4): 419–71. Continued in Philosophy of Science 4 (1): 1-40.
  • Carnap, Rudolf. 1995. An Introduction to the Philosophy of Science. New York: Dover. Originally published as Philosophical Foundations of Physics (1966).
  • Carnap, Rudolf.  2003. The Logical Structure of the World and Pseudoproblems in Philosophy. Translated by Rolf A. George. Chicago and La Salle, Ill: Open Court. Originally published in 1928 as Der logische Aufbau der Welt and Scheinprobleme in der Philosophie.
  • Catton, Philip, and Graham MacDonald, eds. 2004. Karl Popper: Critical Appraisals. New York: Routledge.
  • Currie, Gregory, and Alan Musgrave, eds. 1985. Popper and the Human Sciences. Dordrecht: Martinus Nijhoff.
  • Edmonds, David, and John Eidinow. 2002. Wittgenstein’s Poker: The Story of a Ten-Minute Argument Between Two Great Philosophers. Reprint edition. New York: Harper Perennial.
  • Feyerabend, Paul. 1975. Against Method. London; New York: New Left Books. Fourth edition 2010.
  • Fuller, Steve. 2004. Kuhn vs. Popper: The Struggle for the Soul of Science. New York: Columbia University Press.
  • Gattei, Stefano. 2010. Karl Popper’s Philosophy of Science: Rationality without Foundations. London; New York: Routledge.
  • Grünbaum, Adolf. 1976. “Is Falsifiability the Touchstone of Scientific Rationality? Karl Popper Versus Inductivism.” In Essays in Memory of Imre Lakatos, edited by R. S. Cohen, P. K. Feyerabend, and M. W. Wartofsky, 213–52. Dordrecht: Springer Netherlands.
  • Hacking, Ian. 1983. Representing and Intervening: Introductory Topics in the Philosophy of Natural Science. Cambridge; New York: Cambridge University Press.
  • Hacohen, Malachi Haim. 2002. Karl Popper: The Formative Years, 1902–1945: Politics and Philosophy in Interwar Vienna. Cambridge: Cambridge University Press.
  • Hands, Douglas W. 1985. “Karl Popper and Economic Methodology: A New Look.” Economics and Philosophy 1 (1): 83–99.
  • Harris, John H. 1974. “Popper’s Definitions of ‘Verisimilitude.’” The British Journal for the Philosophy of Science 25 (2): 160–66.
  • Hausman, Daniel M. 1985. “Is Falsificationism Unpractised or Unpractisable?” Philosophy of the Social Sciences 15 (3): 313–19.
  • Hayek, Friedrich von. 1942. “Scientism and the Study of Society. Part I.” Economica, New Series, 9 (35): 267–91.
  • Hayek, Friedrich von. 1943. “Scientism and the Study of Society. Part II.” Economica, New Series, 10 (37): 34–63.
  • Hayek, Friedrich von. 1944. “Scientism and the Study of Society. Part III.” Economica, New Series, 11 (41): 27–39.
  • Hempel, Carl G. 1945a. “Studies in the Logic of Confirmation (I.).” Mind, New Series, 54 (213): 1–26.
  • Hempel, Carl G. 1945b. “Studies in the Logic of Confirmation (II.).” Mind, New Series, 54 (214): 97–121.
  • Howson, Colin. 1984a. “Popper’s Solution to the Problem of Induction.” The Philosophical Quarterly 34 (135): 143–47.
  • Howson, Colin. 1984b. “Probabilities, Propensities, and Chances.” Erkenntnis 21 (3): 279–93.
  • Howson, Colin, and Peter Urbach. 1989. Scientific Reasoning: The Bayesian Approach. Chicago: Open Court Publishing. Third edition 2006.
  • Hudelson, Richard. 1980. “Popper’s Critique of Marx.” Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition 37 (3): 259–70.
  • Hume, David. 1993. An Enquiry Concerning Human Understanding: With Hume’s Abstract of A Treatise of Human Nature and A Letter from a Gentleman to His Friend in Edinburgh. Edited by Eric Steinberg. 2nd ed. Indianapolis: Hackett Publishing Company, Inc.
  • Jeffrey, Richard C. 1975. “Probability and Falsification: Critique of the Popper Program.” Synthese 30 (1/2): 95–117.
  • Keuth, Herbert. 2004. The Philosophy of Karl Popper. New York: Cambridge University Press.
  • Kuhn, Thomas S. 1962. The Structure of Scientific Revolutions. Chicago: University of Chicago Press. Third edition 1996.
  • Lakatos, Imre. 1970. “Falsification and the Methodology of Scientific Research Programmes.” In Criticism and the Growth of Knowledge, edited by Imre Lakatos and Alan Musgrave, 91–196. Cambridge: Cambridge University Press.
  • Lakatos, Imre. 1980. The Methodology of Scientific Research Programmes: Volume 1: Philosophical Papers. Cambridge: Cambridge University Press.
  • Lakatos, Imre, and Alan Musgrave, eds. 1970. Criticism and the Growth of Knowledge. Cambridge: Cambridge University Press.
  • Levi, Isaac. 1963. “Corroboration and Rules of Acceptance.” The British Journal for the Philosophy of Science 13 (52): 307–13.
  • Maher, Patrick. 1990. “Why Scientists Gather Evidence.” The British Journal for the Philosophy of Science 41 (1): 103-119.
  • Magee, Bryan. 1985. Philosophy and the Real World: An Introduction to Karl Popper. La Salle, Ill: Open Court.
  • Miller, David. 1974. “Popper’s Qualitative Theory of Verisimilitude.” British Journal for the Philosophy of Science, 166–77.
  • Miller, David. 1998. Critical Rationalism: A Restatement and Defense. Chicago: Open Court.
  • Munz, Peter. 1985. Our Knowledge of the Growth of Knowledge: Popper or Wittgenstein? London; New York: Routledge.
  • O’Hear, Anthony. 1996. Karl Popper: Philosophy and Problems. Cambridge; New York: Cambridge University Press.
  • Putnam, Hilary. 1974. “The ‘Corroboration’ of Theories.” In The Philosophy of Karl Popper, edited by Paul Arthur Schilpp, 221–40. La Salle, Ill: Open Court.
  • Rowbottom, Darrell. 2010. Popper’s Critical Rationalism: A Philosophical Investigation. New York: Routledge.
  • Runde, Jochen. 1996. “On Popper, Probabilities, and Propensities.” Review of Social Economy 54 (4): 465–85.
  • Ruse, Michael. 1977. “Karl Popper’s Philosophy of Biology.” Philosophy of Science 44 (4): 638–61.
  • Salmon, Wesley. 1967. The Foundations of Scientific Inference. Pittsburgh: University of Pittsburgh Press.
  • Salmon, Wesley. 1981. “Rational Prediction.” The British Journal for the Philosophy of Science 32 (2): 115–25.
  • Schilpp, Paul Arthur, ed. 1974. The Philosophy of Karl Popper. 2 volumes. La Salle, Ill: Open Court.
  • Thornton, Stephen. 2014. “Karl Popper.” In The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta.
  • Tichý, Pavel. 1974. “On Popper’s Definitions of Verisimilitude.” The British Journal for the Philosophy of Science 25 (2): 155–60.
  • Urbach, Peter. 1978. “Is Any of Popper’s Arguments against Historicism Valid?” The British Journal for the Philosophy of Science 29 (2): 117–30.


Author Information

Brendan Shea
Email: Brendan.Shea@rctc.edu
Rochester Community and Technical College, Minnesota Center for Philosophy of Science
U. S. A.