Causation: Philosophy of Science

In The Critique of Pure Reason (first published in 1781), the German philosopher Immanuel Kant maintained that causation was one of the fundamental concepts that rendered the empirical world comprehensible to humans. By the beginning of the twenty-first century, psychology was beginning to show just how pervasive human reasoning concerning cause and effect is. Even young children seem to naturally organize their knowledge of the world according to relations of cause and effect.

It is hardly surprising, then, that causation has been a topic of great interest in philosophy, and that many philosophers have attempted to analyze the relationship between cause and effect. Among the more prominent proposals are the following: Causation consists in the instantiation of exceptionless regularities (Hume 1975, 1999; Mill 1856; Hempel 1965; Mackie 1974); causation is to be understood in terms of relations of probabilistic dependence (Reichenbach 1956, Suppes 1970, Cartwright 1983, Eells 1991); causation is the relation that holds between means and ends (Gasking 1955, von Wright 1975, Woodward 2003); causes are events but for which their effects would not have happened (Lewis 1986); causes and effects are connected by physical processes that are capable of transmitting certain types of properties (Salmon 1984, Dowe 2000).

It often happens, however, that advances in science force people to abandon aspects of their common sense picture of the world. For example, Einstein's theories of relativity have forced people to rethink their conceptions of time, space, matter, and energy. What lessons does science teach about the concept of causation?

Russell's Challenge

In 1912, the eminent British philosopher Bertrand Russell delivered his paper "On the Notion of Cause" before the Aristotelian Society. In this paper, he claimed that the notion of cause had no place in a scientific worldview:

All philosophers, of every school, imagine that causation is one of the fundamental axioms or postulates of science, yet, oddly enough, in advanced sciences such as gravitational astronomy, the word "cause" never appears. … To me, it seems that the reason why physics has ceased to look for causes is that, in fact, there are no such things. The law of causality, I believe, like much that passes muster among philosophers, is a relic of a bygone age, surviving, like the monarchy, only because it is erroneously supposed to do no harm. (p. 1)

Russell was not alone in this view. Other writers of the period, such as Ernst Mach (the Austrian physicist and philosopher of science), Karl Pearson (the father of modern statistics), and Pierre Duhem (French physicist, as well as historian and philosopher of science), also argued that causation did not belong in the world of science. This view was shared by the logical positivists, a group of philosophers working primarily in Austria and Germany between the World Wars whose ideas shaped much of philosophy of science in the twentieth century. A general suspicion of causal notions also pervaded a number of fields outside of philosophy, such as statistics and psychology.

Causation in Science

Despite Russell's remark, it is simply false that the word "cause" (and its cognates) does not appear in the advanced sciences. Russell's claim can be readily refuted by perusing any leading science journal. Admittedly, some uses of the word "cause" and its cognates have specific technical meanings, such as talk of "causal structure" in connection with the general theory of relativity, but frequently enough these words are used in their ordinary English sense. To cite just one example, an issue of Physical Review Letters from 2003 contains an article titled "Specific-Heat Anomaly Caused by Ferroelectric Nanoregions in Pb(Mg1/3Nb2/3)O3 and Pb(Mg1/3Ta2/3)O3 Relaxors." Moreover, it has become common in physics to classify a variety of phenomena as "effects": there is the "Hall effect," the "Kondo effect," the "Lamb-shift effect," the "Zeeman effect," and so on. But surely "cause and effect" are an inseparable pair: where there are causes, there are effects that are caused by them, and where there are effects, there are causes that cause them.

The person on the street is more likely to encounter causal claims from the medical sciences, such as: "Cholesterol in the bloodstream causes hardened arteries, which in turn cause heart attacks." While the medical sciences may not be as advanced as Russell's example of gravitational astronomy, it is implausible to think that these causal claims are the result of conceptual confusion, or are otherwise scientifically disreputable.

Despite the falsehood of its most provocative claim, however, Russell's paper does succeed in highlighting a number of important and interesting problems about the role of causation in science.

Anti-Fundamentalism

Although the advanced sciences have hardly eschewed talk of causation, it is true that the deepest physical principles, such as Newton's three laws of motion, his law of universal gravitation, Maxwell's equations governing the electric and magnetic fields, Schrödinger's equation governing the evolution of quantum systems, and Einstein's field equations relating the distribution of mass-energy in the universe with the structure of space and time, make no mention of causation. All of these principles take the form of mathematical equations and act as constraints on possible states of physical systems (under suitable mathematical characterizations). A given sequence of states may be compatible with, for example, Newton's laws of motion, but nothing in those laws explicitly says that certain states (or aspects of those states) cause others. This suggests that the causal relation is not part of the constitution of the world at the deepest metaphysical level, a view that the historian and philosopher of science John Norton labels "anti-fundamentalism" (Norton 2003). Indeed, the world described by fundamental physics is in many ways at odds with the ordinary picture of a world regimented by cause-and-effect relationships.

Asymmetry

People normally think of causation as both asymmetric and temporally biased. It is asymmetric in the sense that if C is a cause of E, then (always? typically?) E is not a cause of C. This claim must be stated with some care. It may be, for instance, that anxiety is a cause of insomnia, which is in turn a cause of anxiety. But it is one's anxiety on Monday evening that causes insomnia on Monday night, which in turn causes anxiety on Tuesday morning. Monday night's insomnia is not both the cause and the effect of one and the same episode of anxiety. Causation is temporally biased in the sense that causes (always? typically?) occur before their effects in time.

By contrast, the fundamental laws of physics mentioned above are all time-reversal invariant. That is, if a particular sequence of states of a physical system is consistent with the laws of physics, then the temporally reversed sequence is also consistent with those laws. The laws of physics do not discriminate between the past and the future in the way that causation does, with two possible exceptions. The first exception involves the statistical laws governing the decay of certain mesons. While these laws exhibit a slight temporal asymmetry, the phenomena in question seem too esoteric to be of much help in understanding the asymmetry of causation.
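
To illustrate what time-reversal invariance amounts to, here is a standard textbook observation (not an argument from the article): for a force that depends only on position, Newton's second law contains time only through a second derivative, so it cannot distinguish the two temporal directions.

```latex
% If x(t) solves Newton's second law for a position-dependent force F, then so
% does the time-reversed trajectory x(-t), since the second derivative is
% unchanged by the substitution t -> -t:
\[
  m\,\frac{d^{2}x(t)}{dt^{2}} = F\bigl(x(t)\bigr)
  \quad\Longrightarrow\quad
  m\,\frac{d^{2}}{dt^{2}}\,x(-t) = F\bigl(x(-t)\bigr).
\]
```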

The second exception is the second law of thermodynamics, which states that the entropy of a closed system can increase but never decrease. Thus a closed system whose entropy is increasing is consistent with the second law, while the temporal reverse of this system is not. The second law of thermodynamics is not, however, a fundamental law. The entropy of a physical system is determined by the physical state of the particles that make up the system, as characterized in terms of ordinary physical parameters such as position and momentum. These particles are in turn governed by the time-reversal invariant laws already mentioned. It is thus something of a mystery how the asymmetric second law of thermodynamics can arise from the underlying symmetric dynamics governing the constituents of thermodynamic systems. One prominent view is that the second law of thermodynamics is the result of de facto temporal asymmetries in the boundary conditions of the universe.

There have been a few attempts to ground the asymmetry of causation in the second law of thermodynamics. The basic idea is that the best characterization of our physical universe will include not only the fundamental laws of physics, but also the statement that in the past our universe was in a state of very low entropy, the so-called "past hypothesis." When entertaining various counterfactual suppositions, one conjoins those suppositions with the laws of physics and the past hypothesis to determine what the world would be like if those suppositions were true. Because people hold fixed features of the past, but not of the future, when entertaining contrary-to-fact suppositions, any changes from the actual world introduced in those suppositions will tend to entail significant changes in the future but only insignificant changes in the past. In this way, macroscopic features of the future will counterfactually depend upon what is true in the present, whereas macroscopic features of the past will not. This asymmetric relation of counterfactual dependence can then serve as the basis of an account of causation (such as that of David Lewis in "Causation" [1986]). If this account is correct, then the existence of an asymmetric causal relation is not guaranteed by the laws of physics but is rather the consequence of contingent asymmetries in the boundary conditions of the world.

The best-known attempt to account for causal asymmetry is the common cause principle, first formulated by the German-American philosopher Hans Reichenbach and presented in his posthumously published book The Direction of Time (1956). For Reichenbach, temporal order and causal order are conceptually intertwined. Reichenbach defines causation in terms of probabilities and temporal order, but temporal order is itself defined in terms of asymmetries in probabilities. Let A and B be two events that are probabilistically correlated; in other words, the probability that A and B will occur together, P(A & B), is greater than the product of the individual probabilities, P(A)P(B). (If the two probabilities are equal, then A and B are said to be probabilistically independent.) An event C is said to "screen off" A from B if it renders them conditionally independent; that is, if P(A & B|C) = P(A|C)P(B|C). If there is an earlier event C that screens off A from B, but no later event that does so, then the trio ABC forms a conjunctive fork open to the future. If there is a later screener-off E, but no earlier one, then ABE is a conjunctive fork open to the past. Finally, if there is both an earlier and a later screener-off, then the result is a closed fork. According to Reichenbach, the overwhelming majority of open forks are open to the future, and this probabilistic asymmetry provides the basis for the distinction between the past and the future. Reichenbach further held that if two events A and B are correlated, and neither is a cause of the other, then there exists a common cause of A and B in their mutual past that screens A off from B.
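
The following is a minimal numerical sketch of screening off; the common cause C and all the probabilities are invented for illustration. Conditional on C, the correlation between A and B vanishes.

```python
# A toy common-cause structure: C causes both A and B; A and B are
# independent conditional on C, but correlated unconditionally.
from itertools import product

p_c = {1: 0.5, 0: 0.5}                 # P(C)
p_a_given_c = {1: 0.8, 0: 0.2}         # P(A=1 | C)
p_b_given_c = {1: 0.8, 0: 0.2}         # P(B=1 | C)

def joint(a, b, c):
    """P(A=a, B=b, C=c), with A and B independent conditional on C."""
    pa = p_a_given_c[c] if a else 1 - p_a_given_c[c]
    pb = p_b_given_c[c] if b else 1 - p_b_given_c[c]
    return p_c[c] * pa * pb

# Unconditionally, A and B are correlated: P(A & B) > P(A)P(B).
p_ab = sum(joint(1, 1, c) for c in (0, 1))
p_a = sum(joint(1, b, c) for b, c in product((0, 1), repeat=2))
p_b = sum(joint(a, 1, c) for a, c in product((0, 1), repeat=2))
print(p_ab, p_a * p_b)                 # 0.34 versus 0.25

# Conditional on C, the dependence disappears: C screens off A from B.
for c in (0, 1):
    pc = p_c[c]
    p_ab_c = joint(1, 1, c) / pc
    p_a_c = sum(joint(1, b, c) for b in (0, 1)) / pc
    p_b_c = sum(joint(a, 1, c) for a in (0, 1)) / pc
    print(c, p_ab_c, p_a_c * p_b_c)    # the two values are equal for each c
```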

Reichenbach believed that his common cause principle was related to the second law of thermodynamics. Think of A & B as one possible state of a physical system, the other possible states being A & not-B, not-A & B, and not-A & not-B. A probability distribution over these states in which A and B are correlated contains information, in a sense that is made precise within the mathematical field of information theory. From a formal perspective, information is inversely related to entropy. Thus a correlation between A and B is like a low-entropy state of a physical system, and it is to be explained in terms of an earlier causal interaction between the system and its external environment.
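
An illustrative calculation (the numbers are invented, continuing the example above): a correlated joint distribution over A and B has lower Shannon entropy, and in that sense carries more information, than the independent distribution with the same marginal probabilities.

```python
# Shannon entropy of two joint distributions over (A, B) with identical
# marginals P(A) = P(B) = 0.5: independent versus correlated.
from math import log2

def entropy(dist):
    """Entropy in bits of a probability distribution given as a list."""
    return -sum(p * log2(p) for p in dist if p > 0)

independent = [0.25, 0.25, 0.25, 0.25]   # A and B uncorrelated
correlated  = [0.34, 0.16, 0.16, 0.34]   # same marginals, A and B correlated

print(entropy(independent))   # 2.0 bits
print(entropy(correlated))    # about 1.90 bits: correlation means lower entropy
```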

There are a number of difficulties facing Reichenbach's common cause principle. The principle seems to fail for certain quantum phenomena involving distant correlations, such as the one featured in the famous thought experiment by the physicists Albert Einstein, Boris Podolsky, and Nathan Rosen, in their 1935 paper "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?" In a simplified version of this setup, two particles form a single system in which the total spin is zero. If the particles are separated, and the spin of each particle is measured along the same axis, they will always be found to have opposite spins. There is thus a correlation between the outcomes of the two measurements. Neither measurement result can be a cause of the other, for the measurements can be conducted at such a great distance that not even a light signal could connect the two. Yet a series of mathematical and empirical results, beginning with the work of the physicist John Bell in 1964, shows that there can be no earlier state of the two-particle system that screens off the measurement outcomes.
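
One standard way of making such results precise, not spelled out in the article, is the CHSH form of Bell's inequality. Here E(a,b) is the expected product of the two measurement outcomes (each +1 or -1) for detector settings a and b; any model in which an earlier common state screens off the two outcomes must obey the bound, while the quantum prediction for the total-spin-zero state violates it for suitably chosen settings, reaching 2√2.

```latex
% CHSH form of Bell's inequality: obeyed by any local, screening-off
% common-cause model, violated by the quantum-mechanical prediction.
\[
  \bigl| E(a,b) + E(a,b') + E(a',b) - E(a',b') \bigr| \;\le\; 2 ,
  \qquad
  E_{\mathrm{QM}}(a,b) = -\cos\theta_{ab}.
\]
```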

A further problem is that it is unclear why Reichenbach's fork asymmetry should hold within the physical framework of classical statistical mechanics. Within this framework, a system possesses a microstate that evolves deterministically according to Newton's laws of motion. An "event" A is just a coarse-grained characterization of the state of the system at a particular time, consistent with many different microstates. A probability distribution is defined over the possible states of the system. Suppose that the events A and B are correlated according to this probability measure, and that there is an earlier event C that screens off A from B. It is possible to take the image of C under the deterministic dynamics of the system; that is, one can evolve each microstate in C to some point in time after the occurrence of A and B and collect the resulting set of microstates into a new event C′. By construction, C′ will stand in the same probability relations to A and B that C did. Hence, C′ will be a later event that screens off A from B, and ABCC′ will form a closed fork. Because this procedure is fully general, it is not clear how there can be forks open to the future at all. One possible reply to this worry is that in such a closed fork, the later screener-off C′ will just be a heterogeneous collection of microstates, and hence will not qualify as an "event" in the relevant sense. This reply raises two new questions: first, which sets of microstates constitute genuine events? And second, why should one expect that only earlier screeners-off will be genuine events?

Further Causal Anomalies

There are a number of further respects in which the world described by fundamental physics seems not to be one ruled by relations of cause and effect. It is well known that certain quantum-mechanical phenomena such as radioactive decay appear to be indeterministic. For example, even a complete description of the present state of a carbon-14 atom cannot allow one to predict whether or not it will decay during a certain period of time, but will instead yield only a probability that decay will occur. If the atom does eventually decay, can anything be said to cause the decay event? This kind of indeterminism provides part of the motivation for attempts to analyze causation in terms of probabilities. But even probabilistic theories of causation have difficulties when indeterminism is coupled with the sorts of distant correlations described in the previous section.
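
For concreteness (the figures are standard, not given in the article), quantum mechanics assigns the atom only a probability of decaying, exponential in the elapsed time, with the half-life of carbon-14 being roughly 5,730 years.

```latex
% Probability that a carbon-14 atom decays within time t; nothing in the
% atom's present state determines whether it actually will.
\[
  P(\text{decay within } t) = 1 - e^{-\lambda t},
  \qquad
  \lambda = \frac{\ln 2}{t_{1/2}}, \quad t_{1/2} \approx 5730 \text{ years}.
\]
```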

Moreover, even classical Newtonian physics admits indeterminism. For example, John Norton, in "Causation as Folk Science," describes a system consisting of a point mass sitting at the apex of a bell-shaped dome. Newton's laws of motion permit the point mass to rest there indefinitely, but they also allow it to begin sliding down the side of the dome in an arbitrary direction after an arbitrary finite time. No force is necessary to dislodge the mass: the sudden motion of the mass down the side of the dome is fully consistent with the constraint that at every instant, the force acting on the mass (due to the pull of gravity, and the reactive push of the dome's wall) is proportional to its acceleration. Such a motion thus appears to be entirely uncaused.
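
Norton's construction can be stated compactly (with units chosen so that the physical constants drop out): letting r be the distance from the apex measured along the surface of the dome, the equation of motion admits, besides the solution in which the mass rests at the apex forever, a family of solutions in which it spontaneously departs at an arbitrary time.

```latex
% Norton's dome: the net force along the surface gives the equation of motion
\[
  \frac{d^{2}r}{dt^{2}} = \sqrt{r}.
\]
% Besides r(t) = 0 for all t, there is, for every time T, the solution
\[
  r(t) =
  \begin{cases}
    0, & t \le T,\\[4pt]
    \tfrac{1}{144}\,(t-T)^{4}, & t \ge T,
  \end{cases}
\]
% in which the mass slides off at the arbitrary, undetermined time T.
```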

Einstein's general theory of relativity also gives rise to causal anomalies. For example, the Austrian-American mathematician Kurt Gödel showed that Einstein's field equations permitted solutions in which there were closed causal curves. Thus it may be possible for a billiard ball to get knocked, continue rolling along its new trajectory, and then eventually bump into its earlier self, knocking it into that new trajectory in the first place. Such a scenario appears to be at odds with people's ordinary conception of causation as an asymmetric relation, for the collision between the older and younger billiard ball causes the trajectory of the younger ball, which in turn causes that collision.

Causal Inference

One of Russell's targets in "On the Notion of Cause" was the so-called "law of causality"; indeed, it is this law, rather than the "notion of cause" itself, whose utility is compared to that of the British monarchy. Russell cites a formulation of this principle from the nineteenth-century British philosopher John Stuart Mill: "The Law of Causation, the recognition of which is the main pillar of inductive science, is but the familiar truth, that invariability of succession is found by observation to obtain between every fact in nature and some other fact which has preceded it." (Mill 1856, p. 359.)

According to Mill, science discovers causal relationships by discovering invariable regularities in nature, and the success of science presupposes the pervasiveness of such regularities. Russell was certainly right to challenge the importance of this law to science, not because science is not in the business of discovering causal relationships, but because causal inference in science does not rest upon the discovery of perfect regularities.

Causal inference presents a prima facie difficulty, first articulated by the Scottish philosopher David Hume in 1739. Suppose that one billiard ball collides with a second, causing it to move. One can observe the motion of the first billiard ball; and one can observe the motion of the second billiard ball; but one cannot observe the causation that connects the two together. How, then, is a person to acquire knowledge of causal relationships?

Traditionally there have been two main lines of response to this problem. One line that has already been mentioned is to reject the notion of causation on the grounds that it is inaccessible to empirical investigation. The second line, adopted in different ways by Hume, Mill, and a number of twentieth and twenty-first century philosophers, is to try to spell out systematic connections between causation and observable phenomena such as empirical regularities in order to explain how the former can be inferred from the latter. The "law of causation" championed by Mill and attacked by Russell stems from this second line of response to the problem. (A third possibility, defended in the early part of the twentieth century by the French-American philosopher C. J. Ducasse, and in the middle of the twentieth century by the Belgian psychologist André Michotte, is to reject the claim that causation is not subject to direct perception. Even if this is possible in special cases such as billiard ball collisions, however, this hardly seems to be an adequate explanation for causal knowledge generally.) This problem concerning the empirical accessibility of causation has been a driving force behind attempts to banish causation, and also behind attempts to provide causation with a sound philosophical analysis.

In fact, however, causal inference is neither impossible nor a matter of reading causal relations off universal regularities or correlations. Causal inference, like other forms of scientific inference, is broadly "hypothetico-deductive" in character. A causal hypothesis is formulated, and in conjunction with various background assumptions (often involving causal relationships themselves), it is used to derive predictions about what types of correlations will be observed. These predictions are then compared with observations. In this way, causal hypotheses may be subjected to empirical test without the need for a direct reduction of causal claims to claims about regularities and the like.

Experimentation

The most reliable causal knowledge comes not from passive observation, but from controlled experimentation. In the medical sciences, the experiments often take the form of randomized clinical trials. Consider the claim that a particular drug causes lowered blood pressure. How might one test this claim? One possibility would be to make the drug available on the open market and observe hypertension patients who choose to take the drug and those who do not. There is a problem with this methodology. Suppose that the drug is expensive; one might expect that patients who buy the drug will be wealthier on average than those who do not. Wealthier patients might enjoy any number of other benefits, such as access to better healthcare generally, better diets, and so on, that influence whether or not they experience a reduction in hypertension. If one finds that patients who take the drug do in fact experience a greater reduction in blood pressure than those who do not, one still cannot know whether this reduction is due to the drug or to one of the other advantages associated with wealth. In a randomized trial, it is determined randomly which patients will receive the drug and which will be given a placebo instead. Randomization helps to ensure that treatment is not correlated with any other causes that might influence recovery.
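
Here is a toy simulation of the problem, with entirely invented numbers: in the scenario below the drug has no effect at all, yet the observational comparison suggests a large benefit, while randomized assignment reveals that there is none.

```python
# A toy simulation of confounding by wealth: wealth influences both who takes
# the drug and the blood-pressure outcome; the drug itself does nothing.
import random
random.seed(0)

def outcome(takes_drug, wealthy):
    """Blood-pressure reduction; here the drug itself has NO effect."""
    return 5.0 * wealthy + random.gauss(0, 1)       # only wealth matters

def observational_study(n=100_000):
    records = []
    for _ in range(n):
        wealthy = random.random() < 0.5
        takes_drug = wealthy and random.random() < 0.8   # mostly the wealthy buy it
        records.append((takes_drug, outcome(takes_drug, wealthy)))
    return records

def randomized_trial(n=100_000):
    records = []
    for _ in range(n):
        wealthy = random.random() < 0.5
        takes_drug = random.random() < 0.5               # assignment by coin flip
        records.append((takes_drug, outcome(takes_drug, wealthy)))
    return records

def mean_difference(data):
    treated = [y for d, y in data if d]
    control = [y for d, y in data if not d]
    return sum(treated) / len(treated) - sum(control) / len(control)

print(mean_difference(observational_study()))  # large apparent "benefit"
print(mean_difference(randomized_trial()))     # near zero: bias removed
```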

This example helps to show the importance of the distinction between genuine causal relationships, on the one hand, and mere regularities or correlations on the other. Suppose that the drug is available only to wealthy patients, and that patients who take the drug fare better, on average, than those who do not. If this correlation is due to the wealth of the patients who use the drug, rather than to any effect of the drug itself on hypertension, then one would not expect the correlation to persist under various policy interventions. For example, if the drug were to be covered by insurance, so that less wealthy patients could also afford to take the drug, then the correlation between use of the drug and lowered hypertension would disappear. As the philosopher Nancy Cartwright puts it in her paper "Causal Laws and Effective Strategies" (1983), causal relationships support "effective strategies," while mere correlations or regularities do not. It is for this reason, Cartwright argues, contrary to the opinion of Russell, that the notion of cause cannot be dispensed with. It is also for this reason that one often finds the most self-conscious attention to the specific concerns of causal inference in those branches of science that have a practical dimension, such as medicine and agronomy.

In many areas of science, randomized trials are not feasible. This may be due to the inability to produce the putative cause at will, or it may be due to the lack of any analog of a control group that receives placebos. Nonetheless, in the experimental setting, it is often possible to isolate the influence of the cause under investigation by preventing other causes from operating. For example, an experiment might be conducted within a metallic container to eliminate external magnetic influences; or the experimental apparatus may be set afloat in a pool of mercury to prevent vibrations from being transmitted through the floor of the laboratory (as was done in the famous Michelson-Morley experiment of 1887, which failed to detect any effect of the earth's motion on the speed at which light traveled). Sometimes the experimental preparations are more mundane, such as thoroughly dusting the apparatus to eliminate the effects of stray dust particles, or even removing pigeons found nesting in the apparatus (as was required by Arno Penzias and Robert Wilson, who discovered the cosmic microwave background in 1965).

Causal Models

In some fields, such as macroeconomics, epidemiology, and sociology, experimental manipulation is simply not feasible, and causal relationships must be inferred from observed correlations. Beginning around 1990, there has been an explosion of interest in developing causal modeling techniques to facilitate such nonexperimental causal inferences. Two important works that have garnered a substantial amount of attention from philosophers are Causation, Prediction, and Search (2000), by the philosophers Peter Spirtes, Clark Glymour, and Richard Scheines, and Causality: Models, Reasoning, and Inference (2000), by the computer scientist Judea Pearl. Both frameworks employ graphs to represent causal relationships among sets of variables. The variables in a set V form the nodes of a graph, and certain pairs of variables are connected by edges in the graph. In a directed graph, the edges take the form of arrows, which point from one variable into another. If a graph over the variable set V contains an arrow from the variable X to the variable Y, that indicates that X is a direct cause of Y (also called a parent of Y): the value of X has an effect on the value of Y that is not mediated by any other variable in the set V.
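
As a minimal sketch (the variables are hypothetical, echoing the drug example above), such a directed graph can be represented simply as a mapping from each variable to its set of parents, that is, its direct causes.

```python
# A directed graph over the variable set V, encoded as variable -> parents.
parents = {
    "Wealth":        set(),
    "TakesDrug":     {"Wealth"},              # Wealth -> TakesDrug
    "BloodPressure": {"TakesDrug", "Wealth"}  # TakesDrug -> BP, Wealth -> BP
}

def children(graph, var):
    """All variables into which `var` sends an arrow (its direct effects)."""
    return {v for v, ps in graph.items() if var in ps}

print(children(parents, "Wealth"))   # {'TakesDrug', 'BloodPressure'}
```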

The causal structure represented by a directed graph is connected to a probability distribution over the values of the variables by the causal Markov condition. This condition states that, conditional upon the values of its direct causes, the values of a variable are probabilistically independent of the values of all other variables, except for its effects. In other words, a variable's parents screen off that variable from all other variables, except for its effects. (The causal Markov condition is closely related to Reichenbach's common cause principle, discussed above.)
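
For an acyclic graph, the causal Markov condition is equivalent to a factorization of the joint distribution according to the graph; this formulation is standard in the causal modeling literature, though not spelled out in the article. Here pa(X_i) denotes the set of parents (direct causes) of X_i.

```latex
% Markov factorization of the joint distribution over the variable set V
% relative to a directed acyclic graph:
\[
  P(X_{1}, \dots, X_{n}) \;=\; \prod_{i=1}^{n} P\bigl(X_{i} \mid \mathrm{pa}(X_{i})\bigr).
\]
```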

With the help of the causal Markov condition, as well as other conditions such as the minimality and the faithfulness conditions, a graph representing causal relationships among a set of variables will serve as a model that makes predictions about probabilistic relationships among the variables. In particular, it predicts that certain variables will be dependent or independent of others, either unconditionally, or conditional upon the values of other variables. These predictions can then be tested using normal statistical means.

The most obvious use of these methods is to test whether a postulated set of causal relationships among the variables in the set V is consistent with the statistical data about the values of those variables. But there are other types of problems where these methods can be applied. Even if one does not begin by hypothesizing a specific causal model, it is possible to determine which sets of causal relations among a variable set are consistent with the statistical data. Typically, the data will not single out one causal model, but will only pick out an equivalence class of statistically indistinguishable models. In this case, background knowledge may help to narrow the set of plausible models. In a different sort of problem, one begins with a qualitative causal model and uses it to make quantitative predictions about the effects of interventions that have not yet been performed.

It is important to note that the causal Markov condition is not an a priori constraint on the relationship between causal structure and probability. It can fail, for instance, if a variable set V omits a variable that is a common cause of two variables included in V. The causal Markov condition is at best an empirical assumption that holds for a wide variety of causal structures, and hence any application of techniques based on the causal Markov condition to infer causal relationships from probabilistic data carries substantive empirical presuppositions. A number of critics have charged that these presuppositions severely limit the utility of the new causal modeling techniques.

Conclusions

Contrary to Russell's claim, causal notions are as pervasive in science as they are in philosophy and everyday life. New scientific techniques continue to be developed for the discovery of causal relationships. Nonetheless, the world as it is described by the deepest physical principles bears little resemblance to a world that is regimented by asymmetrical causal relationships. Thus there remain a number of deep puzzles about how causal relationships can emerge from physical laws that themselves make no mention of causality.

See also Causation, Metaphysical Issues; Probability and Chance.

Bibliography

Albert, David. Time and Chance. Cambridge, MA: Harvard University Press, 2000.

Arntzenius, Frank. "Physics and Common Causes." Synthese 82 (1990): 77–96.

Bell, John. "On the Einstein-Podolsky-Rosen Paradox." Physics 1 (1964): 195–200.

Cartwright, Nancy. How the Laws of Physics Lie. Oxford: Clarendon Press, 1983.

Corry, Richard, and Huw Price, eds. Causation and the Constitution of Reality. Oxford: Oxford University Press, 2005.

Dowe, Phil. Physical Causation. Cambridge, U.K.: Cambridge University Press, 2000.

Ducasse, Curt J. "On the Nature and Observability of the Causal Relation." Journal of Philosophy 23 (1926): 57–68.

Duhem, Pierre. The Aim and Structure of Physical Theory. Princeton, NJ: Princeton University Press, 1991.

Earman, John. A Primer on Determinism. Dordrecht, Netherlands: Reidel, 1986.

Eells, Ellery. Probabilistic Causality. Cambridge, U.K.: Cambridge University Press, 1991.

Einstein, Albert, Boris Podolsky, and Nathan Rosen. "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?" Physical Review 47 (1935): 777–780.

Field, Hartry. "Causation in a Physical World." In Oxford Handbook of Metaphysics, edited by Michael Loux and Dean Zimmerman. Oxford: Oxford University Press, 2003.

Gasking, Douglas. "Causation and Recipes." Mind 64 (1955): 479–487.

Gopnik, Alison, and Laura Schulz. Causal Learning: Psychology, Philosophy and Computation. Oxford: Oxford University Press, 2005.

Hempel, Carl Gustav. "Aspects of Scientific Explanation." In Aspects of Scientific Explanation and Other Essays in the Philosophy of Science. New York: Free Press, 1965.

Hume, David. An Enquiry concerning Human Understanding. Oxford: Oxford University Press, 1999.

Hume, David. A Treatise of Human Nature, 2nd ed. Oxford: Clarendon Press, 1975.

Kant, Immanuel. Critique of Pure Reason. Translated by Paul Guyer and Allen Wood. Cambridge, U.K.: Cambridge University Press, 1998.

Lewis, David. "Causation," with Postscripts. In Philosophical Papers, Volume II. Oxford: Oxford University Press, 1986.

Mackie, John. The Cement of the Universe. Oxford: Clarendon Press, 1974.

McKim, Vaughn, and Stephen Turner. Causality in Crisis? Statistical Methods and the Search for Causal Knowledge in the Social Sciences. Notre Dame, IN: University of Notre Dame Press, 1997.

Michotte, André. The Perception of Causality. New York: Basic Books, 1963.

Mill, John Stuart. A System of Logic: Ratiocinative and Inductive. 4th ed. London: Parker and Son, 1856.

Moriya, Yosuke, Hitoshi Kawaji, Takeo Tojo, and Tooru Atake. "Specific-Heat Anomaly Caused by Ferroelectric Nanoregions in Pb(Mg1/3Nb2/3)O3 and Pb(Mg1/3Ta2/3)O3 Relaxors." Physical Review Letters 90 (2003): 205901.

Norton, John. "Causation as Folk Science." Philosopher's Imprint 3 (4) (2003). Available from www.philosophersimprint.org/003004/.

Pearl, Judea. Causality: Models, Reasoning, and Inference. Cambridge, U.K.: Cambridge University Press, 2000.

Pearson, Karl. The Grammar of Science. 3rd ed. reprint. New York: Meridian, 1957.

Price, Huw. Time's Arrow and Archimedes' Point. Oxford: Oxford University Press, 1996.

Reichenbach, Hans. The Direction of Time. Berkeley: University of California Press, 1956.

Russell, Bertrand. "On the Notion of Cause." Proceedings of the Aristotelian Society 13 (1913): 1–26.

Salmon, Wesley. Scientific Explanation and the Causal Structure of the World. Princeton, NJ: Princeton University Press, 1984.

Spirtes, Peter, Clark Glymour, and Richard Scheines. Causation, Prediction, and Search, 2nd ed. Cambridge, MA: MIT Press, 2000.

Suppes, Patrick. A Probabilistic Theory of Causality. Amsterdam: North-Holland, 1970.

van Fraassen, Bas. "The Charybdis of Realism: Epistemological Implications of Bell's Inequality." Synthese 52 (1982): 25–38.

von Wright, Georg. Causality and Determinism. New York: Columbia University Press, 1975.

Woodward, James. Making Things Happen: A Theory of Causal Explanation. Oxford: Oxford University Press, 2003.

Christopher R. Hitchcock (2005)
