Mill's Methods of Induction

John Stuart Mill, in his System of Logic (Book III, Chapters 8–10), set forth and discussed five methods of experimental inquiry, calling them the method of agreement, the method of difference, the joint method of agreement and difference, the method of residues, and the method of concomitant variation. Mill maintained that these are the methods by which we both discover and demonstrate causal relationships, and that they are of fundamental importance in scientific investigation. Mill called these methods "eliminative methods of induction." In so doing, he was drawing an analogy with the elimination of terms in an algebraic equation, an analogy that is rather forced except with respect to the various methods that are classed under the heading of method of difference. As will be demonstrated, it is perhaps best to use the term "eliminative methods" with reference to the elimination of rival candidates for the role of cause, which characterizes all these methods.

Illustrations of the Methods

The general character of Mill's methods of experimental inquiry may be illustrated by examples of the two simplest ones, the methods of agreement and of difference. Mill's canon for the method of agreement is this: "If two or more instances of the phenomenon under investigation have only one circumstance in common, the circumstance in which alone all the instances agree is the cause (or effect) of the given phenomenon."

For example, if a number of people who are suffering from a certain disease have all gone for a considerable time without fresh fruits or vegetables, but have in other respects had quite different diets, have lived in different conditions, belong to different races, and so on, so that the lack of fresh fruits and vegetables is the only feature common to all of them, then we can conclude that the lack of fresh fruits and vegetables is the cause of this particular disease.

Mill's canon for the method of difference is this: "If an instance in which the phenomenon under investigation occurs, and an instance in which it does not occur, have every circumstance in common save one, that one occurring in the former; the circumstance in which alone the two instances differ, is the effect, or the cause, or an indispensable part of the cause, of the phenomenon."

For example, if two exactly similar pieces of iron are heated in a charcoal-burning furnace and hammered into shape in exactly similar ways, except that the first is dipped into water after the final heating while the second is not, and the first is found to be harder than the second, then the dipping of iron into water while it is hot is the cause of such extra hardness, or at least an essential part of the cause, for the hammering, the charcoal fire, and so on may also be needed. For all this experiment shows, the dipping alone might not produce such extra hardness.

The method of agreement, then, picks out as the cause the one common feature in a number of otherwise different cases where the effect occurs; the method of difference picks out as the cause the one respect in which a case where the effect occurs differs from an otherwise exactly similar case where the effect does not occur. Both are intended to be methods of ampliative induction, that is, methods by which we can reason from a limited number of observed instances to a general causal relationship: The intended conclusion is that a certain disease is always produced by a lack of fresh fruits and vegetables, or that dipping iron into water while it is hot always hardens it, if it has been heated and hammered in a particular way. And the other three methods are intended to work in a similar manner.

These methods have been criticized on two main counts: First, it is alleged that they do not establish the conclusions intended, so that they are not methods of proof or conclusive demonstration; and second, that they are not useful as methods of discovery. Such criticisms have been used to support the general observation that these methods play no part, or only a very minor part, in the investigation of nature, and that scientific method requires a radically different description.

In order to estimate the force of such criticisms, and to determine the real value of the eliminative methods, Mill's formulation need not be discussed in detail. Instead, one need only determine what would be valid demonstrative methods corresponding to Mill's classes, and then consider whether such methods, or any approximations of them, have a place in either scientific or commonsense inquiry.

Methods of Agreement and of Difference

To avoid unnecessary complications, let us assume that the conclusion reached by any application of the method of agreement or of difference is to have the form "Such-and-such is a cause of such-and-such kind of event or phenomenon." For a formal study of these methods and the joint method we could regard a cause as a necessary and sufficient condition of the effect (or, in some cases, as a necessary condition only, or as a sufficient condition only), where to say that X is a necessary condition for Y is just to say that wherever Y is present, X is present, or briefly that all Y are X; and to say that X is a sufficient condition for Y is just to say that wherever X is present, Y is present, or briefly that all X are Y.
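
The logical content of these definitions can be stated as simple checks over a body of observed instances. The sketch below (Python, with illustrative names and invented data that are not from the text) treats each instance within a field F as a record of which factors are present, together with whether the phenomenon occurred, and tests the "all Y are X" and "all X are Y" clauses against that record; the inductive step, of course, is the further claim that the pattern holds beyond the observed instances.

```python
# A minimal sketch (not Mill's own formulation): instances within a field F are
# records of which factors are present, plus whether the phenomenon P occurred.
# "X is necessary for P in F"  ~ every instance with P also has X (all FP are X).
# "X is sufficient for P in F" ~ every instance with X also has P (all FX are P).
# These checks run only over the observed instances.

from typing import Dict, List

Instance = Dict[str, bool]   # factor or phenomenon name -> present?

def necessary_in(instances: List[Instance], phenomenon: str, factor: str) -> bool:
    return all(inst[factor] for inst in instances if inst[phenomenon])

def sufficient_in(instances: List[Instance], phenomenon: str, factor: str) -> bool:
    return all(inst[phenomenon] for inst in instances if inst[factor])

observed = [
    {"A": True,  "B": False, "P": True},
    {"A": True,  "B": True,  "P": True},
    {"A": False, "B": True,  "P": False},
]
print(necessary_in(observed, "P", "A"))   # True: P never occurs without A here
print(sufficient_in(observed, "P", "A"))  # True: A never occurs without P here
```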

In general we shall be looking for a condition that is both necessary and sufficient for the phenomenon, but there are variants of the methods in which we look for a condition that is merely necessary or merely sufficient. In practice, however, we are concerned with conditions that are not absolutely necessary or sufficient, but that are rather necessary and/or sufficient in relation to some field, that is, some set of background conditions, which may be specified more or less exactly. We are concerned, for example, not with the cause of a certain disease in general, but with what causes it in human beings living on the earth, breathing air, and so forth. Again, we are concerned not with the cause of hardness in general, but with that of a greater-than-normal hardness in iron in ordinary circumstances and at ordinary temperatures. The field in relation to which we look for a cause of a phenomenon must be such that the phenomenon sometimes occurs in that field and sometimes does not. We may assume that this field is constituted by the presence of certain qualities or at least of some general descriptive features, not by a specific location.

The observation that supports the conclusion is an observation of one or more instances in each of which various features are present or absent. An instance may be one in which the phenomenon in question occurs, which we may call a positive instance, or one in which the phenomenon does not occur, which we may call a negative instance.

To reason validly, however, from any such observation to a general causal conclusion, we require an additional general premise, an assumption. We must assume that there is some condition which, in relation to the field, is necessary and sufficient (or which is necessary, or which is sufficient) for the phenomenon, and also that this condition is to be found within a range of conditions that is restricted in some way. For these methods fall within the general class of eliminative forms of reasoning, that is, arguments in which one possibility is confirmed or established by the elimination of some or all of its rivals. The assumption will state that there is a cause to be found and will limit the range of candidates for the role of cause; the task of the observation will be to rule out enough of the candidates initially admitted to allow some positive conclusion.

possible causes

It follows from the above that the assumption must indicate some limited (though not necessarily finite) set of what we may call possible causes. These are the factors (Mill calls them circumstances or antecedents) that, it is initially assumed, may be causally relevant to the phenomenon. Any possible cause, any factor that may be causally relevant in relation to the field in question, must, like the phenomenon itself, be something that sometimes occurs and sometimes does not occur within that field.

But are we to assume that a possible cause acts singly, if it acts at all? If the possible causes are A, B, C, etc., the phenomenon is P, and the field is F, are we to assume that the cause of P in F will be either A by itself or B by itself, and so on? Or are we to allow that it might be a conjunction, say AC, so that P occurs in F when and only when A and C are both present? Are we to allow that the necessary and sufficient condition might be a disjunction, say (B or D), so that P occurs in F whenever B occurs, and whenever D occurs, but only when one or other (or both) of these occurs? Again, are we to allow that what we have taken as possible causes may include counteracting causes, so that the actual cause of P in F may be, say, the absence of C (that is, the negation not-C, or C̄), or perhaps BC̄, so that P occurs in F when and only when B is present and C is absent at the same time?

There are in fact valid methods with assumptions of different sorts, from the most rigorous kind, which requires that the actual cause should be just one of the possible causes by itself, through those which progressively admit negations, conjunctions, and disjunctions of possible causes and combinations of these, to the least rigorous kind of assumption, which says merely that the actual cause is built up out of these possible causes in some way.

classification of these methods

There will be, then, not one method of agreement, one method of difference, and one joint method, but a series of variants of each. A complete survey could be made of all possible methods of these types, numbered as follows: A number from 1 to 8 before a decimal point will indicate the kind of assumption. Thus, it is assumed that there is an actual cause that is

  1. one of the possible causes;
  2. one of the possible causes or the negation of a possible cause;
  3. a possible cause or a conjunction of possible causes;
  4. a possible cause or a disjunction of possible causes;
  5. a possible cause or the negation of a possible cause, or a conjunction each of whose members is a possible cause or the negation of a possible cause;
  6. a possible cause, or the negation of a possible cause, or a disjunction each of whose members is a possible cause or the negation of a possible cause;
  7. a possible cause, or a conjunction of possible causes, or a disjunction each of whose members is a possible cause or a conjunction of possible causes;
  8. a possible cause, or the negation of a possible cause, or a conjunction each of whose members is a possible cause or the negation of one; or a disjunction each of whose members is a possible cause or the negation of one, or a conjunction each of whose members is a possible cause or a negation of one.

The first figure after the decimal point will indicate the sort of observation, as follows:

  1. a variant of the method of agreement;
  2. a variant of the method of difference;
  3. a variant of the joint method;
  4. a new but related method.

The second figure after the decimal point will mark further differences where necessary, but this figure will have no constant significance.

The complete survey cannot be given here, but a few selected variants will be considered, numbered in the manner set forth above.

positive method of agreement

Let us begin with an assumption of the first kind, that there is a necessary and sufficient condition X for P in F, that is, that for some X all FP are X and all FX are P, and X is identical with one of the possible causes A, B, C, D, E. (It may be noted that a condition thus specified may sometimes not be what we would ordinarily regard as the cause of the phenomenon: We might rather say that it contains the real cause. However, in our present account we shall call such a condition the cause; it is explained below how the cause of a phenomenon may be progressively located with greater precision.)

We obtain a variant of the method of agreement (1.12) by combining with this assumption the following observation: A set of one or more positive instances such that one possible cause, say A, is present in each instance, but for every other possible cause there is an instance from which that cause is absent. This yields the conclusion that A is necessary and sufficient for P in F.

For example, the observation might be this:

      A   B   C   D   E
I1    p   a   p   ·   a
I2    p   p   a   a   ·

where p indicates that the possible cause is present, a that it is absent, and a dot that it may be either present or absent without affecting the result. I1 and I2 are positive instances: I1 shows that neither B nor E is necessary for P in F, I2 that neither C nor D is necessary, and hence, given the assumption, it follows that A is necessary and sufficient.

Since this reasoning eliminates candidates solely on the ground that they are not necessary, there is another variant (1.11) that assumes only that there is some necessary condition for P in F identical with one of the possible causes, and (with the same observation) concludes that A is a necessary condition for P in F.
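
As a concrete illustration of this eliminative reading of 1.11 and 1.12, the following sketch (illustrative code, not part of Mill's or Mackie's text) eliminates any listed candidate that is absent from some positive instance and reports the survivor when exactly one remains; the dotted entries of the table above are arbitrarily filled in as absent.

```python
# A sketch of variants 1.11/1.12 under their assumption: exactly one of the
# listed possible causes is the (necessary, or necessary-and-sufficient)
# condition. A candidate is eliminated as not necessary if it is absent from
# some positive instance; if exactly one candidate survives, the assumption
# lets us name it as the cause.

from typing import Dict, List, Optional, Set

def agreement_positive(positive_instances: List[Dict[str, bool]],
                       possible_causes: Set[str]) -> Optional[str]:
    survivors = {c for c in possible_causes
                 if all(inst.get(c, False) for inst in positive_instances)}
    return survivors.pop() if len(survivors) == 1 else None  # None: observation too weak

# The I1, I2 example above ('.' entries chosen arbitrarily as absent):
I1 = {"A": True, "B": False, "C": True,  "D": False, "E": False}
I2 = {"A": True, "B": True,  "C": False, "D": False, "E": False}
print(agreement_positive([I1, I2], {"A", "B", "C", "D", "E"}))  # -> 'A'
```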

negative method of agreement

Besides the positive method of agreement, in which candidates are eliminated as not being necessary because they are absent from positive instances, there are corresponding variants of a negative method of agreement in which candidates are eliminated as not being sufficient because they are present in negative instances. This requires the following observation: A set of one or more negative instances such that one possible cause, say A, is absent from each instance, but for every other possible cause there is an instance in which it is present. For example:

      A   B   C   D   E
N1    a   p   ·   ·   ·
N2    a   ·   p   p   ·
N3    a   ·   ·   ·   p

If the assumption was that one of the possible causes is sufficient for P in F, this observation would show (1.13) that A is sufficient, while if the assumption was that one of the possible causes is both necessary and sufficient, this observation would show (1.14) that A is necessary and sufficient.
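
A companion sketch for 1.13 and 1.14 (again merely illustrative) eliminates any candidate that appears in a negative instance, since such a candidate cannot be sufficient; with the N1–N3 observation above, only A survives.

```python
# A sketch of variants 1.13/1.14: working from negative instances, a candidate
# is eliminated as not sufficient if it is present in an instance where the
# phenomenon fails to occur; the survivor is then named by the assumption.

def agreement_negative(negative_instances, possible_causes):
    survivors = {c for c in possible_causes
                 if not any(inst.get(c, False) for inst in negative_instances)}
    return survivors.pop() if len(survivors) == 1 else None

# The N1-N3 example above ('.' entries again taken as absent):
N1 = {"A": False, "B": True,  "C": False, "D": False, "E": False}
N2 = {"A": False, "B": False, "C": True,  "D": True,  "E": False}
N3 = {"A": False, "B": False, "C": False, "D": False, "E": True}
print(agreement_negative([N1, N2, N3], {"A", "B", "C", "D", "E"}))  # -> 'A'
```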

method of difference

For the simplest variant of the method of difference (1.2) we need this observation: a positive instance I1 and a negative instance N1 such that of the possible causes present in I1, one, say A, is absent from N1, but the rest are present in N1. For example:

      A   B   C   D   E
I1    p   p   p   a   ·
N1    a   p   p   ·   p

Here D is eliminated because it is absent from I1, and hence not necessary, and B, C, and E are eliminated because they are present in N1 and hence not sufficient. Hence, given the assumption that one of the possible causes is both necessary and sufficient for P in F, it follows that A is so. (Note that since it would not matter if, say, E were absent from I1, the presence of the actual cause in I1 need not be the only difference between the instances.) We may remark here that the method of difference, unlike some variants of the method of agreement, requires the assumption that there is some condition that is both necessary and sufficient for P. It is true, as we shall see later with variants 4.2 and 8.2, that the "cause" detected by this method is often not itself a necessary condition, or even a sufficient one; but the assumption needed is that something is both necessary and sufficient.
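
The same elimination logic can be written down for variant 1.2: a candidate survives only if it is present in I1 (so not eliminated as unnecessary) and absent from N1 (so not eliminated as insufficient). The sketch below is illustrative only; the dotted entries of the example are again filled in arbitrarily.

```python
# A sketch of variant 1.2: with one positive instance I1 and one negative
# instance N1, a candidate is eliminated if it is absent from I1 (not
# necessary) or present in N1 (not sufficient); the assumption that exactly
# one candidate is necessary-and-sufficient then points to the survivor.

def difference(I1, N1, possible_causes):
    survivors = {c for c in possible_causes
                 if I1.get(c, False) and not N1.get(c, False)}
    return survivors.pop() if len(survivors) == 1 else None

I1 = {"A": True,  "B": True, "C": True, "D": False, "E": False}
N1 = {"A": False, "B": True, "C": True, "D": False, "E": True}
print(difference(I1, N1, {"A", "B", "C", "D", "E"}))  # -> 'A'
```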

joint method

The joint method may be interpreted as an indirect method of difference, that is, the job done by I1 above may be shared among several positive instances, and the job done by N1 among several negative instances. That is, we need (for 1.3) the following observation: a set Si of one or more positive instances and a set Sn of one or more negative instances such that one of the possible causes, say A, is present throughout Si and absent throughout Sn, but each of the other possible causes is either absent from at least one positive instance or present in at least one negative instance. Given that one of the possible causes is both necessary and sufficient, this yields the conclusion that A is so.
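
Read this way, the joint method is the same elimination spread over two sets of instances, as in the following illustrative sketch (names and data are invented): a candidate survives only if it is present throughout the positive set and absent throughout the negative set.

```python
# A sketch of variant 1.3 (the joint method read as an indirect method of
# difference): a candidate is eliminated either as not necessary (absent from
# some positive instance) or as not sufficient (present in some negative one).

def joint(positives, negatives, possible_causes):
    survivors = {c for c in possible_causes
                 if all(i.get(c, False) for i in positives)
                 and not any(n.get(c, False) for n in negatives)}
    return survivors.pop() if len(survivors) == 1 else None

positives = [{"A": True, "B": True,  "C": False},
             {"A": True, "B": False, "C": True}]
negatives = [{"A": False, "B": True, "C": True}]
print(joint(positives, negatives, {"A", "B", "C"}))  # -> 'A'
```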

simple variants of these methods

With an assumption of the second kind (that the requisite condition is either a possible cause or a negation of a possible cause) we need stronger observations. Thus, for variants of the positive method of agreement (2.11 and 2.12) we need this: two or more positive instances such that one possible cause (or negation), say A, is present in each instance, but for every other possible cause there is an instance in which it is present and an instance from which it is absent. This is needed to rule out, as candidates for the role of necessary (or both necessary and sufficient) condition, the negations of possible causes as well as the possible causes other than A themselves.

For the corresponding variant of the method of difference (2.2) we need this: a positive instance I1 and a negative instance N1 such that one possible cause (or negation), say A, is present in I1 and absent from N1, but each of the other possible causes is either present in both I1 and N1 or absent from both. For example:

      A   B   C   D   E
I1    p   p   a   a   p
N1    a   p   a   a   p

Since B is present in N1, B is not sufficient for P in F; but since B is present in I1, not-B is not necessary for P in F; thus neither B nor not-B can be both necessary and sufficient. Similarly, C, D, E, and their negations, and also not-A, are ruled out, and thus the necessary and sufficient condition must be A itself. This is the classic difference observation described by Mill, in which the only (possibly relevant) difference between the instances is the presence in I1 of the factor identified as the actual cause; but we need this observation (as opposed to the weaker one of 1.2) only when we allow that the negation of a possible cause may be the actual cause.
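
When negations are admitted as candidates, the elimination must be run over factors and their negations alike, which is why the classic difference observation is needed. The following illustrative sketch represents a candidate as a (factor, negated) pair and checks it against the I1 and N1 of the example.

```python
# A sketch of variant 2.2, where negations of possible causes are also
# admitted as candidates. A candidate (factor or negated factor) survives only
# if it holds in I1 and fails in N1; with the classic difference observation
# the sole survivor is A itself.

def difference_with_negations(I1, N1, possible_causes):
    def holds(inst, cand):
        name, negated = cand
        return (not inst.get(name, False)) if negated else inst.get(name, False)
    candidates = [(c, neg) for c in possible_causes for neg in (False, True)]
    return [cand for cand in candidates
            if holds(I1, cand) and not holds(N1, cand)]

I1 = {"A": True,  "B": True, "C": False, "D": False, "E": True}
N1 = {"A": False, "B": True, "C": False, "D": False, "E": True}
print(difference_with_negations(I1, N1, {"A", "B", "C", "D", "E"}))  # [('A', False)]
```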

The joint method needs, along with this weaker assumption, a similarly strengthened observation: That is, each of the possible causes other than A must be either present in both a positive and a negative instance or absent from both a positive and a negative instance, and then this variant (2.3) still yields the conclusion that A is both necessary and sufficient.

(What Mill and his followers describe as the joint method may be not this indirect method of difference, but rather a double method of agreement, in which a set of positive instances identifies a necessary condition and a set of negative instances identifies a sufficient condition. Such a combination is redundant with an assumption of either of the first two kinds, but not when the assumption is further relaxed.)

more complex variants

We consider next an assumption of the third kind, that the requisite condition is either a possible cause or a conjunction of possible causes. (This latter possibility seems to be at least part of what Mill meant by "an intermixture of effects.") This possibility does not affect the positive method of agreement, since if a conjunction is necessary, each of its conjuncts is necessary, and candidates can therefore be eliminated as before. But since the conjuncts in a necessary and sufficient condition may not severally be sufficient, the negative method of agreement as set forth above will not work. The observation of (1.13 or) 1.14 would now leave it open that, say, BC was the required (sufficient or) necessary and sufficient condition, for if C were absent from N1 and B from N2, then BC as a whole might still be sufficient: It would not be eliminated by either of these instances. This method now (in 3.14) needs a stronger observation, namely, a single negative instance N1 in which one possible cause, say A, is absent, but every other possible cause is present. This will show that no possible cause or conjunction of possible causes that does not contain A is sufficient for P in F. But even this does not show that the requisite condition is A itself, but merely that it is either A itself or a conjunction in which A is a conjunct. We may express this by saying that the cause is (A …), where A must appear in the formula for the actual cause and the dots indicate other conjuncts that may or may not appear.

The corresponding variant (3.2) of the method of difference needs only the observation of 1.2; but it, too, establishes only the less complete conclusion that (A …) is a necessary and sufficient condition of P in F. For while (in the example given for 1.2 above) B, C, D, and E singly are still eliminated as they were in 1.2, and any conjunctions such as BC which, being present in I1, might be necessary, are eliminated because they are also present in N1 and hence not sufficient, a conjunction such as AB, which contains A, is both present in I1 and absent from N1, and might therefore be both necessary and sufficient. Thus this assumption and this observation show only that A is, as Mill put it, "the cause, or an indispensable part of the cause." The full cause is represented by the formula (A …), provided that only possible causes that are present in I1 can replace the dots.

In the corresponding variant of the joint method (3.3), we need a single negative instance instead of the set Sn, for the same reason as in 3.14, and the cause is specified only as (A …).

With an assumption of the fourth kind (that the requisite condition is either a possible cause or a disjunction of possible causes), the negative method of agreement (4.13 and 4.14) works as in 1.13 and 1.14, but the positive method of agreement is now seriously affected. For with the observation given for 1.12 above, the necessary and sufficient condition might be, say, (B or C), for this disjunction is present in both I1 and I2, though neither of its disjuncts is present in both. Thus the observation of 1.12 would leave the result quite undecided. We need (for 4.12) a much stronger observation, that is, a single positive instance in which A is present but all the other possible causes are absent together; but even this now shows only that the cause is (A or …). This assumption (that the cause may be a disjunction of possible causes) allows what Mill called a "plurality of causes," for each of the disjuncts is by itself a "cause" in the sense that it is a sufficient condition; and what we have just noted is the way in which this possibility undermines the use of the method of agreement.

The method of difference, on the other hand (4.2), still needs only the observation of 1.2; this eliminates all possible causes other than A, and all disjunctions that do not contain A, either as being not sufficient because they are present in N1 or as not necessary because they are absent from I1. The only disjunctions not eliminated are those that occur in I1 but not in N1, and these must contain A. Thus this observation, with this assumption, shows that a necessary and sufficient condition is (A or …), that is, either A itself or a disjunction containing A, where the other disjuncts are possible causes absent from N1. This, of course, means that A itself, the factor thus picked out, may be only a sufficient condition for P.

The joint method with this assumption (4.3) needs a single positive instance, but can still use a set of negative instances, and it specifies the cause as (A or …).

As the assumptions are relaxed further, the method of agreement requires stronger and stronger observations. For example, in 6.12, which is a variant of the positive method with an assumption allowing that the necessary and sufficient condition may be a disjunction of possible causes or negations, the observation needed is a set Si of positive instances such that one possible cause, say A, is present in each, but that for every possible combination of the other possible causes and their negations there is an instance in which this combination is present (that is, if there are n other possible causes, we need 2ⁿ different instances). This observation will eliminate every disjunction that does not contain A, and will show that the requisite necessary and sufficient condition is (A or …), and hence that A itself is a sufficient condition for P in F. A corresponding variant of the negative method of agreement (5.14) shows that (A …) is a necessary and sufficient condition, and hence that A itself is necessary; this is a curious reversal of roles, because in the simplest variants the positive method of agreement was used to detect a necessary condition and the negative one a sufficient condition.
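
The role of the 2ⁿ instances can be checked by a small enumeration. In this illustrative sketch (with n = 2 other possible causes and invented names), the required positive instances are generated and a disjunction omitting A, here (B or not-C), turns out to fail in one of them, so it is eliminated as not necessary.

```python
# A sketch of why 6.12 needs 2^n positive instances: with A present throughout
# and the other factors varied through every combination, any disjunction of
# factors/negations that omits A fails in some positive instance and so cannot
# be necessary.

from itertools import product

others = ["B", "C"]                      # n = 2 other possible causes
instances = [dict(zip(others, combo), A=True)
             for combo in product([True, False], repeat=len(others))]  # 2^n = 4

def disjunct_holds(inst, literal):
    name, negated = literal
    return (not inst[name]) if negated else inst[name]

# the disjunction (B or ~C), which omits A, fails in the instance B=False, C=True
disjunction = [("B", False), ("C", True)]
print(all(any(disjunct_holds(i, lit) for lit in disjunction) for i in instances))  # False
```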

In the method of difference, however, the observation of 1.2 (or, where negations are admitted, that of 2.2) continues to yield results, though the conclusions become less complete, that is, the cause is less and less completely specified. For example, in 8.2, where we assume that there is a necessary and sufficient condition for P in F which may be one of the possible causes, or a negation of one, or a conjunction of possible causes or negations, or a disjunction of possible causes or negations or of conjunctions of possible causes or negations (which in effect allows the actual condition to be built up out of the possible causes in any way), the observation of 2.2 establishes the conclusion that the requisite condition is (A … or …), that is to say, it is either A itself, or a conjunction containing A, or a disjunction in which one of the disjuncts is A itself or a conjunction containing A. Since any such disjunct in a necessary and sufficient condition is a sufficient condition, this observation, in which the presence of A in I1 is the only possibly relevant difference between I1 and N1, shows even with the least rigorous kind of assumption that A is at least a necessary part of a sufficient condition for P in F, the sufficient condition being (A …).

The joint method, as an indirect method of difference, ceases to work once we allow both conjunctions and disjunctions; but a double method of agreement comes into its own with this eighth kind of assumption. In 8.12, as in 6.12, if there are n possible causes other than A, the set of 2ⁿ positive instances with A present in each but with the other possible causes present and absent in all possible combinations will show that (A or …) is necessary and sufficient, and hence that A is sufficient. Similarly in 8.14, as in 5.14, the corresponding set of 2ⁿ negative instances will show that (A …) is necessary and sufficient and hence that A is necessary. Putting the two observations together, we could conclude that A is both necessary and sufficient.

A new method, similar in principle, can be stated as follows (8.4): If there are n possible causes in all, and we observe 2ⁿ instances (positive or negative) which cover all possible combinations of possible causes and their negations, then the disjunction of all the conjunctions found in the positive instances is both necessary and sufficient for P in F. For example, if there are only three possible causes, A, B, C, and we have the following observations (in which I1, I2, and I3 are the positive instances and N1 to N5 the negative ones):

      A   B   C
I1    p   p   a
I2    p   a   p
I3    a   p   a
N1    p   p   p
N2    p   a   a
N3    a   p   p
N4    a   a   p
N5    a   a   a

then (ABC̄ or AB̄C or ĀBC̄) is a necessary and sufficient condition for P in F. For if these are the only possibly relevant conditions, each combination of possible causes and negations for which P is present is sufficient for P, and these are the only sufficient conditions for P, since in all the relevantly different circumstances P is absent; but the disjunction of all the sufficient conditions must be both necessary and sufficient, on the assumption that there is some condition that is both necessary and sufficient.
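
Method 8.4 amounts to reading off a disjunctive normal form from the positive rows of such a table. The sketch below does this for the three-cause example above; the "~" notation, the row encoding, and the code itself are illustrative conventions rather than anything in the article.

```python
# A sketch of method 8.4: observe all 2^n combinations of the possible causes,
# and take the disjunction of the conjunctions found in the positive instances
# as the necessary-and-sufficient condition ('~' marks an absent factor).

from itertools import product

causes = ["A", "B", "C"]
# the combinations in which P occurs, matching the three-cause table above
positive_rows = {(True, True, False), (True, False, True), (False, True, False)}

terms = []
for row in product([True, False], repeat=len(causes)):
    if row in positive_rows:
        terms.append(" ".join(c if present else "~" + c
                              for c, present in zip(causes, row)))
print(" or ".join("(" + t + ")" for t in terms))
# -> (A B ~C) or (A ~B C) or (~A B ~C)
```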

many valid methods

We thus find that while we must recognize very different variants of these methods according to the different kinds of assumptions that are used, and while the reasoning that validates the simplest variants fails when it is allowed that various negations and combinations of factors may constitute the actual cause, nevertheless there are valid demonstrative methods which use even the least rigorous form of assumption, that is, which assume only that there is some necessary and sufficient condition for P in F, made up in some way from a certain restricted set of possible causes. But with an assumption of this kind we must be content either to extract (by 8.2) a very incomplete conclusion from the classical difference observation or (by 8.12, 8.14, the combination of these two, or 8.4) to get more complete conclusions only from a large number of instances in which the possible causes are present or absent in systematically varied ways.

an extension of the methods

An important extension of all these methods is the following: Since in every case the argument proceeds by eliminating certain candidates, it makes no difference if what is not eliminated is not a single possible cause but a cluster of possible causes which in our instances are always present together or absent together, the conclusion being just as we now have it, but with a symbol for the cluster replacing A. For example, if in 2.2 we have, say, both A and B present in I1 and both absent from N1, but each of the other possible causes either present in both or absent from both, it follows that the cluster (A, B) is the cause in the sense that the actual cause lies somewhere within this cluster. A similar observation in 8.2 would show that either A, or B, or AB, or (A or B) is an indispensable part of a sufficient condition for P in F.

Method of Residues

The method of residues can be interpreted as a variant of the method of difference in which the negative instance is not observed but constructed on the basis of already known causal laws.

Suppose, for example, that a positive instance I1 has been observed as follows:

      A   B   C   D   E
I1    p   p   a   p   a

Now if we had, to combine with this, a negative instance N1 in which B and D were present and A, C, and E absent, we could infer, according to the kind of assumption made, by 2.2 that A was the cause, or by 8.2 that (A or ) was the cause, and so on. But if previous inductive inquiries have already established laws from which it follows that given ĀBC̄DĒ in the field F, P would not result, there is no need to observe N1; we already know all that N1 could tell us, and so one of the above-mentioned conclusions follows from I1 alone along with the appropriate assumption.

Again, if the effect or phenomenon in which we are interested can be quantitatively measured, we could reason as follows. Suppose that we observe a positive instance, say with the factors as in I1 above, in which there is a quantity x1 of the effect in question, while our previously established laws enable us to calculate that with the factors as in N1 there would be a quantity x2 of this effect; then we can regard the difference (x1 − x2) as the phenomenon P which is present in I1 but absent from N1. With an assumption of kind (1) or (2) or (4) or (6), that is, any assumption that does not allow conjunctive terms in the cause, we could conclude that the cause of P in this instance I1 was A alone, and hence that A is a sufficient condition for P in F. With an assumption of kind (1) or (2) we could indeed infer that A is both necessary and sufficient, but with one of kind (4) or (6) we could conclude only that a necessary and sufficient condition is (A or …).

To make an assumption of any of these four kinds is to assume that the effects of whatever factors are actually relevant are merely additive, and this lets us conclude that the extra factor in I1, namely A, by itself produces in relation to F the extra effect (x1 − x2). But with an assumption of kind (3) or (5) or (7) or (8), which allows conjunctive terms, and hence what Mill calls an "intermixture of effects," we could only infer that the cause of (x1 − x2) in this instance was (A …). With the other factors that were present in both I1 and N1, A was sufficient to produce this differential effect, but it does not follow that A is sufficient for this in relation to F as a whole. (Though Mill does not mention this, such a use of constructed instances along with some observed ones is in principle applicable to all the methods, not only to the method of difference in the way here outlined.)
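
In the quantitative case the reasoning is a subtraction, as in this minimal sketch with invented figures: the laws already established predict how much of the effect the factors shared by I1 and the constructed N1 would produce, and the residue is credited to A, subject to the additivity assumption just described.

```python
# A hedged numerical sketch of the quantitative use of the method of residues
# (illustrative figures only, not from the text): the residue of the measured
# effect over what the constructed N1 would produce is attributed to the extra
# factor A, given the additivity assumption.

observed_total = 9.3          # quantity of the effect measured in I1 (assumed figure)
predicted_without_A = 7.1     # quantity calculated for the constructed N1 (assumed figure)

residue = observed_total - predicted_without_A
print(f"Effect attributed to A (given additivity): {residue:.1f}")  # 2.2
```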

Method of Concomitant Variation

The method of concomitant variation, like those already surveyed, is intended to be a form of ampliative induction; we want to argue from a covariation observed in some cases to a general rule of covariation covering unobserved cases also. To interpret this method we need a wider concept of cause than that which we have so far been using. A cause of P in the field F must now be taken, not as a necessary and sufficient condition, but as something on whose magnitude the magnitude of P, in F, functionally depends. For our present purpose this means only that there is some true lawlike proposition which, within F, relates the magnitude of the one item to that of the other. The full cause, in this sense, will be something on which, in F, the magnitude of P wholly depends, that is, the magnitude of P is uniquely determined by the magnitudes of the factors that constitute the full cause.

A full investigation of such a functional dependence would comprise two tasks: first, the identification of all the factors on which, in F, the magnitude of P depends, and second, the discovery of the way in which this magnitude depends on these factors. The completion of the first task would yield a mere list of terms, that of the second a mathematical formula. Only the first of these tasks can be performed by an eliminative method analogous to those already surveyed.

We should expect to find concomitant variation analogues of both the method of agreement and the method of difference, that is, ways of arguing to a causal relationship between P and, say, A, both from the observation of cases where P remains constant while A remains constant but all the other possibly relevant factors vary, and also from the observation of cases where P varies while A varies but all the other possibly relevant factors remain constant. And indeed there are methods of both kinds, but those of the second kind, the analogues of the method of difference, are more important.

As before, we need an assumption as well as an observation, but we have a choice between two different kinds of assumption. An assumption of the more rigorous kind would be that in F the magnitude of P wholly depends in some way on the magnitude of X, where X is identical with just one of the possible causes A, B, C, D, E. Given this, if we observe that over some period, or over some range of instances, P varies in magnitude while one of the possible causes, say A, also varies but all the other possible causes remain constant, we can argue that none of the possible causes other than A can be that on which the magnitude of P wholly depends, and thus conclude that X must be identical with A, that in F the magnitude of P depends wholly on that of A. (But how it depends, that is, what the functional law is, must be discovered by an investigation of some other sort.)

An assumption of the less rigorous kind would be that in F the magnitude of P wholly depends in some way on the magnitudes of one or more factors X1, X2, X3, etc., where each of the actually relevant factors is identical with one of the possible causes A, B, C, D, E. Given this, if we again observe that P varies while, say, A varies but B, C, D, E remain constant, this does not now show that B, for example, cannot be identical with one of the X's; that is, it does not show that variations in B are causally irrelevant to P. All it shows is that the magnitude of P is not wholly dependent upon any set of factors that does not include A, for every such set has remained constant while P has varied. This leaves it open that the full cause of P in F might be A itself, or might be some set of factors, such as (A, B, D), which includes A and some of the others as well. All we know is that the list must include A. This observation and this assumption, then, show that a full cause of P in F is (A, …); that is, A is an actually relevant factor and there may or may not be others. Repeated applications of this method could fill in other factors, but would not close the list. (And, as before, it is a further task, to be carried out by a different sort of investigation, to find how the magnitude of P depends on those of the factors thus shown to be actually relevant.)
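
The less rigorous argument can be pictured as eliminating candidate sets of factors rather than single factors. In the illustrative sketch below (invented magnitudes and names), every set that stayed constant while P varied is ruled out, so each surviving set contains A; the list of actually relevant factors is not thereby closed.

```python
# A sketch of the less rigorous concomitant-variation argument: over a run of
# observations, any set of possible causes that stayed constant while P varied
# cannot be the full cause, so every surviving candidate set must contain the
# factor that varied (here A).

from itertools import combinations

# magnitudes over four observations (illustrative figures)
series = {
    "A": [1.0, 2.0, 3.0, 4.0],
    "B": [5.0, 5.0, 5.0, 5.0],
    "C": [2.5, 2.5, 2.5, 2.5],
    "P": [0.9, 1.8, 2.7, 3.6],
}

def varies(name):
    return len(set(series[name])) > 1

factors = ["A", "B", "C"]
candidate_sets = [set(s) for r in range(1, len(factors) + 1)
                  for s in combinations(factors, r)]
if varies("P"):
    survivors = [s for s in candidate_sets if any(varies(f) for f in s)]
    print(survivors)  # every surviving set contains 'A'
```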

To close the list, that is, to show that certain factors are actually irrelevant, we need to use an analogue of the method of agreement. If we assume, as before, that the full cause of P in F is some set of factors (X1, X2, X3, etc.), but also that P is responsive to all these factors in the sense that for any variation in, say, X1 while X2, X3, etc. remain constant P will vary, and that X1, X2, X3, etc. are identical with some of the possible causes A, B, C, D, E, then if we observe that P remains constant while, say, A, C, D, and E remain constant but B varies, we can conclude that B is causally irrelevant, that none of the X's is identical with B.

Uses and Applications of the Eliminative Methods

We have so far been considering only whether there are demonstratively valid methods of this sort; but by stating more precisely what such methods involve, we may incidentally have removed some of the more obvious objections to the view that such methods can be applied in practice. Thus, by introducing the idea of a field, we have given these methods the more modest task of finding the cause of a phenomenon in relation to a field, not the ambitious one of finding conditions that are absolutely necessary and sufficient. By explicitly introducing the possible causes as well as the field, we have freed the user of the method of agreement from having to make the implausible claim that the user's instances have only one circumstance in common. Instead, the user has merely to claim that they have in common only one of the possible causes, while admitting that all the features that belong to the field, or that are constant throughout the field, will belong to all the instances, and that there may be other common features too, though not among those that he has initially judged to be possibly relevant.

Similarly, the user of the method of difference has only to claim that no possibly relevant feature other than the one he has picked as the cause is present in I1 but not in N1. Also, we have taken explicit account of the ways in which the possibilities of counteracting causes, a plurality of causes, an intermixture of effects, and so on, affect the working of the methods, and we have shown that even when these possibilities are admitted we can still validly draw conclusions, provided that we note explicitly the incompleteness of the conclusions that we are now able to draw (for example, by the method of difference) or the much greater complexity of the observations we need (for example, in variants of the method of agreement or method 8.4).

eliminative methods and induction

By making explicit the assumptions needed and by presenting the eliminative methods as deductively valid forms of argument, we have abandoned any pretense that methods such as these in themselves solve or remove the "problem of induction." Provided that the requisite observations can be made, the ultimate justification of any application of one of these methods of ampliative induction will depend on the justification of the assumption used; and, since this proposition is general in form, it will presumably have to be supported by some other kind of inductive, or at least nondeductive, reasoning. But we must here leave aside this question of ultimate justification.

eliminative methods and determinism

Some light, however, can be thrown on the suggestion frequently made that causal determinism is a presupposition of science. If these eliminative methods play some important part in scientific investigation, then it is noteworthy that they all require deterministic assumptions: They all work toward the identification of a cause of a given phenomenon by first assuming that there is some cause to be found for it. However, it has emerged that what we require is not a single universally applicable principle of causality, namely, that every event has a cause, but something at once weaker in some ways and stronger in other ways than such a principle. The principle assumed is that the particular phenomenon P in the chosen field F has a cause, but that a cause of P in F is to be found within a range of factors that is restricted in some way. We have also found that different concepts of a cause are required for concomitant variation and for the other methods. The complaint that the phrase "uniformity of nature" cannot be given a precise or useful meaning, incidentally, has been rebutted by finding in exactly what sense our methods have to assume that nature is uniform.

employment of the methods

Such assumptions are in fact regularly made, both in investigations within our already developed body of knowledge and in our primitive or commonsense ways of finding out about the world. In both these sorts of inquiry we act on the supposition that any changes that occur are caused; they do not "just happen." In a developed science, the causal knowledge that we already have can limit narrowly the range of possibly relevant causal factors. It can tell us, for this particular phenomenon, what kinds of cause to be on the lookout for, and how to exclude or hold constant some possibly relevant factors while we study the effects of others.

In more elementary discoveries, we restrict the range of possibly relevant factors mainly by the expectation that the cause of any effect will be somewhere in the near spatiotemporal neighborhood of the effect. The possible causes, then, will be features that occur variably within the field in question in the neighborhood of cases where the effect either occurs, or might have occurred, but does not.

use of method of difference

As an example of the above, singular causal sequences are detected primarily by the use of variants of the method of difference. Antoine-Henri Becquerel discovered that the radium he carried in a bottle in his pocket was the cause of a burn by noticing that the presence of the radium was the only possible relevant difference between the time when the inflammation developed and the earlier time when it did not, or between the part of his body where the inflammation appeared and the other parts.

Similar considerations tell us that a certain liquid turned this litmus paper red: The paper became red just after it was dipped in the liquid, and nothing else likely to be relevant happened just then. The situations before and after a change constitute our negative and positive instances respectively, and we may well be fairly confident that this is the only possibly relevant factor that has changed. We do not and need not draw up a list of possible causes, but by merely being on the lookout for other changes we can ensure that what would constitute a large number of possible causes (identified as such by their being in the spatiotemporal neighborhood) are the same in I1 as in N1.

Repeating the sequence (for example, dipping another similar piece of litmus paper into the liquid) confirms the view that the liquid caused the change of color. But it is not that in this case we are using the method of agreement; the repetition merely makes it less likely that any other change occurred to cause the change of color simultaneously with each of the two dippings, and this confirms our belief that the instances are what the use of the method of difference would require.

Since, in general, it will not be plausible to make an assumption more rigorous than one of kind (8), the conclusion thus established will only be that this individual sequence is an exemplification of a gappy causal law, of the form that (A … or …) is necessary and sufficient for P in F. But this is exactly what our ordinary singular causal statements mean: To say that this caused that says only that this was needed, perhaps in conjunction with other factors that were present, to produce the effect, and it leaves it open that other antecedents altogether (not present in this case) might produce the same effect.

General causal statements, such as "The eating of sweets causes dental decay," are to be interpreted similarly as asserting gappy causal laws. Anyone who says this would admit that the eating of sweets has this effect only in the presence of certain other conditions or in the absence of certain counteracting causes, and he would admit that things other than the eating of sweets might produce tooth decay. And such a gappy causal law can be established by the use of method 8.2, or the method of concomitant variation, or by statistical methods that can be understood as elaborations of these. Such general causal statements are, however, to be understood as asserting gappy causal laws, not mere statistical correlations: Anyone who uses such a statement is claiming that in principle the gaps could be filled in.

use in discovering effects

The use of the above methods is not confined to cases where we begin with a question of the form "What is the cause of so-and-so?" We may just as well begin by asking "What is the effect of so-and-so?" (for example, "What is the effect of applying a high voltage to electrodes in a vacuum tube?"). But we are justified in claiming that what is observed to happen is an effect of this only if the requirements for the appropriate variant of the method of difference are fulfilled.

use of method of agreement

The simpler variants of the method of agreement can be used to establish a causal conclusion only in a case in which our previous knowledge narrowly restricts the possible causes and justifies the belief that they will operate singly. For example, if the character of a disease is such as to indicate that it is of bacterial origin, then the microorganism responsible may be identified through the discovery that only one species of microorganism not already known to be innocent is present in a number of cases of the disease. Otherwise, the observation of what seems to be the only common factor in a number of cases of a phenomenon can be used only very tentatively, to suggest a hypothesis that will need to be tested in some other way.

Where, however, we have a very large number of extremely diverse instances of some effect, and only one factor seems to be present in all of them, we may reason by what is in effect an approximation to method 8.12. The diverse instances cover at least a large selection of all the possible combinations of possibly relevant factors and their negations. Therefore it is probable that no condition not covered by the formula (A or …) is necessary, and hence, if there is a necessary and sufficient condition, (A or …) is such, and hence A itself is a sufficient condition of the phenomenon.

Similarly, by an approximation to 8.14, we may reason that the one possibly relevant factor that is found to be absent in a large number of very diverse negative instances is probably a necessary condition of the phenomenon (that is, that its negation is a counteracting cause).

use of method of concomitant variation

The method of concomitant variation, with statistical procedures that can be considered as elaborations of it, is used in a great many experimental investigations in which one possibly relevant factor is varied (everything else that might be relevant being held constant) to see whether there is a causal connection between that one factor and the effect in question. (Of course, what we regard as a single experiment may involve the variation of several factors, but still in such a way that the results will show the effects of varying each factor by itself: Such an experiment is merely a combination of several applications of concomitant variation.)

further uses

The "controlled experiment," in which a control case or control group is compared with an experimental case or experimental group, is again an application of the method of difference (or perhaps the method of residues, if we use the control case, along with already known laws, to tell us what would have happened in the experimental case if the supposed cause had not been introduced.)

An important use of these methods is in the progressive location of a cause. If we take "the drinking of wine" as a single possible cause, then an application of 8.2 may show that the drinking of wine causes intoxication: That is, this factor is a necessary element in a sufficient condition for this result. But we may then analyze this possible cause further and discover that several factors are included in this one item that we have named "the drinking of wine," and further experiments may show that only one of these factors was really necessary: The necessary element will then be more precisely specified. But the fact that this is always possible leaves it true that in relation to the earlier degree of analysis of factors, the drinking of wine was a necessary element in a sufficient condition, and the discovery of this (admittedly crude) causal law is correct as far as it goes and is an essential step on the way to the more accurate law that is based on a finer analysis of factors.

Criticism of the Methods

The sort of example presented above helps to rebut one stock criticism of these methods, which is that they take for granted what is really the most important part of the procedure, namely, the discovery and analysis of factors. Any given application of one of these methods does presuppose some identification of possible causes, but it will not be completely vitiated by the fact that a finer analysis of factors is possible. Besides, the use of the methods themselves (particularly to discover singular causal sequences and hence the dispositional properties of particular things) is part of the procedure by which factors are further distinguished and classified. Also, the assumptions used, especially with regard to the range of possible causes allowed, are corrigible, and in conjunction with the methods they are self-correcting. A mistaken assumption is likely to lead, along with the observations, to contradictory conclusions, and when this happens we are forced to modify the assumption, in particular, to look further afield than we did at first for possibly relevant factors.

A fundamental and widely accepted objection to the claim that these methods form an important part of scientific method is that science is not concerned, or not much concerned, with causal relations in the sense in which these methods can discover them. It may be conceded that the formulation and confirmation of hypotheses and theories of the kind that constitute the greater part of a science such as physics is a scientific procedure quite different from the actual use of these methods. Even the discovery of a law of functional dependence is, as was noted, a task beyond what is achieved by our method of concomitant variation. It may also be conceded that many sciences are concerned largely with the simple discovery of new items and the tracing of processes rather than with causal relationships. Further, it was noted that these methods logically cannot be the whole of scientific procedure, since they require assumptions which they themselves cannot support.

In reply to this objection, however, it can be stressed, first, that a great deal of commonsense everyday knowledge, and also a great deal of knowledge in the more empirical sciences, is of causal relations of this sort, partly of singular causal sequences and partly of laws, especially of the incomplete or gappy form at which these methods characteristically arrive.

Second, it is largely such empirical causal relations that are explained by, and that support, the deeper theories and hypotheses of a developed science. But if they are to be used thus, they must be established independently.

Third, although descriptions of the eliminative methods of induction have often been associated with a kind of ground-floor empiricism that takes knowledge to be wholly concerned with empirical relations between directly observable things, qualities, and processes, the methods themselves are not tied to this doctrine but can establish causal relations between entities that are indirectly observed. For example, as long as there is any way, direct or indirect, of determining when a magnetic field is present and when there is an electric current in a wire, the methods can establish the fact that such a current will produce a magnetic field.

Finally, even where such causal relations are not the main object of inquiry, in investigation we constantly make use of causal relations, especially of singular causal sequences. In measuring, say, a voltage, we are assuming that it was the connecting of the meter across those terminals that caused this deflection of its needle, and the precautions that ensure that this is really so are to be explained in terms of our methods.

In fact, these methods are constantly used, explicitly or implicitly, both to suggest causal hypotheses and to confirm them. One should not, of course, expect any methods of empirical inquiry to establish conclusions beyond all possibility of doubt or all need of refinement, but in using these methods we can frequently say at least this: We have reason to suppose that for an event of this kind in this field there is some cause, and if the cause is not such-and-such, we cannot see what else the cause might be.

See also Deduction; Determinism, A Historical Survey; Empiricism; Induction; Mill, John Stuart.

Bibliography

works on induction

The classical study of eliminative induction remains that of J. S. Mill, A System of Logic (London, 1843), Book III, Chs. 8–10. Mill acknowledges that his study owes much to John Herschel, A Preliminary Discourse on the Study of Natural Philosophy (London: Longman, Rees, Orme, Brown, and Green, 1831), Part II, Ch. 6, and both are fundamentally indebted to Francis Bacon, Novum Organum (London: Joannem Billium, 1620), Book II. Since Mill, the literature has become extensive, but mostly in textbooks rather than in original works on logic or philosophy. There have been many worthwhile treatments of eliminative induction that are far above the textbook level, notably those of John Venn, Empirical Logic (London, 1889), Ch. 17; Christoph von Sigwart, Logic, 2nd ed. (Freiburg, 1893), translated by Helen Dendy as Logic (London: Sonnenschein, 1895), Vol. II, Part II, Ch. 5; and H. W. B. Joseph, An Introduction to Logic (Oxford: Clarendon Press, 1906), Ch. 20. But there are only a small number of writers who, either by criticizing Mill or developing his account, have added something new and substantial to either the logic or the philosophy of eliminative induction.

criticisms of mill's methods

Mill's most important critics are William Whewell, The Philosophy of Discovery (London, 1860), Ch. 22; W. S. Jevons, The Principles of Science (London: Macmillan, 1874), Chs. 11, 19, and 23; F. H. Bradley, The Principles of Logic (London: K. Paul, Trench, 1883), Book II, Part II, Ch. 3; and M. R. Cohen and Ernest Nagel, An Introduction to Logic and Scientific Method (New York: Harcourt Brace, 1934), Ch. 13.

elaborations on mill's methods

The main writers who have tried to develop Mill's ideas on the logical side are W. E. Johnson, Logic (Cambridge, U.K., 1924), Part II, Ch. 10; C. D. Broad, "The Principles of Demonstrative Induction," in Mind 39 (1930): 302–317 and 426–439; and G. H. von Wright, A Treatise on Induction and Probability (London: Routledge and Paul, 1951). Broad, following Johnson, undertakes a demonstrative reconstruction of Mill's methods and tries to extend eliminative methods to reasonings that terminate in quantitative laws. Von Wright's is the most thorough treatment so far published and studies the conditions under which "complete elimination" can be achieved even with what are here called the "less rigorous" kinds of assumptions. His account, however, seems somewhat unclear.

further studies

The only major addition to the pure philosophy of induction is that of J. M. Keynes, A Treatise on Probability (London: Macmillan, 1921), Part III. Three more recent books that contain some discussion of it are J. O. Wisdom, Foundations of Inference in Natural Science (London: Methuen, 1952), Ch. 11; S. F. Barker, Induction and Hypothesis (Ithaca, NY: Cornell University Press, 1957), Ch. 3; and J. P. Day, Inductive Probability (London: Routledge and Paul, 1961), Sec. 5.

J. L. Mackie (1967)