# CONFIRMATION THEORY

Predictions about the future and unrestricted universal generalizations are never logically implied by our observational evidence, which is limited to particular facts in the present and past. Nevertheless propositions of these and other kinds are often said to be confirmed by observational evidence. A natural place to begin the study of confirmation theory is to consider what it means to say that some evidence *E* confirms a hypothesis *H*.

## Incremental and Absolute Confirmation

Let us say that *E* raises the probability of *H* if the probability of *H* given *E* is higher than the unconditional probability of *H*. According to many confirmation theorists, "*E* confirms *H* " means that *E* raises the probability of *H*. This conception of confirmation will be called incremental confirmation.

Let us say that *H* is probable given *E* if the probability of *H* given *E* is above some threshold. (This threshold remains to be specified but is assumed to be at least one half.) According to some confirmation theorists, "*E* confirms *H* " means that *H* is probable given *E*. This conception of confirmation will be called absolute confirmation.

Confirmation theorists have sometimes failed to distinguish these two concepts. For example, Carl Hempel (1945/1965) in his classic "Studies in the Logic of Confirmation" endorsed the following principles:

(1) A generalization of the form "All *F* are *G* " is confirmed by the evidence that there is an individual that is both *F* and *G*.

(2) A generalization of that form is also confirmed by the evidence that there is an individual that is neither *F* nor *G*.

(3) The hypotheses confirmed by a piece of evidence are consistent with one another.

(4) If *E* confirms *H* then *E* confirms every logical consequence of *H*.

Principles (1) and (2) are not true of absolute confirmation. Observation of a single thing that is *F* and *G* cannot in general make it probable that all *F* are G; likewise for an individual that is neither *F* nor *G*. On the other hand there is some plausibility to the idea that an observation of something that is both *F* and *G* would raise the probability that all *F* are *G*. Hempel argued that the same is true of an individual that is neither *F* nor *G*. Thus Hempel apparently had incremental confirmation in mind when he endorsed (1) and (2).

Principle (3) is true of absolute confirmation but not of incremental confirmation. It is true of absolute confirmation because if one hypothesis has a probability greater than ½ then any hypothesis inconsistent with it has a probability less than ½. To see that (3) is not true of incremental confirmation, suppose that a fair coin will be tossed twice, let *H _{1}* be that the first toss lands heads and the second toss lands tails, and let *H _{2}* be that both tosses land heads. Then *H _{1}* and *H _{2}* each have an initial probability of ¼. If *E* is the evidence that the first toss landed heads, the probability of both *H _{1}* and *H _{2}* given *E* is ½, and so both hypotheses are incrementally confirmed, though they are inconsistent with each other.
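The coin example can be checked by direct enumeration over the four equally likely outcomes; a minimal sketch:

```python
from fractions import Fraction

# The four equally likely outcomes of two tosses of a fair coin.
outcomes = ["HH", "HT", "TH", "TT"]
p = {o: Fraction(1, 4) for o in outcomes}

def prob(event):
    return sum(p[o] for o in outcomes if event(o))

def cond(hyp, ev):
    return prob(lambda o: hyp(o) and ev(o)) / prob(ev)

H1 = lambda o: o == "HT"    # first toss heads, second tails
H2 = lambda o: o == "HH"    # both tosses heads
E = lambda o: o[0] == "H"   # first toss landed heads

# Both priors are 1/4; both probabilities given E are 1/2, so both
# hypotheses are incrementally confirmed, though inconsistent.
```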

Principle (4) is also true of absolute confirmation but not of incremental confirmation. It is true of absolute confirmation because any logical consequence of *H* is at least as probable as *H* itself. One way to see that (4) is not true of incremental confirmation is to note that any tautology is a logical consequence of any *H* but a tautology cannot be incrementally confirmed by any evidence, since the probability of a tautology is always one. Thus Hempel was apparently thinking of absolute confirmation, not incremental confirmation, when he endorsed (3) and (4).

Since even eminent confirmation theorists like Hempel have failed to distinguish these two concepts of confirmation, we need to make a conscious effort not to make the same mistake.

## Confirmation in Ordinary Language

When we say in ordinary language that some evidence confirms a hypothesis, does the word "confirms" mean incremental or absolute confirmation?

Since the probability of a tautology is always one, a tautology is absolutely confirmed by any evidence whatever. For example, evidence that it is raining absolutely confirms that all triangles have three sides. Since we would ordinarily say that there is no confirmation in this case, the concept of confirmation in ordinary language is not absolute confirmation.

If *E* reduces the probability of *H* then we would ordinarily say that *E* does not confirm *H*. However, in such a case it is possible for *H* to still be probable given *E* and hence for *E* to absolutely confirm *H*. This shows again that the concept of confirmation in ordinary language is not absolute confirmation.

A hypothesis *H* that is incrementally confirmed by evidence *E* may still be probably false; for example, the hypothesis that a fair coin will land "heads" every time in 1000 tosses is incrementally confirmed by the evidence that it landed "heads" on the first toss, but the hypothesis is still extremely improbable given this evidence. In a case like this nobody would ordinarily say that the hypothesis was confirmed. Thus it appears that the concept of confirmation in ordinary language is not incremental confirmation either.
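The arithmetic behind the 1000-toss example can be sketched with exact fractions, so the tiny probabilities are computed without rounding:

```python
from fractions import Fraction

# H: a fair coin lands heads on every one of 1000 tosses.
# E: it landed heads on the first toss.
prior = Fraction(1, 2) ** 1000      # p(H)
posterior = Fraction(1, 2) ** 999   # p(H | E): 999 tosses remain

# E doubles the probability of H, so H is incrementally confirmed,
# yet H remains astronomically improbable given E.
```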

A few confirmation theorists have attempted to formulate concepts of confirmation that would agree better with the ordinary concept. One such theorist is Nelson Goodman. He noted that if *E* incrementally confirms *H*, and *X* is an irrelevant proposition, then *E* incrementally confirms the conjunction of *H* and *X*. Goodman (1979) thought that in a case like this we would not say that *E* confirms the conjunction. He proposed that "*E* confirms *H* " means that *E* increases the probability of every component of *H*. One difficulty with this is to say what counts as a component of a hypothesis; if any logical consequence of *H* counts as a component of *H* then no hypothesis can ever be confirmed in Goodman's sense. In addition Goodman's proposal is open to the same objection as incremental confirmation: It allows that a hypothesis *H* can be confirmed by evidence *E* and yet *H* be probably false given *E*, which is not what people would ordinarily say.

Peter Achinstein (2001) speaks of "evidence" rather than "confirmation" but he can be regarded as proposing an account of the ordinary concept of confirmation. His account is complex but the leading idea is roughly that "*E* confirms *H* " means that (i) *H* is probable given *E* and (ii) it is probable that there is an explanatory connection between *H* and *E*, given that *H* and *E* are true. The explanatory connection may be that *H* explains *E*, that *E* explains *H*, or that *H* and *E* have a common explanation. Achinstein's proposal is open to one of the same objections as absolute confirmation: It allows evidence *E* to confirm *H* in cases in which *E* reduces the probability of *H*. Achinstein has argued that this implication is in agreement with the ordinary concept, but his reasoning has been criticized, for example, by Sherrilyn Roush (2004).

It appears that none of the concepts of confirmation discussed by confirmation theorists is the same as the ordinary concept of evidence confirming a hypothesis. Nevertheless, some of these concepts are worthy of study in their own right. In particular, the concepts of incremental and absolute confirmation are simple concepts that are of obvious importance and they are probably components in the more complex ordinary language concept of confirmation.

## Probability

All the concepts of confirmation that we have discussed involve probability. However, the word "probability" is ambiguous. For example, suppose you have been told that a coin either has heads on both sides or else has tails on both sides and that it is about to be tossed. What is the probability that it will land heads? There are two natural answers: (i) ½; (ii) either 0 or 1 but I do not know which. These answers correspond to different meanings of the word "probability." The sense of the word "probability" in which (i) is the natural answer will here be called inductive probability. The sense in which (ii) is the natural answer will be called physical probability.

Physical probability depends on empirical facts in a way that inductive probability does not. We can see this from the preceding example; here the physical probability is unknown because it depends on the nature of the coin, which is unknown; by contrast the inductive probability is known even though the nature of the coin is unknown, showing that the inductive probability does not depend on the nature of the coin.

There are two main theories about the nature of physical probability. One is the frequency theory, according to which the physical probability of an event is the relative frequency with which the event happens in the long run. The other is the propensity theory, according to which the physical probability of an event is the propensity of the circumstances or experimental arrangement to produce that event.

It is widely agreed that the concept of probability involved in confirmation is not physical probability. One reason is that physical probabilities seem not to exist in many contexts in which we talk about confirmation. For example, we often take evidence as confirming a scientific theory but it does not seem that there is a physical probability of a particular scientific theory being true. (The theory is either true or false; there is no long run frequency with which it is true, nor does the evidence have a propensity to make the theory true.) Another reason is that physical probabilities depend on the facts in a way that confirmation relations do not. Inductive probability does not have either of these shortcomings and so it is natural to identify the concept of probability involved in confirmation with inductive probability. Therefore we will now discuss inductive probability in more detail.

Some contemporary writers appear to believe that the inductive probability of a proposition is some person's degree of belief in the proposition. Degree of belief is also called subjective probability, so on this view, inductive probability is the same as subjective probability. However, this is not correct. Suppose, for example, that I claim that scientific theory *H* is probable in view of the available evidence. This is a statement of inductive probability. If my claim is challenged, it would not be a relevant response for me to prove that I have a high degree of belief in *H*, though this would be relevant if inductive probability were subjective probability. To give a relevant defense of my claim I need to cite features of the available evidence that support *H*.

In saying that inductive probabilities are not subjective probabilities, we are not denying that when people make assertions about inductive probabilities they are expressing their degrees of belief. Every sincere and intentional assertion expresses the speaker's beliefs but not every assertion is about the speaker's beliefs.

We will now consider the concept of logical probability and, in particular, whether inductive probability is a kind of logical probability. This depends on what is meant by "logical probability."

Many writers define the "logical probability" of *H* given *E* as the degree of belief in *H* that would be rational for a person whose total evidence is *E*. However, the term "rational degree of belief" is far from clear. On some natural ways of understanding it, the degree of belief in *H* that is rational for a person could be high even when *H* has a low inductive probability given the person's evidence. This might happen because belief in *H* helps the person succeed in some task, or makes the person feel happy, or will be rewarded by someone who can read the person's mind. Even if it is specified that we are talking about rationality with respect to epistemic goals, the rational degree of belief can differ from the inductive probability given the person's evidence, since the rewards just mentioned may be epistemic. Alternatively, one might take "the rational degree of belief in *H* for a person whose total evidence is *E* " to be just another name for the inductive probability of *H* given *E*, in which case these concepts are trivially equivalent. Thus if one takes "logical probability" to be rational degree of belief then, depending on what one means by "rational degree of belief," it is either wrong or trivial to say that inductive probability is logical.

A more useful conception of logical probability can be defined as follows. Let an "elementary probability sentence" be a sentence that asserts that a specific hypothesis has a specific probability. Let a "logically determinate sentence" be a sentence whose truth or falsity is determined by meanings alone, independently of empirical facts. Let us say that a probability concept is "logical in Carnap's sense" if all elementary probability sentences for it are logically determinate. (This terminology is motivated by some of the characterizations of logical probability in Carnap's *Logical Foundations of Probability*.) Since inductive probability is not subjective probability, the truth of an elementary statement of inductive probability does not depend on some person's psychological state. It also does not depend on facts about the world in the way that statements of physical probability do. It thus appears the truth of an elementary statement of inductive probability does not depend on empirical facts at all and hence that inductive probability is logical in Carnap's sense.

It has often been said that logical probabilities do not exist. If this were right then it would follow that inductive probabilities are either not logical or else do not exist. So we will now consider arguments against the existence of logical probabilities.

John Maynard Keynes in 1921 published a theory of what we are calling inductive probability and claimed that these probabilities are logical. Frank Ramsey (1926/1980), criticizing Keynes's theory, claimed that "there really do not seem to be any such things as the probability relations he describes." The main consideration that Ramsey offered in support of this was that there is little agreement on the values of probabilities in the simplest cases, and these are just the cases where logical relations should be most clear. Ramsey's argument has been cited approvingly by several later authors.

However, Ramsey's claim that there is little agreement on the values of probabilities in the simplest cases seems not to be true. For example, almost everyone agrees with the following:

(5) The probability that a ball is white, given only that it is either white or black, is ½.

Ramsey cited examples such as the probability of one thing being red given that another thing is red; he noted that nobody can state a precise numerical value for this probability. But that is an example of *agreement* about the value of an inductive probability, since *nobody* pretends to know a precise numerical value for the probability. What examples like this show is merely that inductive probabilities do not always have numerically precise values.

Furthermore, if inductive probabilities are logical (i.e., non-descriptive), it does not follow that their values should be clearest in the simplest cases, as Ramsey claimed. Like other concepts of ordinary language, the concept of inductive probability is learned largely from examples of its application in ordinary life and many of these examples will be complex. Hence, like other concepts of ordinary language, its application may sometimes be clearer in realistic complex situations than in simple situations that never arise in ordinary life.

So much for Ramsey's argument. Another popular argument against the existence of logical probabilities is based on the "paradoxes of indifference." The argument is this: Judgments of logical probability are said to presuppose a general principle, called the Principle of Indifference, which says that if evidence does not favor one hypothesis over another then those hypotheses are equally probable on this evidence. This principle can lead to different values for a probability, depending on what one takes the alternative hypotheses to be. In some cases the different choices seem equally natural. These "paradoxes of indifference," as they are called, are taken by many authors to be fatal to logical probability.

But even if we agree (as Keynes did) that quantitative inductive probabilities can only be determined via the Principle of Indifference, we can also hold (as Keynes did) that inductive probabilities do not always have quantitative values. Thus if there are cases where contradictory applications of the principle are equally natural, we may take this to show that these are cases where inductive probabilities lack quantitative values. It does not follow that quantitative inductive probabilities never exist, or that qualitative inductive probabilities do not exist. The paradoxes of indifference are thus consistent with the view that inductive probabilities exist and are logical.

How can we have knowledge of inductive probabilities, if this does not come from an exceptionless general principle? The answer is that the concept of inductive probability, like most concepts of ordinary language, is learned from examples, not by general principles. Hence we can have knowledge about particular inductive probabilities (and hence logical probabilities) without being able to state a general principle that covers these cases.

A positive argument for the existence of inductive probabilities is the following: We have seen reason to believe that a statement of inductive probability, such as (5), is either logically true or logically false. Which of these it is will be determined by the concepts involved, which are concepts of ordinary language. So, since competent speakers of a language normally use the language correctly, the wide endorsement of (5) is good reason to believe that (5) is a true sentence of English. And it follows from (5) that at least one inductive probability exists. Parallel arguments would establish the existence of many other inductive probabilities.

The concept of probability that is involved in confirmation can appropriately be taken to be inductive probability. Unlike physical probability, the concept of inductive probability applies to scientific theories. And unlike both physical and subjective probability, the concept of inductive probability agrees with the fact that confirmation relations are not discovered empirically but by examination of the relation between the hypothesis and the evidence.

## Explication of Inductive Probability

Inductive probability is a concept of ordinary language and, like many such concepts, it is vague. This is reflected in the fact that inductive probabilities often have no precise numerical value.

A useful way to theorize about vague concepts is to define a precise concept that is similar to the vague concept. This methodology is called explication, the vague concept is called the explicandum, and the precise concept that is meant to be similar to it is called the explicatum. Although the explicatum is intended to be similar to the explicandum, there must be differences, since the explicatum is precise and the explicandum is vague. Other desiderata for an explicatum, besides similarity with the explicandum, are theoretical fruitfulness and simplicity.

Inductive probability can be explicated by defining, for selected pairs of sentences *E* and *H*, a number that will be the explicatum for the inductive probability of *H* given *E* ; let us denote this number by "*p(H|E)*." The set of sentences for which *p(H|E)* is defined will depend on our purposes.

Quantitative inductive probabilities, where they exist, satisfy the mathematical laws of probability. Since a good explicatum is similar to the explicandum, theoretically fruitful, and simple, the numbers *p(H|E)* will also be required to satisfy these laws.

In works written from the 1940s to his death in 1970, Carnap proposed a series of increasingly sophisticated explications of this kind, culminating in his *Basic System of Inductive Logic* published posthumously in 1971 and 1980. Other authors have proposed other explicata, some of which will be mentioned below.

Since the value of *p(H|E)* is specified by definition, a statement of the form "*p(H|E)* = *r* " is either true by definition or false by definition, and hence is logically determinate. Since we require these values to satisfy the laws of probability, the function *p* is also a probability function. So we may say that the function *p* is a logical probability in Carnap's sense.

Thus there are two different kinds of probability, both of which are logical in Carnap's sense: Inductive probability and functions that are proposed as explicata for inductive probability. Since the values of the explicata are specified by definition, it is undeniable that logical probabilities of this second kind exist.

## Explication of Incremental Confirmation

Since inductive probability is vague, and *E* incrementally confirms *H* if and only if *E* raises the inductive probability of *H*, the concept of incremental confirmation is also vague. We will now consider how to explicate incremental confirmation.

First, we note that the judgment that *E* confirms *H* is often made on the assumption that some other information *D* is given; this information is called background evidence. So we will take the form of a fully explicit judgment of incremental confirmation to be "*E* incrementally confirms *H* given *D*." For example, a coin landing heads on the first toss incrementally confirms that the coin has heads on both sides, given the background evidence that both sides of the coin are the same; there would be no confirmation if the background evidence were that the coin is normal, with heads on one side only.

The judgment that *E* incrementally confirms *H* given *D* means that the inductive probability of *H* given both *E* and *D* is greater than the inductive probability of *H* given only *D*. Suppose we have a function *p* that is an explicatum for inductive probability and is defined for the relevant statements. Let "*E.D* " represent the conjunction of *E* and *D* (so the dot here functions like "and"). Then the explicatum for "*E* incrementally confirms *H* given *D* " will be *p(H|E.D)* > *p(H|D)*. We will use the notation "*C(H, E, D)* " as an abbreviation for this explicatum.
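The explicatum *C* can be illustrated with a toy probability function defined by enumerating worlds. The model below uses the two-sided-coin example above; the prior of 1/3 per coin type is an assumption for illustration, not part of the definition of *C*:

```python
from fractions import Fraction

# Toy worlds: the coin is double-headed (HH), double-tailed (TT), or
# normal (N), each with prior 1/3, and is tossed once.
worlds = {
    ("HH", "heads"): Fraction(1, 3),
    ("TT", "tails"): Fraction(1, 3),
    ("N", "heads"): Fraction(1, 6),
    ("N", "tails"): Fraction(1, 6),
}

def p(event):
    return sum(pr for w, pr in worlds.items() if event(w))

def p_given(hyp, ev):
    return p(lambda w: hyp(w) and ev(w)) / p(ev)

def C(hyp, ev, bg):
    """Explicatum for 'ev incrementally confirms hyp given bg'."""
    both = lambda w: ev(w) and bg(w)
    return p_given(hyp, both) > p_given(hyp, bg)

H = lambda w: w[0] == "HH"            # coin has heads on both sides
E = lambda w: w[1] == "heads"         # the toss landed heads
D1 = lambda w: w[0] in ("HH", "TT")   # both sides are the same
D2 = lambda w: w[0] == "N"            # coin is normal
```

Here `C(H, E, D1)` holds (the probability rises from 1/2 to 1), while `C(H, E, D2)` fails, matching the informal judgment about the coin example.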

The concept of incremental confirmation, like all the concepts of confirmation discussed so far, is a qualitative concept. For each of these qualitative concepts there is a corresponding comparative concept, which compares the amount of confirmation in different cases. We will focus here on the judgment that *E _{1}* incrementally confirms *H* more than *E _{2}* does, given *D*. The corresponding statement in terms of our explicata is that the increase from *p(H|D)* to *p(H|E _{1}.D)* is larger than the increase from *p(H|D)* to *p(H|E _{2}.D)*. This is true if and only if *p(H|E _{1}.D)* > *p(H|E _{2}.D)*, so the explicatum for "*E _{1}* confirms *H* more than *E _{2}* does, given *D* " will be *p(H|E _{1}.D)* > *p(H|E _{2}.D)*. We will use the notation "*M(H, E _{1}, E _{2}, D)* " as an abbreviation for this explicatum.

Confirmation theorists have also discussed quantitative concepts of confirmation, which involve assigning numerical "degrees of confirmation" to hypotheses. In earlier literature the term "degree of confirmation" usually meant degree of absolute confirmation. The degree to which *E* absolutely confirms *H* is the same as the inductive probability of *H* given *E* and hence is explicated by *p(H|E)*.

In later literature, the term "degree of confirmation" is more likely to mean degree of incremental confirmation. An explicatum for the degree to which *E* incrementally confirms *H* given *D* is a measure of how much *p(H|E.D)* is greater than *p(H|D)*. Many different explicata of this kind have been proposed; they include the following. (Here "*∼H* " means the negation of *H*.)

Difference measure: *p(H|E.D) − p(H|D)*

Ratio measure: *p(H|E.D) / p(H|D)*

Likelihood ratio: *p(E|H.D) / p(E |∼H.D)*

Confirmation theorists continue to debate the merits of these and other measures of degree of incremental confirmation.
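The three measures can be compared on a toy case with hypothetical numbers: a hypothesis with prior 1/10, and evidence that is nine-tenths probable if the hypothesis is true and one-fifth probable otherwise (the background *D* is left implicit):

```python
from fractions import Fraction

pH = Fraction(1, 10)       # p(H|D): prior of the hypothesis
pE_H = Fraction(9, 10)     # p(E|H.D): likelihood if H is true
pE_notH = Fraction(1, 5)   # p(E|~H.D): likelihood if H is false

pE = pE_H * pH + pE_notH * (1 - pH)   # p(E|D) by total probability
pH_E = pE_H * pH / pE                 # p(H|E.D) by Bayes's theorem

difference = pH_E - pH                # difference measure
ratio = pH_E / pH                     # ratio measure
likelihood_ratio = pE_H / pE_notH     # likelihood ratio
```

All three measures agree that *E* confirms *H* here, but they quantify the confirmation differently, which is why the choice among them matters.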

## Verified Consequences

The remainder of this entry will consider various properties of incremental confirmation and how well these are captured by the explicata *C* and *M* that were defined above. We begin with the idea that hypotheses are confirmed by verifying their logical consequences.

If *H* logically implies *E* given background evidence *D*, we usually suppose that observation of *E* would incrementally confirm *H* given *D*. For example, Einstein's general theory of relativity, together with other known facts, implied that the orbit of Mercury precesses at a certain rate; hence the observation that it did precess at this rate incrementally confirmed Einstein's theory, given the other known facts.

The corresponding explicatum statement is: If *H.D* implies *E* then *C(H,E,D)*. Assuming that *p* satisfies the laws of mathematical probability, this explicatum statement can be proved true provided that 0 < *p(H|D)* < 1 and *p(E|D)* < 1.

We can see intuitively why the provisos are needed. If *p(H|D)* = 1 then *H* is certainly true given *D* and so no evidence can incrementally confirm it. If *p(H|D)* = 0 then *H* is certainly false given *D* and the observation that one of its consequences is true need not alter this situation. If *p(E|D)* = 1 then *E* was certainly true given *D* and so the observation that it is true cannot provide new evidence for *H*.
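The result can also be verified in a couple of lines. As a sketch, assuming *p* obeys Bayes's theorem: since *H.D* implies *E*, we have *p(E|H.D)* = 1, so

```latex
p(H \mid E.D) \;=\; \frac{p(E \mid H.D)\, p(H \mid D)}{p(E \mid D)}
\;=\; \frac{p(H \mid D)}{p(E \mid D)} \;>\; p(H \mid D),
```

where the final inequality holds because *p(E|D)* < 1 and *p(H|D)* > 0; thus *C(H, E, D)*.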

If *H* and *D* imply both *E _{1}* and *E _{2}*, and if *E _{1}* is less probable than *E _{2}* given *D*, then we usually suppose that *H* would be better confirmed by *E _{1}* than by *E _{2}*, given *D*. The corresponding explicatum statement is: If *H.D* implies *E _{1}* and *E _{2}*, and *p(E _{1}|D)* < *p(E _{2}|D)*, then *M(H, E _{1}, E _{2}, D)*. Assuming that *p* satisfies the laws of probability, this can be proved true provided that 0 < *p(H|D)* < 1. The proviso makes sense intuitively for the same reasons as before.

If *H* and *D* imply both *E _{1}* and *E _{2}* then we usually suppose that *E _{1}* and *E _{2}* together would confirm *H* more than *E _{1}* alone, given *D*. The corresponding explicatum statement is that if *H.D* implies *E _{1}* and *E _{2}* then *M(H, E _{1}.E _{2}, E _{1}, D)*. It follows from the result in the previous paragraph that this is true, provided that *p(E _{1}.E _{2}|D)* < *p(E _{1}|D)* and 0 < *p(H|D)* < 1. The provisos are needed for the same reasons as before.

These results show that, if we require *p* to satisfy the laws of probability, then *C* and *M* will be similar to their explicanda with respect to verified consequences and, to that extent at least, *C* and *M* will be good explicata. In addition these results illustrate in a small way the value of explication. Although the provisos that we added make sense when one thinks about them, the need for them is likely to be overlooked if one thinks only in terms of the vague explicanda and does not attempt to prove a precise corresponding result in terms of the explicata. Thus explication can give a deeper and more accurate understanding of the explicandum. We will see more examples of this.

## Reasoning by Analogy

If two individuals are known to be alike in certain respects, and one is found to have a particular property, we often infer that, since the individuals are similar, the other individual probably also has that property. This is a simple example of reasoning by analogy, and it is a kind of reasoning that we use every day.

In order to explicate this kind of reasoning, we will use "*a* " and "*b* " to stand for individual things and "*F* " and "*G* " for logically independent properties that an individual may have (for example, being tall and blond). We will use "*Fa* " to mean that the individual *a* has the property *F* ; similarly for other properties and individuals.

It is generally accepted that reasoning by analogy is stronger the more properties that the individuals are known to have in common. So for *C* to be a good explicatum it must satisfy the following condition:

(6) *C(Gb, Fa.Fb, Ga)*.

Here we are considering the situation in which the background evidence is that *a* has *G*. The probability that *b* also has *G* is increased by finding that *a* and *b* also share the property *F*.

In the case just considered, *a* and *b* are not known to differ in any way. When we reason by analogy in real life we normally do know some respects in which the individuals differ, but this does not alter the fact that the reasoning is stronger the more alike *a* and *b* are known to be. So for *C* to be a good explicatum it must also satisfy the following condition. (Here *F′* is a property that is logically independent of both *F* and *G*.)

(7) *C(Gb, Fa.Fb, Ga.F′a.∼F′b)*.

Here the background evidence is that *a* has *G* and that *a* and *b* differ in regard to *F′*. The probability that *b* has *G* is increased by finding that *a* and *b* are alike in having *F*.

Another condition that *C* should satisfy is:

(8) *C(Gb, Ga, F′a.∼F′b)*.

Here the background evidence is merely that *a* and *b* differ regarding *F′*. For all we know, whether or not something has *F′* might be unrelated to whether it has *G*, so the fact that *a* has *G* is still some reason to think that *b* has *G*.

In *Logical Foundations of Probability* Carnap proposed a particular explicatum for inductive probability that he called *c**. In *The Continuum of Inductive Methods* he described an infinite class of possible explicata. The function *c**, and all the functions in Carnap's continuum, satisfy (6) but not (7) or (8). Hence none of these functions provides a fully satisfactory explicatum for situations that involve more than one logically independent property.
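The behavior of *c** on conditions (6) and (8) can be checked by direct enumeration for two individuals. The sketch below assumes only c*'s defining rule: every structure description (unordered pair of Q-predicates) gets equal probability, split equally among its state descriptions:

```python
from fractions import Fraction
from itertools import product

def cstar_states(n_props):
    """Joint distribution over state descriptions for two individuals
    (a, b) under Carnap's c*. A Q-predicate is a tuple of booleans,
    one per primitive property; each structure description gets equal
    probability, split equally among its state descriptions."""
    qs = list(product([True, False], repeat=n_props))
    n_structs = len(qs) * (len(qs) + 1) // 2   # multisets of size 2
    return {(qa, qb): Fraction(1, n_structs if qa == qb else 2 * n_structs)
            for qa in qs for qb in qs}

def cond(states, hyp, ev):
    num = sum(pr for s, pr in states.items() if hyp(s) and ev(s))
    den = sum(pr for s, pr in states.items() if ev(s))
    return num / den

# Properties are indexed: F = 0, G = 1, F' = 2.
Ga = lambda s: s[0][1]
Gb = lambda s: s[1][1]

# Condition (6): p(Gb | Fa.Fb.Ga) > p(Gb | Ga) -- c* satisfies it.
s2 = cstar_states(2)
lhs6 = cond(s2, Gb, lambda s: s[0][0] and s[1][0] and Ga(s))  # 2/3
rhs6 = cond(s2, Gb, Ga)                                       # 3/5

# Condition (8): p(Gb | Ga.F'a.~F'b) > p(Gb | F'a.~F'b) -- c* fails
# it: the two conditional probabilities come out equal.
s3 = cstar_states(3)
bg8 = lambda s: s[0][2] and not s[1][2]
lhs8 = cond(s3, Gb, lambda s: Ga(s) and bg8(s))
rhs8 = cond(s3, Gb, bg8)
```

A similar enumeration shows the failure of (7): once *a* and *b* are known to differ in some respect, *c** treats them as evidentially irrelevant to each other.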

Carnap recognized this failure early in the 1950s and worked to find explicata that would handle reasoning by analogy more adequately. He first found a class of possible explicata for the case where there are two logically independent properties; the functions in this class satisfy (6) and (8). Subsequently, with the help of John Kemeny, Carnap generalized his proposal to the case where there are any finite number of logically independent properties, though he never published this. A simpler and less adequate generalization was published by Mary Hesse in 1964. Both these generalizations satisfy all of (6)-(8).

Carnap had no justification for the functions he proposed except that they seemed to agree with intuitive principles of reasoning by analogy. Later he found that they actually violate one of the principles he had taken to be intuitive. In his last work Carnap expressed indecision about how to proceed.

For the case where there are just two properties, Maher (2000) has shown that certain foundational assumptions pick out a class of probability functions, called *P* _{I }, that includes the functions that Carnap proposed for this case. Maher argued that the probability functions in *P* _{I } handle reasoning by analogy adequately and Carnap's doubts were misplaced.

For the case where there are more than two properties, Maher (2001) has shown that the proposals of Hesse, and Carnap and Kemeny, correspond to implausible foundational assumptions and violate intuitive principles of reasoning by analogy. Further research is needed to find an explicatum for inductive probability that is adequate for situations involving more than two properties.

## Nicod's Condition

We are often interested in universal generalizations of the form "All *F* are *G*," for example, "All ravens are black," or "All metals conduct electricity." Nicod's condition, named after the French philosopher Jean Nicod, says that generalizations of this form are confirmed by finding an individual that is both *F* and *G*. (Here and in the remainder of this entry, "confirmed" means incrementally confirmed.)

Nicod (1970) did not mention background evidence. It is now well known that Nicod's condition is not true when there is background evidence of certain kinds. For example, suppose the background evidence is that, if there are any ravens, then there is a non-black raven. Relative to this background evidence, observation of a black raven would refute, not confirm, that all ravens are black.

Hempel claimed that Nicod's condition is true when there is no background evidence but I. J. Good argued that this is also wrong. Good's argument was essentially this: Given no evidence whatever, it is improbable that there are any ravens, and if there are no ravens then, according to standard logic, "All ravens are black" is true. Hence, given no evidence, "All ravens are black" is probably true. However, if ravens do exist, they are probably a variety of colors, so finding a black raven would increase the probability that there is a non-black raven and hence disconfirm that all ravens are black, contrary to Nicod's condition.
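Good's argument can be made concrete with a toy Bayesian model. The following sketch is illustrative only: the hypothesis partition and all probability values are assumptions chosen to match the qualitative shape of Good's reasoning, not anything Good himself computed.

```python
# Toy model of Good's argument (all numbers are illustrative assumptions).
# Hypotheses about the world, with prior probabilities summing to 1:
#   H0: no ravens exist             -> "All ravens are black" is vacuously true
#   H1: ravens exist, all black     -> A is true
#   H2: ravens exist, varied colors -> A is false
prior = {"H0": 0.90, "H1": 0.01, "H2": 0.09}

# A = "All ravens are black" holds under H0 (vacuously) and under H1.
p_A = prior["H0"] + prior["H1"]

# Likelihood of the evidence E = "a black raven is observed":
# impossible if there are no ravens; possible otherwise.
likelihood = {"H0": 0.0, "H1": 0.5, "H2": 0.4}

# Bayes' theorem: p(A|E) = p(H1)p(E|H1) / p(E), since only H1 makes
# both A true and E possible.
p_E = sum(prior[h] * likelihood[h] for h in prior)
p_A_given_E = (prior["H1"] * likelihood["H1"]) / p_E

print(f"p(A) = {p_A:.3f}, p(A|E) = {p_A_given_E:.3f}")
# Observing a black raven *lowers* the probability that all ravens are black:
assert p_A_given_E < p_A
```

Because almost all of the prior probability of *A* came from the hypothesis that there are no ravens, evidence that a raven exists undercuts *A* even though the observed raven is black.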

Hempel was relying on intuition, and Good's counterargument is intuitive rather than rigorous. A different way to investigate the question is to use precise explicata. The situation of "no background evidence" can be explicated by taking the background evidence to be any logically true sentence; let *T* be such a sentence. Letting *A* be "all *F* are *G*," the claim that Nicod's condition holds when there is no background evidence may be expressed in explicatum terms as

(9) *C(A, Fa.Ga, T)*.

Maher has shown that this can fail when the explicatum *p* is a function in *P_{I}* and that the reason for the failure is the one identified in Good's argument. This confirms that Nicod's condition is false even when there is no background evidence.

Why then has Nicod's condition seemed plausible? One reason may be that people sometimes do not clearly distinguish between Nicod's condition and the following statement: Given that an object is *F*, the evidence that it is *G* confirms that all *F* are *G*. The latter statement may be expressed in explicatum terms as:

(10) *C(A, Ga, Fa)*.

This is true provided only that *p* satisfies the laws of probability, 0 < *p(A|Fa)* < 1, and *p(Ga|Fa)* < 1. (This follows from the first of the results stated earlier for verified consequences.) If people do not clearly distinguish between the ordinary language statements that correspond to (9) and (10), the truth of the latter could make it seem that Nicod's condition is true.
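The derivation behind (10) can be checked numerically. In the sketch below the specific probability values are arbitrary assumptions chosen only to satisfy the stated conditions (0 < *p(A|Fa)* < 1 and *p(Ga|Fa)* < 1); the key fact is that *A* together with *Fa* entails *Ga*, so *p(Ga|A.Fa)* = 1.

```python
# Check that p(A|Ga.Fa) > p(A|Fa) given that A.Fa entails Ga,
# i.e. p(Ga|A.Fa) = 1 (a verified consequence raises probability).
# The specific numbers are arbitrary assumptions satisfying the conditions.
p_A_given_Fa = 0.3          # 0 < p(A|Fa) < 1
p_Ga_given_notA_Fa = 0.6    # ensures p(Ga|Fa) < 1

# Law of total probability (with background Fa):
# p(Ga|Fa) = p(Ga|A.Fa)p(A|Fa) + p(Ga|~A.Fa)p(~A|Fa)
p_Ga_given_Fa = 1.0 * p_A_given_Fa + p_Ga_given_notA_Fa * (1 - p_A_given_Fa)

# Bayes' theorem (with background Fa):
p_A_given_Ga_Fa = 1.0 * p_A_given_Fa / p_Ga_given_Fa

assert p_Ga_given_Fa < 1
assert p_A_given_Ga_Fa > p_A_given_Fa   # Ga confirms A, given Fa
```

Since dividing *p(A|Fa)* by a quantity strictly less than 1 must increase it, the inequality holds for any numbers satisfying the conditions, not just these.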

## The Ravens Paradox

The following three principles about confirmation have seemed plausible to many people.

(11) Nicod's condition holds when there is no background evidence.

(12) Confirmation relations are unchanged by substitution of logically equivalent sentences.

(13) In the absence of background evidence, the evidence that some individual is a non-black non-raven does not confirm that all ravens are black.

However, these three principles are inconsistent. That is because (11) implies that a non-black non-raven confirms "all non-black things are non-ravens," and the latter is logically equivalent to "all ravens are black," so by (12) a non-black non-raven confirms "all ravens are black," contrary to (13).
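The logical equivalence invoked in this derivation can be verified by brute-force enumeration over a small finite domain. This is just a sketch to confirm the classical equivalence of a generalization and its contrapositive; the domain size and function names are illustrative.

```python
from itertools import product

# Verify that "all F are G" and "all non-G are non-F" agree in every
# possible assignment of F and G to the objects of a small finite domain.
def all_F_are_G(F_vals, G_vals):
    # Every F-object is a G-object.
    return all(g for f, g in zip(F_vals, G_vals) if f)

def all_nonG_are_nonF(F_vals, G_vals):
    # Every non-G-object is a non-F-object (the contrapositive form).
    return all(not f for f, g in zip(F_vals, G_vals) if not g)

# All 64 ways of assigning F and G truth values to a 3-object domain:
assignments = product(product([False, True], repeat=3), repeat=2)
equivalent = all(all_F_are_G(F, G) == all_nonG_are_nonF(F, G)
                 for F, G in assignments)
assert equivalent
```

Enumeration over a finite domain is not a proof for all domains, but the equivalence is a standard fact of classical logic, which is why step (12) of the paradox is hard to resist.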

Hempel was the first to discuss this paradox. His initial statement of the paradox did not explicitly include the condition of no background evidence but he stated later in his article that this was to be understood. The subsequent literature on this paradox is enormous but most discussions have not respected the condition of no background evidence. Here we will follow Hempel in respecting that condition.

The contradiction shows that at least one of (11)-(13) is false. Hempel claimed that (11) and (12) are true and (13) is false but his judgments were based on informal intuitions, not on any precise explicatum or use of probability theory.

Our preceding discussion of Nicod's condition shows that (11) is false, contrary to what Hempel thought. On the other hand, our explicata support Hempel's view that (12) is true and (13) is false, as we will now show.

In explicatum terms, what (12) says is: If *H′, E′*, and *D′* are logically equivalent to *H, E*, and *D* respectively, then *C(H, E, D)* if and only if *C(H′, E′, D′)*. The truth of this follows from the assumption that *p* satisfies the laws of probability.

Now let "*F* " mean "raven" and "*G* " mean "black." Then (13), expressed in explicatum terms, is the claim *∼C(A, ∼Fa.∼Ga, T)*. Maher has shown that this need not be true when *p* is a function in *P_{I}*; we can instead have *C(A, ∼Fa.∼Ga, T)*. This happens for two reasons:

(a) The evidence *∼Fa.∼Ga* reduces the probability of *Fb.∼Gb*, where *b* is any individual other than *a*. Thus *∼Fa.∼Ga* reduces the probability that another individual *b* is a counterexample to *A*.

(b) The evidence *∼Fa.∼Ga* tells us that *a* is not a counterexample to *A*, which a priori it could have been.

Both of these reasons make sense intuitively.

We conclude that, of the three principles (11)-(13), only (12) is true.

## Projectability

A predicate is said to be "projectable" if the evidence that the predicate applies to some objects confirms that it also applies to other objects. The standard example of a predicate that is not projectable is "grue," which was introduced by Goodman (1979). According to Goodman's definition, something is grue if either (i) it is observed before time *t* and is green or (ii) it is not observed before time *t* and is blue. The usual argument that "grue" is not projectable goes something like this: A grue emerald observed before *t* is green, and observation of such an emerald confirms that emeralds not observed before *t* are also green. Since a green emerald not observed before *t* is not grue, it follows that a grue emerald observed before *t* confirms that emeralds not observed before *t* are not grue; hence "grue" is not projectable.
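Goodman's two-clause definition can be encoded directly as a small predicate. The function name and boolean encoding below are illustrative, but the logic follows the definition just stated.

```python
# Goodman's definition: x is grue iff (x is observed before t and green)
# or (x is not observed before t and blue).
def grue(observed_before_t: bool, color: str) -> bool:
    return ((observed_before_t and color == "green") or
            (not observed_before_t and color == "blue"))

# A green emerald observed before t is grue...
assert grue(True, "green")
# ...but a green emerald NOT observed before t is not grue,
assert not grue(False, "green")
# while a blue object not observed before t is grue.
assert grue(False, "blue")
```

The last two assertions encode the pivot of the usual argument: among objects not observed before *t*, "grue" picks out the blue ones, not the green ones.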

The preceding account of the meaning of "projectable" was the usual one but it is imprecise because it fails to specify background evidence. Let us say that a predicate *ϕ* is absolutely projectable if *C(ϕb, ϕa, T)* for any distinct individuals *a* and *b* and logical truth *T*. This concept of absolute projectability is one possible explicatum for the usual imprecise concept of projectability. Let "*Fa* " mean that *a* is observed before *t* and let "*Ga* " mean that *a* is green. Let "*G′a* " mean that either *Fa.Ga* or *∼Fa.∼Ga*. Thus "*G′* " has a meaning similar to "grue." (The difference is just that "*G′* " uses "not green" instead of "blue" and so avoids introducing a third property.) Maher has proved that if *p* is any function in *P_{I}* then "*F* ", "*G* ", and "*G′* " are all absolutely projectable.

It may seem unintuitive that "*G′* " is absolutely projectable. However, this result corresponds to the following statement of ordinary language: The probability that *b* is grue is higher given that *a* is grue than if one was not given any evidence whatever. If we keep in mind that we do not know whether *a* or *b* was observed before *t*, this should be intuitively acceptable. So philosophers who say that "grue" is not projectable are wrong if, by "projectable," they mean absolute projectability.

Let us say that a predicate *ϕ* is projectable across another predicate *ψ* if *C(ϕb, ϕa, ψa.∼ψb)* for any distinct individuals *a* and *b*. This concept of projectability across another predicate is a second possible explicatum for the usual imprecise concept of projectability.

It can be shown that if *p* is any function in *P_{I}* then "*G* " is, and "*G′* " is not, projectable across "*F*." So philosophers who say that "grue" is not projectable are right if, by "projectable," they mean projectability across the predicate "observed before *t*."

Now suppose we change the definition of "*Ga* " to be that *a* is (i) observed before *t* and green or (ii) not observed before *t* and not green. Thus "*G* " now means what "*G′* " used to mean. Keeping the definitions of "*F* " and "*G′* " unchanged, "*G′a* " now means that *a* is green. The results reported in the preceding paragraph will still hold but now they are the opposite of the usual views about what is projectable. This shows that, when we are constructing explicata for inductive probability and confirmation, the meanings assigned to the basic predicates (here "*F* " and "*G* ") need to be intuitively simple ones rather than intuitively complex concepts like "grue."

** See also ** Carnap, Rudolf; Einstein, Albert; Goodman, Nelson; Hempel, Carl Gustav; Induction; Keynes, John Maynard; Probability and Chance; Ramsey, Frank Plumpton; Relativity Theory.

## Bibliography

Achinstein, Peter. *The Book of Evidence*. New York: Oxford University Press, 2001.

Carnap, Rudolf. "A Basic System of Inductive Logic, Part I." In *Studies in Inductive Logic and Probability*. Vol. 1, edited by Rudolf Carnap and Richard C. Jeffrey. Berkeley: University of California Press, 1971.

Carnap, Rudolf. "A Basic System of Inductive Logic, Part II." In *Studies in Inductive Logic and Probability*. Vol. 2, edited by Richard C. Jeffrey. Berkeley: University of California Press, 1980.

Carnap, Rudolf. *The Continuum of Inductive Methods*. Chicago: University of Chicago Press, 1952.

Carnap, Rudolf. *Logical Foundations of Probability*. Chicago: University of Chicago Press, 1950. Second edition 1962.

Earman, John. *Bayes or Bust? A Critical Examination of Bayesian Confirmation Theory*. Cambridge, MA: MIT Press, 1992.

Festa, Roberto. "Bayesian Confirmation." In *Experience, Reality, and Scientific Explanation*, edited by Maria Carla Galavotti and Alessandro Pagnini. Dordrecht: Kluwer, 1999.

Fitelson, Branden. "The Plurality of Bayesian Measures of Confirmation and the Problem of Measure Sensitivity." *Philosophy of Science* 66 (1999): S362–S378.

Gillies, Donald. *Philosophical Theories of Probability*. London: Routledge, 2000.

Good, I. J. "The White Shoe *qua* Herring Is Pink." *British Journal for the Philosophy of Science* 19 (1968): 156–157.

Goodman, Nelson. *Fact, Fiction, and Forecast*. 3rd ed. Indianapolis, IN: Hackett, 1979.

Hempel, Carl G. "Studies in the Logic of Confirmation." *Mind* 54 (1945): 1–26 and 97–121. Reprinted with some changes in Carl G. Hempel. *Aspects of Scientific Explanation*. New York: The Free Press, 1965.

Hesse, Mary. "Analogy and Confirmation Theory." *Philosophy of Science* 31 (1964): 319–327.

Howson, Colin, and Peter Urbach. *Scientific Reasoning: The Bayesian Approach*. 2nd ed. Chicago: Open Court, 1993.

Keynes, John Maynard. *A Treatise on Probability*. London: Macmillan, 1921. Reprinted with corrections, 1948.

Maher, Patrick. "Probabilities for Two Properties." *Erkenntnis* 52 (2000): 63–91.

Maher, Patrick. "Probabilities for Multiple Properties: The Models of Hesse and Carnap and Kemeny." *Erkenntnis* 55 (2001): 183–216.

Maher, Patrick. "Probability Captures the Logic of Scientific Confirmation." In *Contemporary Debates in Philosophy of Science*, edited by Christopher R. Hitchcock. Oxford: Blackwell, 2004.

Nicod, Jean. *Geometry and Induction*. Berkeley and Los Angeles: University of California Press, 1970. English translation of works originally published in French in 1923 and 1924.

Ramsey, Frank P. "Truth and Probability." Article written in 1926 and published in many places, including *Studies in Subjective Probability*, 2nd ed., edited by Henry E. Kyburg, Jr. and Howard E. Smokler. Huntington, New York: Krieger, 1980.

Roush, Sherrilyn. "Positive Relevance Defended." *Philosophy of Science* 71 (2004): 110–116.

Salmon, Wesley C. "Confirmation and Relevance." In *Minnesota Studies in the Philosophy of Science*. Vol. VI: *Induction, Probability, and Confirmation*, edited by Grover Maxwell and Robert M. Anderson Jr. Minneapolis: University of Minnesota Press, 1975.

Skyrms, Brian. *Choice and Chance*. 4th ed. Belmont, CA: Wadsworth, 2000.

Stalker, Douglas, ed. *Grue: Essays on the New Riddle of Induction*. Chicago: Open Court, 1994.

*Patrick Maher (2005)*