R. A. Fisher (1930) proposed a statistical method for obtaining from observed data a probability distribution concerning a parameter value; he called the distribution a fiducial probability distribution. The theory of confidence intervals, as developed by J. Neyman, was initially presented in the literature as a clarification and development of fiducial probability [see ESTIMATION, article on CONFIDENCE INTERVALS AND REGIONS]. Fisher denied the equivalence and in his subsequent theoretical papers developed and extended fiducial probability.
As an example, consider a sample of independent measurements, X1, …, Xn, on a physical characteristic μ, and suppose that the measurement error is normally distributed with mean 0 and known variance σ₀². Fisher requires that fiducial inference be based on the simplest statistic containing all the information about the parameter, in this case on the sample mean X̄. [The sample mean here is a minimal sufficient statistic; see SUFFICIENCY.] The expression W = X̄ − μ, involving the statistic X̄ and the characteristic μ, has a known distribution: normal with mean 0 and variance σ₀²/n. In an application of the method, the value of X̄ is obtained and substituted in the expression W = X̄ − μ; the expression is solved for μ in terms of W, giving μ = X̄ − W; the fiducial distribution of μ then derives from the known distribution of W: μ is normal with mean X̄ and variance σ₀²/n. A 95 per cent fiducial interval is X̄ ± 1.96σ₀/√n.
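The calculation above can be sketched in a few lines of Python. The data values and σ₀ below are assumed for illustration (they are not data from the article); they are chosen so that the sample mean is 163.9, matching the later numerical example.

```python
from statistics import NormalDist, mean

# Hypothetical measurements; sigma0 is the assumed known error s.d.
x = [163.1, 164.6, 163.4, 164.8, 163.6]
sigma0 = 1.0
n = len(x)
xbar = mean(x)

# Fiducial distribution of mu: normal with mean xbar and s.d. sigma0/sqrt(n).
z = NormalDist().inv_cdf(0.975)        # 97.5th percentile of the standard normal
half_width = z * sigma0 / n ** 0.5
interval = (xbar - half_width, xbar + half_width)
print(interval)
```

The interval is exactly the X̄ ± 1.96σ₀/√n of the text, with 1.96 replaced by the precise normal percentile.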
Fisher claimed that a fiducial probability statement has the same meaning as an ordinary probability statement. In the example, suppose that the 95 per cent fiducial interval as calculated in a specific application is 163.9 ± 0.8. The fiducial statement is that there is 95 per cent probability that the unknown value of μ lies in this interval. The interval 163.9 ± 0.8 is also a 95 per cent confidence interval, but as a confidence interval its interpretation is different [see ESTIMATION, article on CONFIDENCE INTERVALS AND REGIONS]. Confidence methods and fiducial methods do not, however, always lead to the same numerical results.
The proponents of the confidence method claim that in this context probability statements concerning μ cannot be made; the value of μ is something that exists: either it is in the interval or it is not, and we don’t know which.
The proponents of the fiducial method reply that probabilities concerning realized values are commonplace: in the play of card games, for example, a player may observe his own hand and perhaps other cards (say, those already played) and make a probability statement concerning the distribution of cards in the concealed hands.
The rejoinder is that μ did not arise from a random process such as the card shuffling and dealing. The relevance of this rejoinder is perhaps the key element in criticisms of the fiducial method.
In more complex problems the fiducial method may give a result different from that of the confidence method. In one prominent problem mentioned below, the Behrens-Fisher problem, the fiducial method gives an answer where confidence methods have not yet produced an entirely satisfactory result.
In his original paper on the fiducial method, Fisher (1930) considered a statistic, T, obtained by the maximum likelihood method for estimating a parameter, θ. Let F(T, θ) be the cumulative distribution function for the statistic T. The probability density function for T is obtained by differentiating with respect to T: f(T, θ) = ∂F(T, θ)/∂T. Correspondingly, the fiducial density function for θ, given an observed value for T, is obtained by differentiating with respect to θ: g(θ, T) = −∂F(T, θ)/∂θ.
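Fisher's recipe can be checked numerically in the known-variance normal case of the first example, where F(T, μ) = Φ((T − μ)√n/σ₀) and −∂F/∂μ should recover the normal fiducial density already obtained by the pivotal argument. The following sketch uses assumed numerical values and only the standard library.

```python
from statistics import NormalDist

# Assumed example values: n measurements, known s.d. sigma0, observed mean T.
n, sigma0, T = 5, 1.0, 163.9
scale = sigma0 / n ** 0.5

def F(t, mu):
    # C.d.f. of the sample mean at t, for parameter value mu.
    return NormalDist().cdf((t - mu) / scale)

mu = 164.2
h = 1e-6
g_numeric = -(F(T, mu + h) - F(T, mu - h)) / (2 * h)  # Fisher's -dF/dmu
g_exact = NormalDist(T, scale).pdf(mu)                # normal fiducial density
print(g_numeric, g_exact)
```

The central difference agrees with the density of a normal distribution centered at T with standard deviation σ₀/√n, as the pivotal derivation in the first example requires.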
Fisher illustrated the method with the correlation coefficient r of a sample from a bivariate normal distribution with population correlation coefficient ρ.
For more complex problems, Fisher proposed the use of a pivotal quantity, W = h(T, θ), a function of the statistic T and the parameter θ that has a fixed known distribution regardless of the value of θ. For the first example, W = X̄ − μ is a pivotal quantity. In an application, the observed value of the statistic T is substituted in the expression W = h(T, θ); the parameter θ is expressed in terms of W; and the fiducial distribution of θ is obtained from the known distribution of W.
Fisher’s original method for obtaining a fiducial distribution is a special case of the pivotal method. As a function of a continuous statistic, T, the cumulative distribution function W = F(T, θ) has a uniform distribution on the interval (0, 1); this relationship is the probability integral transformation. The fiducial density of θ for fixed T is obtained from the uniform distribution of W in the same way as the density of T for fixed θ is obtained by differentiation.
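The uniformity claim behind the probability integral transformation is easy to verify by simulation. In this sketch T is taken (as an assumption, for concreteness) to be standard normal for a fixed θ; applying its own c.d.f. to simulated values of T should produce draws spread uniformly over (0, 1).

```python
import random
from statistics import NormalDist, mean

random.seed(1)
nd = NormalDist(0.0, 1.0)
# W = F(T, theta) for T drawn from the distribution with c.d.f. F(., theta).
w = [nd.cdf(random.gauss(0.0, 1.0)) for _ in range(20000)]
print(mean(w))   # near 0.5, the mean of a Uniform(0, 1) variable
```

The simulated values all lie in (0, 1) and average close to 1/2, consistent with the uniform distribution the pivotal argument relies on.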
As a second example, consider a random sample, X1, …, Xn, from a normal distribution with mean μ and variance σ², both unknown, and suppose that interest centers on the parameter μ. The quantity t = √n(X̄ − μ)/sx, using the sample mean X̄ and sample standard deviation sx, has a known distribution, the t-distribution on n − 1 degrees of freedom. In an application, the values of X̄ and sx are substituted and the parameter is solved for: μ = X̄ − t·sx/√n. This equation gives a fiducial distribution for μ that is of t-distribution form (n − 1 degrees of freedom), located at X̄ and scaled by the factor sx/√n.
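A 95 per cent fiducial interval from this t pivotal can be sketched as follows. The data are assumed illustrative values, and the t percentile is taken from standard tables rather than computed.

```python
from statistics import mean, stdev

x = [163.1, 164.6, 163.4, 164.8, 163.6]   # assumed example data
n = len(x)
xbar, s = mean(x), stdev(x)               # stdev uses divisor n - 1
t975 = 2.776                              # 97.5th percentile of t on n - 1 = 4 d.f. (tables)
half_width = t975 * s / n ** 0.5
print((xbar - half_width, xbar + half_width))
```

The interval has the same form as in the known-variance case, with sx replacing σ₀ and the wider t percentile replacing the normal 1.96 to account for the estimated variance.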
The Behrens-Fisher problem is an extension of this example. Consider a first random sample, X1, …, Xn, from a normal distribution with mean μx and variance σ²x and a second, independent random sample, Y1, …, Ym, from a normal distribution with mean μy and variance σ²y. The Behrens-Fisher problem concerns inference about the parameter difference, μx − μy. The fiducial method gives a distribution described by μx = X̄ − t1·sx/√n for μx and a distribution described by μy = Ȳ − t2·sy/√m for μy. (Here t1 and t2 are independent t variables with n − 1, m − 1 degrees of freedom.) The fiducial distribution for μx − μy is that of the difference (X̄ − Ȳ) − (t1·sx/√n − t2·sy/√m); some percentage points are given in Fisher and Yates (1949, p. 44).
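The fiducial distribution of the difference has no simple closed form, but it can be approximated by Monte Carlo: draw t1 and t2 independently and form the difference above. All numerical inputs below are assumed illustrative values, not data from the article, and the t variables are generated from their normal/chi-square representation using only the standard library.

```python
import random

random.seed(7)

def t_draw(df):
    # A t variable on df degrees of freedom: Z / sqrt(chi-square(df)/df).
    z = random.gauss(0.0, 1.0)
    chi2 = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(df))
    return z / (chi2 / df) ** 0.5

# Assumed summary statistics for the two samples.
n, m = 8, 12
xbar, ybar, sx, sy = 10.0, 8.5, 1.2, 2.0

draws = [(xbar - ybar) - (t_draw(n - 1) * sx / n ** 0.5
                          - t_draw(m - 1) * sy / m ** 0.5)
         for _ in range(20000)]
draws.sort()
lo, hi = draws[int(0.025 * len(draws))], draws[int(0.975 * len(draws))]
print((lo, hi))   # approximate 95 per cent fiducial interval for mu_x - mu_y
```

The simulated percentage points play the role of the tabulated values in Fisher and Yates; the interval is centered near the observed difference X̄ − Ȳ.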
For many problems involving normal and chi-square distributions, the fiducial distribution has the form of a Bayesian posterior distribution as based on a prior distribution with uniformity characteristics [see BAYESIAN INFERENCE].
Fisher (1956) considers a wide range of statistical problems and derives the corresponding fiducial distributions.
A central criticism of the fiducial method has been concerned with whether fiducial probabilities are in fact probabilities in an acceptable sense. Some recent analysis, mentioned later, has clarified this question.
Other criticism seems to fall under three headings. First, fiducial probabilities in some examples may not add or integrate to a total of 1. James (1954) and Stein (1959) produce examples that can yield fiducial distributions that do not integrate to 1. Second, in some examples more than one reasonable pivotal quantity may be present; these can lead to several inconsistent fiducial distributions (see Creasy 1954; Fieller 1954; Mauldon 1955). In other examples no reasonable pivotal quantity may be present. Third, if a fiducial distribution from a collection of data is used as a prior distribution for a Bayesian analysis on a second collection of data, the resulting distribution may be different from the fiducial distribution based on the combined collection of data [see Lindley 1958; see also BAYESIAN INFERENCE].
Fraser (1961) uses transformations to investigate fiducial probability. The transformation approach applies to a large proportion of Fisher’s examples, and it introduces an additional range of problems for which fiducial distributions can be obtained.
In a later paper (Fraser 1966) the emphasis in the transformation approach is focused on error variables. Consider the example involving a sample of measurements, X1, …, Xn, on a physical quantity μ. Let e be a variable describing the error introduced by the measuring instrument: in the example, e is normally distributed with mean 0 and variance σ₀². A measurement Xi can then be expressed in the form Xi = μ + ei. Correspondingly, the sample mean takes the form X̄ = μ + ē, where ē is normally distributed with mean 0 and variance σ₀²/n. Now consider an application and suppose there is no information concerning μ. With no information concerning μ, there is no information concerning ē other than that describing its distribution. Probability statements can then be made concerning the unknown ē just as the card player makes statements concerning the realized but unrevealed cards in his opponents’ hands.
Suppose the normal distribution of ē, with variance σ₀²/n, gives a 95 per cent probability for ē lying in the interval 0.0 ± 0.8. Then, with an observed X̄ = 163.9, the probability statement concerning ē is equivalent to the statement that μ is in the interval 163.9 ± 0.8 with probability 95 per cent.
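The transfer of the probability statement from ē to μ can be checked directly. In this sketch the sample size is assumed, and σ₀ is chosen so that the 95 per cent half-width comes out at 0.8, matching the numbers in the text.

```python
from statistics import NormalDist

n = 6                                        # assumed sample size
z = NormalDist().inv_cdf(0.975)
sigma0 = 0.8 * n ** 0.5 / z                  # chosen so the half-width is 0.8
scale = sigma0 / n ** 0.5                    # s.d. of the mean error ebar

# P(-0.8 <= ebar <= 0.8) under the distribution of ebar.
p = NormalDist(0.0, scale).cdf(0.8) - NormalDist(0.0, scale).cdf(-0.8)

# The same statement re-expressed for mu, given the observed mean.
xbar = 163.9
interval = (xbar - 0.8, xbar + 0.8)
print(p, interval)
```

The probability attached to the event ē ∈ 0.0 ± 0.8 carries over unchanged to the event μ ∈ 163.9 ± 0.8, since μ = X̄ − ē with X̄ observed.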
This analysis involving error variables applies to many of Fisher’s examples, and it extends to other problems. The name structural probability has been introduced (Fraser 1966) to distinguish this probability in the cases where the method conflicts with the fiducial method. None of the criticisms mentioned concerning fiducial probability apply to structural probability.
D. A. Sprott (1964) uses a more general class of transformations to analyze a wider range of fiducial distributions.
Some alternative methods have been proposed for obtaining probability distributions concerning parameter values: Dempster (1963; 1966) proposes direct probabilities, and Verhagen (1966) proposes induced probabilities.
D. A. S. FRASER
A survey of fiducial methods and criticisms may be found in Fraser 1964.
CREASY, MONICA A. 1954 Limits for the Ratio of Means. Journal of the Royal Statistical Society Series B 16: 186–194.
DEMPSTER, A. P. 1963 On Direct Probabilities. Journal of the Royal Statistical Society Series B 25: 100–110.
DEMPSTER, A. P. 1966 New Methods for Reasoning Towards Posterior Distributions Based on Sample Data. Annals of Mathematical Statistics 37: 355–374.
FIELLER, E. C. 1954 Some Problems in Interval Estimation. Journal of the Royal Statistical Society Series B 16: 175–185.
FISHER, R. A. (1930) 1950 Inverse Probability. Pages 22.527a-22.535 in R. A. Fisher, Contributions to Mathematical Statistics. New York: Wiley. → First published in Volume 26 of the Proceedings of the Cambridge Philosophical Society.
FISHER, R. A. (1956) 1959 Statistical Methods and Scientific Inference. 2d ed., rev. New York: Hafner; London: Oliver & Boyd.
FISHER, R. A.; and YATES, F. (1938) 1949 Statistical Tables for Biological, Agricultural, and Medical Research. 3d ed., rev. & enl. New York: Hafner; London: Oliver & Boyd.
FRASER, D. A. S. 1961 The Fiducial Method and Invariance. Biometrika 48: 261–280.
FRASER, D. A. S. 1964 On the Definition of Fiducial Probability. International Statistical Institute, Bulletin 40, part 2: 842–856.
FRASER, D. A. S. 1966 Structural Probability and a Generalization. Biometrika 53: 1–9.
JAMES, G. S. 1954 Discussion on the Symposium on Interval Estimation. Journal of the Royal Statistical Society Series B 16: 214–218.
LINDLEY, D. V. 1958 Fiducial Distributions and Bayes’ Theorem. Journal of the Royal Statistical Society Series B 20: 102–107.
MAULDON, J. G. 1955 Pivotal Quantities for Wishart’s and Related Distributions, and a Paradox in Fiducial Theory. Journal of the Royal Statistical Society Series B 17: 79–85.
SPROTT, D. A. 1961 Similarities Between Likelihoods and Associated Distributions a Posteriori. Journal of the Royal Statistical Society Series B 23: 460–468.
SPROTT, D. A. 1964 A Transformation Model for the Investigation of Fiducial Distributions. International Statistical Institute, Bulletin 40, part 2: 856–869.
STEIN, CHARLES 1959 An Example of Wide Discrepancy Between Fiducial and Confidence Intervals. Annals of Mathematical Statistics 30: 877–880.
VERHAGEN, A. M. W. 1966 The Notion of Induced Probability in Statistical Inference. Division of Mathematical Statistics, Technical Paper No. 21. Unpublished manuscript, Commonwealth Scientific and Industrial Research Organisation, Melbourne, Australia.