Because no measurement is perfect, the true value of a measured variable or quantity cannot be obtained from any known detector or analysis. For example, the measurement of an environmental pollutant will be subject to errors in instrument design, sampling rate, and analytical methods. These errors lead to measured concentrations that may approach the true value but will never be 100 percent accurate, owing to random or systematic processes during detection. Variables or quantities that are subject to uncertainty include: (1) empirical metrics (e.g., concentrations); (2) constants (e.g., diffusion coefficients); (3) decision variables (e.g., acceptable/unacceptable limits); and (4) modeling domains or boundaries (e.g., grid size). Of these, the empirical metrics are usually the most uncertain, since each may depend on many independent variables that can individually or synergistically control the total uncertainty attached to a measurement.
There are different sources of uncertainty for a variable, including:
- Random error, which is derived from weaknesses or imperfections in measurement techniques or independent interferences.
- Systematic error, which is due to biases in the measurement, analytical technique, or models; these can be associated with calibration, detector malfunctions, or assumptions about processes that affect variables.
- Unpredictability, which is due to the inability to control the stability of a system or process, such as the partitioning of a semivolatile compound between the vapor and particle phase in the atmosphere.
Other sources of less importance include the lack of an empirical basis for individual values (theoretical predictions) and dependence/correlation of variables (interdependence of controlling variables in a system). Some uncertainties in variables or systems can be reduced, either by improving the methods of measurement and analysis or by improving the formulation of a model. Some nonreducible uncertainty, however, is inherent within the physical, chemical, or biological system that is being studied and can only be quantified by statistical analyses of data collected from the system.
A number of methods are used to quantify the uncertainty of a system. Analytical uncertainty analysis describes the output, or response, variable as a function of the uncertainty of each independent input variable that affects it. This technique is useful only for simple systems, however; more complex systems require sophisticated techniques to determine uncertainty and its propagation, such as Monte Carlo distributional methods, Latin hypercube sampling, and the stochastic response surface method.
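As a minimal sketch of the Monte Carlo approach, the uncertainty in a response variable can be propagated by repeatedly sampling each uncertain input and recomputing the output. The response function and the error magnitudes below are hypothetical, chosen only for illustration:

```python
import random
import statistics

# Hypothetical example: propagate input uncertainty through a simple
# response function C = m / V (concentration = mass / volume), where
# the mass and volume measurements each carry their own random error.
random.seed(42)

N = 100_000  # number of Monte Carlo draws

samples = []
for _ in range(N):
    mass = random.gauss(10.0, 0.2)    # mass in mg, with measurement error
    volume = random.gauss(2.0, 0.05)  # volume in L, with measurement error
    samples.append(mass / volume)     # response variable, in mg/L

mean = statistics.mean(samples)
sd = statistics.stdev(samples)
print(f"concentration = {mean:.3f} +/- {sd:.3f} mg/L")
```

The spread of the sampled outputs estimates the uncertainty of the response variable without requiring an analytical error-propagation formula, which is why such methods scale to complex systems.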
At times uncertainty is mistaken for variability. Variability consists of the range of values that truly can be ascribed to a variable within a system. In principle, variability is based upon the differences in a variable frequently found within a system (e.g., a population distribution or concentration pattern). It is based on the number and frequency of observations of one or more variables in the system, or on the probability of the occurrence of a specific value (e.g., concentration) in the system under consideration. In this case, the uncertainty would be the quantitative error around the measurement of a single value or all values frequently observed in the system.
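The distinction can be illustrated numerically. In this hypothetical sketch, the true values of a quantity vary across a system (variability), and each observation additionally carries measurement error (uncertainty); all numbers are invented for illustration:

```python
import random
import statistics

random.seed(0)

# Variability: the real spread of true concentrations across a system.
true_concentrations = [random.gauss(50.0, 10.0) for _ in range(1000)]

# Uncertainty: measurement error added to each individual observation.
measured = [c + random.gauss(0.0, 2.0) for c in true_concentrations]

variability = statistics.stdev(true_concentrations)  # spread of true values
observed_spread = statistics.stdev(measured)         # variability plus error
print(f"variability (true spread): {variability:.1f}")
print(f"observed spread (variability + measurement error): {observed_spread:.1f}")
```

Even if the measurement error were driven to zero, the variability would remain, which is why it must be characterized statistically rather than "reduced" by better instruments.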
Paul J. Lioy
(see also: Rates; Risk Assessment, Risk Management; Sampling; Statistics for Public Health)
Cullen, A. C., and Frey, H. C. (1999). Probabilistic Techniques in Exposure Assessment. New York: Plenum Press.
Doll, J. D., and Freeman, D. L. (1986). "Randomly Exact Methods." Science 234:1356–1360.
Iman, R. L., and Conover, W. J. (1980). "Small Sample Sensitivity Analysis Techniques for Computer Models, with an Application to Risk Assessment." Communications in Statistics, Part A: Theory and Methods 9:1749–1842.
Isukapalli, S. S.; Roy, A.; and Georgopoulos, P. G. (1998). "Stochastic Response Surface Methods (SRSM) for Uncertainty Propagation: Application to Environmental and Biological Systems." Risk Analysis 18:351–363.
—— (2000). "Efficient Sensitivity/Uncertainty Analysis Using the Combined Stochastic Response Surface Method and Automated Differentiation: Application to Environmental and Biological Systems." Risk Analysis 20:591–602.
Uncertainty Analysis in Forensic Science
Many decisions within forensic science are made in the face of uncertainty. As the world becomes increasingly complex, and with it the complexity of crimes and their investigation, forensic scientists face an escalating need to provide more and better statistical information in order to fight crime effectively. One of the major tasks confronting the forensic science community is to plan carefully so that the quantity and quality of the information obtained will be sufficient to solve crimes and convict criminals. Any mathematical value calculated to estimate an actual value, however, involves an uncertainty. Although uncertainty in the quantity and quality of information cannot be eliminated, it can be minimized by critical thinking, objectivity, and systematic measurement and examination of the facts.
In mathematical statistics, uncertainty is the estimated amount or percentage by which an observed or calculated value may differ from the actual value. In other words, the uncertainty of a calculated result is a measure of its closeness (or goodness) to the actual value. Without such a comparative measure, it would be impossible to judge the fitness of the value as a basis for making informed decisions in forensic science. For example, in the investigation of a drug seizure, a forensic chemist might find from chemical analysis of a white powder that 35.0 ± 1.0 percent of the tested powder is the narcotic drug cocaine. The plus-or-minus (±) 1.0 percent (sometimes called a margin of error) is the uncertainty associated with the chemist's result; that is, the actual value could lie anywhere from 34.0 to 36.0 percent cocaine, or one percentage point on either side of 35.0.
In this particular case, the chemist is wise to account for the fact that the equipment and instruments used to measure the concentration of cocaine in the tested powder are not perfectly accurate, so uncertainty arises in the measured value. Thus, uncertainty analysis in the field of forensic science, or in any other field for that matter, involves the procedures, methods, and tools of systematically accounting for every factor contributing to such uncertainties. It covers a wide range of topics that include probability and statistical variables, mathematical relationships and equations, and design and sensitivity of experiments.
The forensic purpose of uncertainty analysis is to evaluate the result of a particular measurement, in a particular laboratory, at a particular time, and, knowing that such measurements are not totally accurate, to attach explicit assumptions and approximations to those results. The most widely accepted and commonly used statistical approach to modeling uncertainty is probability theory, the branch of mathematics that deals with quantifying the likelihood that an experiment or event will have a particular outcome.
A common probability measure used to express uncertainty is the confidence interval, which is based on multiple runs of the same analysis. A confidence interval is a range around a measurement that indicates how precisely the measurement has been made. For instance, the forensic chemist who found that 35.0 ± 1.0 percent of the tested substance is cocaine might also report that, after repeated laboratory analyses of the substance, there is 95 percent certainty that the concentration of cocaine within the tested substance lies between 34.0 and 36.0 percent. The confidence level (in this case 95 percent) is a statistical term indicating how likely it is that the interval contains the actual value. In this case, the chemist is 95 percent confident that the actual concentration of cocaine (within the tested sample) lies between 34.0 and 36.0 percent. Confidence intervals can also be constructed at other confidence levels, such as 90 percent or 99 percent. With a 95 percent confidence interval, the chemist has a 5 percent chance of being wrong (and a 95 percent chance of being correct); with a 90 percent confidence interval, a 10 percent chance of being wrong; and with a 99 percent confidence interval, a 1 percent chance of being wrong.
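As an illustration, a 95 percent confidence interval of the kind described above can be computed from repeated analyses. The replicate values below are hypothetical stand-ins for the chemist's laboratory runs, not data from the text:

```python
import statistics

# Hypothetical replicate analyses of the powder (percent cocaine).
runs = [34.8, 35.4, 35.1, 34.6, 35.3, 35.0, 34.9, 35.2, 35.5, 34.7]

mean = statistics.mean(runs)
sem = statistics.stdev(runs) / len(runs) ** 0.5  # standard error of the mean

# Critical value for a 95% interval with n - 1 = 9 degrees of freedom
# (t = 2.262, from a Student's t table).
t_95 = 2.262
lower, upper = mean - t_95 * sem, mean + t_95 * sem
print(f"95% confidence interval: {lower:.2f}% to {upper:.2f}%")
```

A narrower interval at the same confidence level would indicate a more precise set of measurements; adding more replicate runs shrinks the standard error and therefore the interval.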
Evaluation of uncertainty is becoming more important within forensic science. Forensic test laboratories are increasingly required to include uncertainty analyses with measurement results under quality management standards such as the ISO 9000 series (ISO being the common short name for the International Organization for Standardization, the world's largest developer of standards). Several organizations, including the National Conference of Standards Laboratories and the International Organization for Standardization, are currently investigating ways to standardize and simplify the approach to uncertainty analysis within forensic science.
see also Forensic science; Quality control of forensic evidence; Statistical interpretation of evidence.