Statistical Interpretation of Evidence
Because the identification of forensic evidence is not always obvious, forensic scientists today do not simply identify evidence but must also interpret it statistically. Such evidence—that is, the ways and means by which disputed facts are proven true or false in a legal setting—comes in many forms. Evidence may be spoken or written by expert or ordinary witnesses, law enforcement officials, or other relevant persons; contained in physical objects obtained from crime scenes, victims, and suspects, such as glass fragments, fingerprints, and firearm marks; and found in physical materials recovered from forensic examinations, such as body fluids, paint, DNA samples, and drugs. There are many other types of evidence, nearly all of which contain variations (or uncertainties) between the observed or calculated value and the actual value when measured and compared. For example, there is little likelihood (as low as one chance in 10⁶⁰) that the fingerprint characteristics of one person would match the fingerprints of another person, but there is a far greater likelihood (about 34 percent) that if one person has blood type A+, another person has the same blood type.
The statistical interpretation of evidence, often performed by forensic statisticians, involves the evaluation and comparison of evidence found during crime scene examinations and in forensic laboratories, and identified from reference samples of suspects. Such matching of characteristics between crime scene and suspect evidence relies on the theory of probability, the branch of mathematics that deals with determining quantitatively the frequency of occurrence of an event. For example, if a coin is flipped 100 times, it is theoretically expected to land as heads 50 times and tails 50 times. Within probability, various statistical models and techniques—such as Bayes' theorem, deductive and inductive reasoning, graphical modeling, grouping, likelihood ratios, distributions, sampling, and significance tests—are used to help forensic scientists correctly evaluate and accurately interpret evidence that contains elements of uncertainty. For example, a suspect may be identified because a rare blood type (AB−) found at the crime scene matches the suspect's own blood type, which only about one percent of the U.S. population carries.
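The blood-type example above can be sketched numerically with Bayes' theorem in its odds form, where a likelihood ratio expresses the strength of the matching evidence. The figures here are illustrative assumptions, not case data: a matching type carried by one percent of the population, and prior odds of 1 to 99 from the other evidence in the case.

```python
# Minimal sketch of a Bayesian update for a matching rare blood type.
# All numbers are hypothetical, chosen only to illustrate the method.

def posterior_odds(prior_odds, likelihood_ratio):
    """Bayes' theorem in odds form: posterior odds = prior odds x LR."""
    return prior_odds * likelihood_ratio

# Likelihood ratio for the match:
# P(match | same source) / P(match | different source) = 1 / 0.01
lr = 1 / 0.01            # = 100

# Suppose the non-blood evidence puts the prior odds of a common
# source at 1 to 99 (a prior probability of one percent).
prior = 1 / 99

post_odds = posterior_odds(prior, lr)
post_prob = post_odds / (1 + post_odds)
print(round(post_prob, 3))   # posterior probability of about 0.503
```

The point of the sketch is that a likelihood ratio of 100 does not by itself mean "99 percent probability of guilt"; it merely multiplies whatever prior odds the rest of the evidence supports.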
Although used in both the nineteenth and twentieth centuries, the application of statistics to interpret evidence became especially important during the 1980s, when DNA profiling first became popular. In the years that followed, there was doubt within the legal community as to the reliability of such methods and the appropriateness of the often simplistic calculations involved. Eventually, the use of statistics for interpreting evidence brought a positive change in the way quantitative data was viewed from a legal standpoint. Presently, the legal community continues to ask the forensic science community for more and better statistics to interpret evidence in order to prove the innocence or guilt of suspected criminals.
Statistics, however, can hinder the solving of crimes when performed improperly. When subjective assessment is introduced—that is, evaluations that are biased or based on personal opinion—inaccurate conclusions can be drawn when comparing evidence characteristics. Subjective qualifiers, such as a "high chance" of guilt used by the prosecution and a "low chance" used by the defense, can often lead to angry and unproductive legal debates over whether to implicate or exonerate a suspect. As an example of biased statistical interpretation, suppose a trait found at a crime scene occurs in five percent of the population and the suspect possesses it. The prosecution could state that the suspect has a 95 percent chance of being guilty (an error often called the prosecutor's fallacy), while the defense could note that five percent of a population of, say, one million is 50,000 people, all of whom match, and declare that there is only a 1 in 50,000 chance that the suspect is guilty. Therefore, objective assessment—the unbiased interpretation of the evidence—is preferred because it requires the rational and sound application of statistics to accurately interpret the uncertainties of evidence.
Although the choice of the statistical method is a subjective one, once it is chosen, different forensic experts with identical data and the same statistical method will produce (theoretically) the same assessment and make the same interpretation of the evidence. Of course, there is still controversy about the assumptions underlying the choice of the method used and the interpretations made thereafter. Debate also exists as to how often statistics are misinterpreted or overvalued as evidence. Although the field of statistics seems to have the potential for uniformity, clarity, and impartiality, it is still in the embryonic stage of development within forensic science. Because of its immature nature there are still risks of misuse by forensic examiners, law enforcement officers, courtroom lawyers, judges, and juries. Incorrect statistical use of evidence still leads to problems such as unnecessary forensic tests, unwarranted appeals against court rulings, and even miscarriages of justice.
In order to make a correct assessment of all the variations within a particular case and to present the information in the most understandable way possible, forensic statisticians need a well-rounded knowledge of both the theory and the application of statistics to forensic evidence. Statisticians must be able to evaluate rationally both subjective opinions and objective analyses in order to check, criticize, and verify all the evidence. For the field of statistics to be successfully applied to the interpretation of evidence, it must be appropriately applied, unbiased in analyses and presentation, and intelligible to the people involved in all phases of the forensic investigation and legal proceedings.
see also Forensic science; Quality control of forensic evidence; Uncertainty analysis in forensic science.