Litigation, Social Science Role in
Judicial opinions have four elements: (1) jurisdiction (the court’s authority to adjudicate the dispute); (2) findings of fact (what happened); (3) conclusions of law (what laws apply to these facts); and (4) the order (what the court directs to be done). Social science is used in all four parts. For example, social scientists debate the merits of existing and proposed law, and their writings may be brought to the attention of judges, legislators, and government administrators (Meier et al. 1986; Dixon and Gill 2002). They are most directly active, as participants, in the finding of fact, usually as expert witnesses. John L. Solow and Daniel Fletcher describe the role of the antitrust economist as “reaching conclusions about factual issues like the existence of market power or barriers to entry, and drawing causal links between firms’ actions, market outcomes, and claimed damages” (2006, p. 31).
The law categorizes two kinds of fact: legislative and adjudicative. Legislative fact concerns generalizations. For example, courts often want to know whether a screening device (for school admission, employment, or termination, for example) is “valid.” But the effectiveness of a device used to screen plaintiffs cannot be tested on them, because validation requires observing the success or failure of persons who achieved the positions the plaintiffs were denied. As a Fifth Circuit panel observed, “We fail to understand how passing scores conclusively establish the demographics of the qualified applicant pool if passing scores mean nothing with respect to predicting the quality of future firefighters” (Dean v. City of Shreveport 2006, p. 457). A validity study therefore is always legislative fact. The most famous legislative-fact argument in litigation, appearing in an appeals brief by Louis Brandeis, was assembled by his social-scientist sister-in-law, Josephine Goldmark. Brandeis argued that because women are physically and socially different from men, it is appropriate that they have special protective legislation (Muller v. Oregon 1908). In United States v. Virginia (1996), in contrast, plaintiffs’ experts argued convincingly that academically qualified women who could pass the same physical test as men needed only equal assessment.
To accept legislative fact, a court must extrapolate from one population to another. When faced with explicit recognition that such extrapolation is required, some judges have been reluctant to do it, despite the Supreme Court’s precedential acceptance of Kenneth Clark’s “doll studies,” which showed that both white and black children prefer to play with white dolls. Clark had tested some of the children at issue in Brown v. Board of Education (1954), which held that race is an unconstitutional basis for school assignment, but the Court referred only to his larger studies of children not connected with the case (Beggs 1995). In contrast, a massive study contending that harsher sentences (particularly death sentences) are handed down more often in black-on-white murder cases (i.e., a black murderer of a white victim) than in cases involving any other racial combination was rejected because it was legislative fact; the Court required evidence that the particular sentence being challenged was tainted with racial bias (McCleskey v. Kemp 1987). In another example, the First Circuit Court of Appeals rejected a sociologist’s argument that “relies on evidence from one locality to establish the lingering effects of discrimination in another” (Wessman v. Gittens 1998, p. 804).
Adjudicative fact is particular. Much of what the social science expert does is devise ways to measure the concepts that seem relevant to the law. Redistricting, jury representation, antitrust, and equal employment opportunity issues inherently call for such social science fact. To show that juries are unrepresentative of “the community,” for example, one needs a measure of that community, as well as of the jurors. Even if population demographics are stable, individuals may come and go and not be available for jury duty. The Hispanic population may be younger and more likely to be single and renting, and therefore implicitly more mobile, than the non-Hispanic population. Because Hispanics tend to live in identifiable areas, one can describe their mobility directly from address-based data sources. Inferring the characteristics of a set of people from data describing their tract, block, or zip code is called geocoding.
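The kind of inference just described can be sketched in a few lines. The tract percentages and juror records below are entirely hypothetical; the point is only that a panel’s expected composition can be estimated from area-level data when individual-level data are unavailable.

```python
# Hypothetical illustration of geocoding-style inference: all tract
# figures and juror records below are invented for this sketch.

# Share of Hispanic residents by census tract (hypothetical figures).
tract_pct_hispanic = {"tract_101": 0.62, "tract_102": 0.15, "tract_103": 0.40}

# Jurors are known only by address, here already matched to a tract.
jurors = ["tract_101", "tract_101", "tract_102", "tract_103", "tract_102"]

# Expected number of Hispanic jurors, inferred from tract composition:
# each juror contributes the Hispanic share of his or her tract.
expected_hispanic = sum(tract_pct_hispanic[t] for t in jurors)
expected_share = expected_hispanic / len(jurors)

print(f"Expected Hispanic share of panel: {expected_share:.2%}")
```

A disparity between this expected share and the observed share of the venire is the kind of adjudicative fact an expert would then test for statistical significance.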
In another example, invalid implicit screening devices may be used to make employment decisions. An implicit device might be a prejudice held, perhaps unknowingly, by the decision-maker, for example, thinking that fat people are lazy. According to R. Matthew Wise, “social framework evidence is the product of social science research that an expert compiles and uses to construct a frame of reference for specific issues central to the resolution of a case” (2005, p. 548). That is, particular evidence offered by such an expert may be an amalgam, applying legislative fact to case-specific attributes. Whether that type of study will be admitted is always in question.
Three other distinctions loom large in explaining the role of the social scientist as expert witness. First, laws are about events, whereas data are almost always about situations. Changing one’s residence is an event, but most data describe the population in an area at one time, and then at another. Hiring and firing are events, but the majority of employment-discrimination analyses have compared situations, such as the racial composition of a firm’s employees with the racial composition of a subset of a proximate area’s workforce. In another example, one seldom observes the collusion implied in some antitrust charges; that event is inferred from situations, such as where stores are located and the prices of their goods.
The complaining party’s expert tries to explain the relationship between the outcome and the alleged events, excluding other, benign events that could have led to the same outcome. The defending party’s expert tries to show that benign events likely led to that outcome, and also argues that the plaintiffs’ descriptions of the situations are themselves incorrect or misleading. An example in which the definition of a variable determines the statistical result is McReynolds v. Sodexho Marriott Services, Inc. (2004), in which the plaintiffs’ expert did not measure “promotions” as the defendant firm defined them. Courts generally dislike plaintiffs’ analyses that have not considered alternative explanations for the events complained about. The defendant may prevail merely by criticizing the plaintiffs’ “proof.” For example: “A statistical analysis which fails to control for equally plausible non-discriminatory factors overstates the comparison group and, under the facts of this case, cannot raise a question of fact for trial regarding discriminatory impact” (Carpenter et al. v. The Boeing Company 2004, affirmed 2006).
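The Carpenter court’s concern can be illustrated with invented numbers. In the sketch below (all counts hypothetical), the two groups are hired at identical rates within each qualification level, yet their aggregate hiring rates differ sharply because the groups apply for different kinds of jobs; an analysis that ignored qualifications would overstate the disparity.

```python
# Hypothetical hiring data (all counts invented) showing why courts ask
# plaintiffs' experts to control for nondiscriminatory factors.

# (applicants, hires) by group and qualification level
data = {
    "group_A": {"qualified": (80, 60), "unqualified": (20, 5)},
    "group_B": {"qualified": (20, 15), "unqualified": (80, 20)},
}

def rate(applicants, hires):
    return hires / applicants

for group, strata in data.items():
    total_apps = sum(a for a, _ in strata.values())
    total_hires = sum(h for _, h in strata.values())
    print(group, "aggregate rate:", rate(total_apps, total_hires))
    for level, (a, h) in strata.items():
        # Within each stratum the two groups' rates are identical.
        print("  ", level, "rate:", rate(a, h))
```

Here the aggregate rates are 65 percent versus 35 percent, yet every within-stratum comparison is equal (75 percent for qualified applicants, 25 percent for unqualified ones); the unconditioned disparity reflects application patterns, not treatment.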
Second, the social scientist’s evidence is subject to legal distinctions, such as the difference between disparate treatment (in which actions are at issue) and disparate impact (in which the action’s effect is at issue). Measuring the “cost” of gasoline at its market price (opportunity cost), evaluating damages to resorts from oil spills by lost consumer surplus, failing to distinguish allowable from nonallowable behavior as causes for an outcome in antitrust litigation, and using the wrong basis for a survey are examples of legal mistakes made in social science evidence. See, for example, Rebel Oil Company, Inc. v. Atlantic Richfield Company (1996), In the Matter of Oil Spill by the Amoco-Cadiz off the Coast of France on March 16, 1978 (1992), Williamson Oil Co. v. Philip Morris (2003), Citizens Financial Group, Inc. v. Citizens National Bank of Evans City (2004), and Autozone v. Tandy Corp. (2001).
Third, although both an eyewitness and a social science witness may present adjudicative fact as they see it, the social scientist provides circumstantial evidence. (Social scientists would call it inferential evidence.) Therefore, social science results must be reported probabilistically. The social scientist cannot be “certain” in the way an eyewitness can.
However, eyewitness testimony, no matter how firmly held, is often wrong; it, too, is probabilistic. See United States v. Veysey (2003), citing Judge Frank Easterbrook’s discussion of probabilistic evidence in Branion v. Gramley (1988): “Much of the evidence we think of as most reliable is just a compendium of statistical inferences.”
Estimates of the fallibility of eyewitness identification (legislative fact) are sometimes allowed into evidence, sometimes not. Thus, although a social scientist cannot testify that “X did not do Y,” he may be called upon to testify that although Z says that X did Y, Z may be mistaken; or that such mistakes happen in such and such circumstances, more or less to such an extent. On the fallibility of eyewitness identification, see Munsterberg (1909), Loftus and Monahan (1980), and Bradfield and Wells (2000). Similarly, “forensic” experts often claim a certainty in their identifications (e.g., by handwriting or fingerprints) that social scientists find offensive. Social science research on the fallibility of forensic evidence is sometimes admitted in rebuttal, sometimes excluded.
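A small Bayesian calculation, using assumed numbers, shows why such testimony must be reported probabilistically: a witness who is accurate 80 percent of the time can still be more likely wrong than right when the event identified is rare.

```python
# Hypothetical numbers illustrating probabilistic eyewitness evidence.
# Both parameters below are assumptions for the sketch, not empirical
# estimates from any study.

base_rate = 0.15   # assumed prior probability that the identified party was present
accuracy = 0.80    # assumed witness accuracy, same for hits and false alarms

# Bayes' rule: P(present | witness identifies the party)
p_identified = accuracy * base_rate + (1 - accuracy) * (1 - base_rate)
p_correct = accuracy * base_rate / p_identified

print(f"P(identification is correct) = {p_correct:.2f}")
```

With these assumptions the identification is correct only about 41 percent of the time, despite the witness’s 80 percent accuracy; this is the sense in which an expert can testify to the extent of mistakes without contradicting the eyewitness directly.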
Geocoding is a powerful method of examining voting patterns, even though individual ballots remain secret. Social scientists have debated whether gerrymandering was driven by race (impermissible) or by party (permissible) in situations in which the outcome of the vote is known and the race of the voters is inferred. J. Morgan Kousser (1999) attributes the setting of district boundaries to the majority’s attempt to prevent minority successes. In Thornburg v. Gingles (1986) the Supreme Court found race too important a factor to let the redistricting stand; in Hunt v. Cromartie (2001) it found race not important enough to disallow the redistricting. (See Grofman 1998 for social science studies of judicial redistricting decisions.)
The federal judicial system had no formal rules of evidence until 1975; before then, judges usually accepted a witness as an “expert” if others in the same “field” regarded him as one (Frye v. United States 1923). Although the institution of rules of evidence gave judges the authority to determine the expertise of proffered experts, decisions at first were little affected. “Junk science” was sometimes determinative (Huber 1991). In Daubert v. Merrell Dow Pharmaceuticals, Inc. (1993) the Supreme Court held that the Federal Rules of Evidence superseded Frye. Trial judges were instructed to exclude expert testimony they found insufficiently reliable or unhelpful to a fact finder.
The decisions of the “Science Evidence Trilogy” (Daubert, General Electric Co. v. Joiner [1997], and Kumho Tire Co. v. Carmichael [1999]) now govern the use of scientific and technical testimony. Joëlle Anne Moreno (2004) describes the slowness with which Daubert principles have been applied to criminal cases, and the resulting bias against rebuttal experts offered by defendants. John V. Jansonius and Andrew M. Gould (1998) and Mark S. Brodin (2004) discuss the problem of applying the Science Evidence Trilogy to social science. Credentials remain important, and practice has not evolved to focus solely on the proposed testimony. For example, in Gary Price Studios, Inc. v. Randolph Rose Collection, Inc. (2006), a proffered expert’s method for evaluating damages was internally contradictory. The judge took pains to show that the witness was not otherwise qualified, when the rules would have allowed him to dismiss the testimony as clearly incorrect and therefore unhelpful.
Subjects amenable to expert study are expanding fast as imaginative attorneys find new ways to argue for their clients, and courts are increasingly willing to hear evidence from statistical models. For example, at one time adjudicative studies based on samples were not acceptable; see United States v. United Shoe Machinery Corp. (1953), in which Judge Wyzanski broke that tradition by drawing his own sample: “The Court arbitrarily selected from a standard directory of shoe manufacturers, the first 15 names that began with the first letter of the alphabet, the first 15 names that began with the eleventh letter of the alphabet, all 8 of the names that began with the twenty-first letter of the alphabet, and the first seven of the names that began with the twenty-second letter of the alphabet” (p. 305). Now, more than fifty years later, the parties’ experts would do the sampling, presumably more skillfully. Opposing experts may come from different fields. Some judges complain that they lack the skills to resolve technical disputes, and sometimes they engage their own experts, as the Rules of Evidence permit.
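Judge Wyzanski’s rule is deterministic rather than random, which is one reason a modern expert would sample differently. The sketch below applies an analogous letter-based rule to a hypothetical directory (the firm names are invented) and contrasts it with a probability sample.

```python
# Sketch of the letter-based selection rule Judge Wyzanski describes,
# applied to a hypothetical directory of manufacturer names, alongside
# the random sampling a modern expert would use instead.
import random
import string

# Hypothetical stand-in for the shoe-manufacturer directory:
# twenty invented firm names per initial letter.
directory = sorted(f"{letter}{i:03d} Shoe Co."
                   for letter in string.ascii_uppercase for i in range(20))

def wyzanski_sample(names):
    # First 15 names under the 1st letter (A), first 15 under the 11th (K),
    # all names under the 21st (U) -- 8 in the original directory -- and
    # the first 7 under the 22nd (V).
    rules = {"A": 15, "K": 15, "U": None, "V": 7}  # None means take all
    sample = []
    for letter, limit in rules.items():
        matches = [n for n in names if n.startswith(letter)]
        sample.extend(matches if limit is None else matches[:limit])
    return sample

def probability_sample(names, k, seed=0):
    # Every firm has an equal, known chance of selection.
    return random.Random(seed).sample(names, k)

picked = wyzanski_sample(directory)
alt = probability_sample(directory, len(picked))
print(len(picked), "firms by the letter rule;", len(alt), "by random sampling")
```

The letter rule is reproducible but gives firms under other initials no chance of selection; the probability sample supports the statistical inferences an expert would later defend.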
Beggs, Gordon J. 1995. Novel Expert Evidence in Federal Civil Rights Litigation. American University Law Review 45 (5): 1–75.
Bradfield, Amy L., and Gary L. Wells. 2000. The Perceived Validity of Eyewitness Identification Testimony: A Test of the Five Biggers Criteria. Law and Human Behavior 24 (5): 581–594.
Brodin, Mark S. 2004. Behavioral Science Evidence in the Age of Daubert: Reflections of a Skeptic. Boston College Law School Research Paper 24. http://lsr.nellco.org/bc/bclsfp/papers/24.
Brown v. Board of Education, 347 U.S. 483, 495 (1954).
Carpenter et al. v. The Boeing Company, 2004 WL 2661691 (2004).
Dean v. City of Shreveport, No. 04–31163 (5th Cir. 2006).
Dixon, Lloyd, and Brian Gill. 2002. Changes in the Standards for Admitting Expert Evidence in Federal Civil Cases since the Daubert Decision. Psychology, Public Policy, and Law 8: 251–308.
Grofman, Bernard, ed. 1998. Race and Redistricting in the 1990s. New York: Agathon Press.
Huber, Peter W. 1991. Galileo’s Revenge: Junk Science in the Courtroom. New York: Basic Books.
Hunt v. Cromartie, 526 U.S. 541 (1999); on remand, 133 F.Supp.2d 407 (E.D. N.C. 2000); reversed, 532 U.S. 234 (2001).
Jansonius, John V., and Andrew M. Gould. 1998. Expert Witnesses in Employment Litigation: The Role of Reliability in Assessing Admissibility. Baylor Law Review 50 (267): 282–286.
Kousser, J. Morgan. 1999. Colorblind Injustice: Minority Voting Rights and the Undoing of the Second Reconstruction. Chapel Hill: University of North Carolina Press.
Loftus, Elizabeth, and John Monahan. 1980. Trial by Data: Psychological Research as Legal Evidence. American Psychologist 35 (3): 270.
Meier, Paul, Jerome Sacks, and Sandy L. Zabell. 1986. What Happened in Hazelwood: Statistics, Employment Discrimination, and the 80% Rule. In Statistics and the Law, ed. Morris H. DeGroot, Stephen E. Fienberg, and Joseph B. Kadane, 1–48. New York: John Wiley.
Michelson, Stephan. 2006. The Expert: The Statistical Analyst in Litigation. Hendersonville, NC: LRA Press.
Moreno, Joëlle Anne. 2004. What Happens When Dirty Harry Becomes an (Expert) Witness for the Prosecution? Tulane Law Review 79 (1): 1–54.
Munsterberg, Hugo. 1909. On the Witness Stand. New York: Doubleday and Page.
Solow, John L., and Daniel Fletcher. 2006. Doing Good Economics in the Courtroom: Thoughts on Daubert and Expert Testimony in Antitrust. Journal of Corporation Law 31: 489–502.
United States v. United Shoe Machinery Corp., 110 F.Supp. 295 (D.Mass. 1953).
United States v. Veysey, 334 F.3d 600 (2003).
Wessman v. Gittens, 160 F.3d 790 (1998).
Wise, R. Matthew. 2005. From Price Waterhouse to Dukes and Beyond: Bridging the Gap between Law and Social Science by Improving the Admissibility Standard for Expert Testimony. Berkeley Journal of Employment and Labor Law 26: 545–581.