The validity of scientific or medical studies, and their capacity to generalize, is often put at risk by the introduction of bias. Such bias results from systematic, nonrandom effects that, even in a large study, produce an incorrect answer by weakening, distorting, or spuriously creating a relation between a risk factor or intervention and the observed outcome. It may arise, for example, when the study population differs from the population to which the results are intended to generalize. Because bias can jeopardize study validity, researchers must recognize this potential and reduce its effects through study design, analysis, and interpretation. Controlled laboratory experiments and randomized clinical trials are less prone to bias than are observational studies such as cohort or case-control studies, but this protection extends only to a limited set of conclusions, and bias must be addressed in all studies.
There are many types of bias, which can be intentional or unintentional, and events or features that bias one study may have no biasing effect on another. Biases can result from selection effects (e.g., the sampling plan leaves out a subgroup, overrepresents a subgroup, or has more complete follow-up for a subgroup [the healthy worker effect]); differential measurement (e.g., cancer cases provide a more accurate family history or exposure history than do controls); measurement error (e.g., the recorded and actual exposures to cigarette smoke differ); and a host of other factors.
Bias is a loaded term in that not all bias is bad. For example, in small studies, use of a statistically biased estimate (an estimate that on average does not equal the population value) can have substantially lower variance than the unbiased estimate and thus be preferred. Regression techniques rely on this trade-off between variance and bias to decide whether entering additional explanatory variables is worthwhile.
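The variance-bias trade-off described above can be illustrated with a small simulation. The sketch below (illustrative numbers only) estimates a population mean two ways: with the unbiased sample mean, and with a deliberately biased shrinkage estimator that pulls the sample mean toward zero, the idea underlying ridge regression and other shrinkage methods. When the true mean is small relative to the sampling noise, the biased estimator has the smaller mean squared error.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 0.5, 2.0, 10  # small true mean, small sample: where bias can pay off

# Repeat the experiment many times and track both estimators.
plain, shrunk = [], []
for _ in range(50_000):
    x = rng.normal(mu, sigma, size=n)
    plain.append(x.mean())          # unbiased sample mean
    shrunk.append(0.5 * x.mean())   # biased: shrunk halfway toward zero

plain, shrunk = np.array(plain), np.array(shrunk)

print(f"mean of sample mean:    {plain.mean():.3f}  (unbiased for {mu})")
print(f"mean of shrunk estimate: {shrunk.mean():.3f}  (biased)")
print(f"MSE, sample mean: {np.mean((plain - mu) ** 2):.4f}")
print(f"MSE, shrunk:      {np.mean((shrunk - mu) ** 2):.4f}")
```

Despite being systematically too small on average, the shrunken estimate sits closer to the truth in mean squared error because its variance is only a quarter of the sample mean's.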
Additional examples of bias include the following:
- Conscious selection: A randomized clinical trial requires participants to have the disease of interest, but not be too ill. The treatment comparison is internally valid, but generalizing findings to all diseased individuals may introduce a bias.
- Regression dilution: Reducing elevated blood pressure is known to reduce the risk of a myocardial infarction. However, blood pressure is measured with error, and regression dilution produces an attenuated (biased) relation between the intervention and risk.
- Dropout bias: For an interesting example of bias, consider a study of the effects of coaching on SAT scores, reporting that students completing the coaching program averaged a fifty-point-higher score on their next SAT exam than those who dropped out. This result is unbiased in comparing completers with noncompleters; however, it is positively biased in assessing the effect of the coaching program on all who start it, irrespective of whether they complete it.
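The regression-dilution example in the list above can be sketched as a simulation (illustrative numbers only, not real blood-pressure data). Classical measurement error in the exposure attenuates the fitted slope by the reliability ratio var(true) / (var(true) + var(error)):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
true_slope = 1.0

# True (centered) exposure and an outcome depending linearly on it.
x_true = rng.normal(0.0, 10.0, size=n)
y = true_slope * x_true + rng.normal(0.0, 5.0, size=n)

# The observed exposure is the true value plus measurement error.
sigma_err = 10.0
x_obs = x_true + rng.normal(0.0, sigma_err, size=n)

def ols_slope(x, y):
    """Ordinary least-squares slope of y regressed on x."""
    return np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

# Here the reliability ratio is 100 / (100 + 100) = 0.5, so the
# slope fitted to the noisy exposure is roughly halved.
print(f"slope using true exposure:     {ols_slope(x_true, y):.3f}")
print(f"slope using measured exposure: {ols_slope(x_obs, y):.3f}")
```

The relation is not gone, merely diluted: the noisier the measurement, the closer the fitted slope is pulled toward zero.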
Other types of bias typically encountered in epidemiologic research, particularly those employing observational designs, include recall and observer bias. Recall bias arises if one group systematically over- or underreports information about an exposure or risk factor in comparison to the other group. Observer bias occurs if one group is systematically "observed" and reported to behave in a manner that is different from the other group.
Careful design and conduct of studies and careful interpretation of results are necessary to reduce or eliminate bias. Minimizing bias in design and conduct is preferable to relying on post hoc statistical "cures" such as covariate adjustment and causal modeling. These powerful techniques are indispensable in analyzing observational studies and can be used to "mop up" some bias in designed experiments, but their effectiveness depends on model validity and expert tuning to the specific study.
Germaine M. Buck
Thomas A. Louis
(see also Case-Control Study; Causality, Causes, and Causal Inference; Cohort Studies; Observational Studies )
Rothman, K. J., and Greenland, S. (1998). Modern Epidemiology, 2nd edition. Philadelphia, PA: Lippincott-Raven.
bi·as / ˈbīəs/ • n. 1. prejudice in favor of or against one thing, person, or group compared with another, usually in a way considered to be unfair: there was evidence of bias against foreign applicants | [in sing.] a systematic bias in favor of the powerful. ∎ [in sing.] a concentration on or interest in one particular area or subject: a discernible bias toward philosophy. ∎ Statistics a systematic distortion of a statistical result due to a factor not allowed for in its derivation. 2. an edge cut obliquely across the grain of a fabric. 3. in some sports, such as lawn bowling, the irregular shape given to a ball. ∎ the oblique course that such a shape causes a ball to run. 4. Electr. a steady voltage, magnetic field, or other factor applied to an electronic system or device to cause it to operate over a predetermined range. • v. (bi·ased, bi·as·ing or bi·assed, bi·as·sing) 1. [tr.] (usu. be biased) show prejudice for or against (someone or something) unfairly: readers said the paper was biased toward the conservatives | [as adj.] (biased) a biased view of the world. ∎ influence unfairly to invoke favoritism: her well-rehearsed sob story failed to bias the jury. 2. give a bias to: bias the ball. PHRASES: cut on the bias (of a fabric or garment) cut obliquely or diagonally across the grain.
1. The d.c. component of an a.c. signal.
2. The d.c. voltage used to switch on or off a bipolar transistor or diode (see forward bias, reverse bias), or the d.c. gate-source voltage used to control the d.c. drain-source current in a field-effect transistor. The word is also used as a verb: to bias a device is to apply such a voltage.
3. The d.c. voltage or current used to set the operating point in linear amplifiers.
4. In statistical usage, a source of error that cannot be reduced by increasing sample size. It is systematic as opposed to random error.
Sources of bias include (a) bias in sampling, when members of the sample are not fully representative of the population being studied; (b) nonresponse bias in sample surveys, when an appreciable proportion of those questioned fail to reply; (c) question bias, a tendency for the wording of the question to invite an incorrect reply; (d) interviewer bias, a problem of personal interviewing when respondents try to reply in the way the interviewer is thought to expect.
A narrower definition of bias in statistical analysis (see statistical methods) is the difference between the mean of an estimating formula and the true value of the quantity being estimated. The estimate

  (1/n) Σ (xᵢ − x̄)²

for the variance of a population is biased, but is unbiased when n is replaced by (n − 1).
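A quick simulation makes this bias visible; note that it does not shrink with more repetitions, only with a different divisor. The sketch below (illustrative numbers only) uses NumPy's `np.var`, which divides by n by default and by n − 1 when `ddof=1`:

```python
import numpy as np

rng = np.random.default_rng(2)
sigma2 = 9.0  # true population variance
n = 4         # tiny sample, where the bias is pronounced

# Average each estimator over many repetitions: dividing by n
# underestimates sigma^2 by the factor (n-1)/n; dividing by n-1
# is unbiased.
est_n, est_n1 = [], []
for _ in range(50_000):
    x = rng.normal(0.0, 3.0, size=n)
    est_n.append(np.var(x))            # ddof=0: divides by n
    est_n1.append(np.var(x, ddof=1))   # divides by n-1

print(f"divide by n:   {np.mean(est_n):.3f}  (expect about {(n - 1) / n * sigma2:.2f})")
print(f"divide by n-1: {np.mean(est_n1):.3f}  (expect about {sigma2:.2f})")
```

With n = 4 the biased estimator averages roughly three-quarters of the true variance, exactly the (n − 1)/n factor, no matter how many samples are averaged.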
5. (excess factor) See floating-point notation.
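The excess-factor sense above refers to the biased (offset) exponent in floating-point formats: IEEE 754 single precision stores the exponent as an unsigned field with an excess of 127, so the true exponent is the stored value minus 127. A short sketch using only the standard library (field layout per IEEE 754 binary32):

```python
import struct

def float32_parts(x: float):
    """Unpack an IEEE 754 single-precision float into its sign bit,
    stored (biased) exponent field, and fraction field."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31
    stored_exp = (bits >> 23) & 0xFF   # 8-bit biased exponent
    frac = bits & 0x7FFFFF             # 23-bit fraction
    return sign, stored_exp, frac

BIAS = 127  # excess-127: true exponent = stored exponent - 127

for x in (1.0, 2.0, 0.5):
    sign, stored, frac = float32_parts(x)
    print(f"{x}: stored exponent {stored}, true exponent {stored - BIAS}")
```

Storing the exponent in excess form lets floats with the same sign be compared as plain unsigned integers, since larger stored exponents always mean larger magnitudes.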
A predisposition or a preconceived opinion that prevents a person from impartially evaluating facts that have been presented for determination; a prejudice.
A judge who demonstrates bias in a hearing over which he or she presides has a mental attitude toward a party to the litigation that hinders the judge from supervising fairly the course of the trial, thereby depriving the party of the right to a fair trial. A judge may recuse himself or herself to avoid the appearance of bias.
If, during the voir dire, a prospective juror indicates bias toward either party in a lawsuit, the juror can be successfully challenged for cause and denied a seat on the jury.
Hence bias vb. XVII.