Mistakes, Medical

With its report To Err Is Human: Building a Safer Health System, the Institute of Medicine (IOM) Committee on the Quality of Health Care in America performed a commendable public service. The report dramatized the extent of a hitherto under-appreciated public problem: harm to patients because of medical error. The report estimates that between 44,000 and 98,000 deaths occur each year due to adverse medical events, that one-half of these adverse events are preventable, that the total cost of these medical misadventures is between $17 billion and $29 billion, and that such events rank as the eighth leading cause of death in the United States.

The report does more than locate a problem largely unrecognized by the public. It points to faulty systems, rather than individuals' performance flaws, as the source of the majority of adverse events. The report also sets forward policy recommendations to ameliorate the problem. The IOM recommended a triad familiar to those who study safety and post-hoc accounts of accidents: (1) training to improve the performance of personnel, (2) developing new technologies to improve the performance of fallible human operators, and (3) implementing new procedures to improve the overall functioning of the healthcare delivery system. These changes will bring to medicine the philosophies and work routines of total quality improvement. The IOM report sets for itself the laudable operational goal of halving medical errors over five years. Success depends in large part on the providers of medical care accepting the IOM's diagnosis and implementing its treatment plan. There will be resistance on both fronts. No change will occur without a rethinking of how healthcare providers define their obligation to provide quality care.

Error as a Systems Problem

The IOM report defines error in a way most of those involved in patient care would find unfamiliar: "the failure of a planned action to be completed as intended (i.e., error of execution) or the use of the wrong plan to achieve an aim (i.e., error of planning)" (p. 28). This definition seems to ignore the uncertainty inherent in medical practice. "An adverse event is an injury caused by the medical management rather than the underlying condition of the patient. An adverse event attributable to error is a preventable adverse event" (IOM, p. 28). The IOM's definitions presuppose that what should be done is clear, that outcomes are unproblematically attributable to treatment alone, and that what constitutes an error is not subject to debate. Notably, Troyen Brennan, one of the researchers involved in the Harvard Medical Practice Study (HMPS), questioned whether errors or preventable adverse events are easily distinguishable from more innocent treatment failures (Brennan).

While the IOM report uncritically accepts the HMPS, along with subsequent replications and extensions of it, and uses the HMPS as the basis of its recommendations, researchers have raised multiple questions about the HMPS findings and their interpretation. The HMPS bases its estimates of adverse events and preventable adverse events on retrospective chart reviews. Death was among the criteria used to select charts for review. This raises the strong suspicion that both outcome and hindsight bias influenced reviewers' judgments of the appropriateness of care. Researchers have looked at physicians' responses to patient vignettes describing identical diagnoses and treatments but varying with respect to positive and negative outcomes. In these studies, doctors are more likely to find medical error in cases with negative outcomes. Even when raters are asked to pay no attention to outcomes, they still judge treatment with poor outcomes more negatively than when identical treatment has a positive outcome. The HMPS does not establish a direct link between specific errors and outcomes, nor does it address the possibility of attribution error or spurious causality. Finally, McDonald, Weiner, and Hui have suggested that counting deaths attributable to error, as in the IOM report, is too gross a measure. Many of those who died from the identified errors had terminal diagnoses and complex multi-system problems. A more precise measure of the burden of error may be days of life lost (McDonald, Weiner, and Hui). None of these criticisms suggests that medical error does not constitute a serious problem or that there is not substantial room for improving medical care systems. However, reservations about the methods and assumptions of the HMPS and the IOM report suggest that reducing medical error is more complex and may leave more room for debate than the IOM report acknowledges.

One goal of the IOM report is to shift attention away from individual professionals' performance and to focus on system performance. The report embraces normal accident theory, a blend of organizational and management theory, cognitive psychology, and human factors engineering, to understand and explain the occurrence of preventable adverse events (Perrow). The theory holds that modern technological systems are error-prone (Paget) and that we should think of certain mishaps as normal accidents. Errors and mistakes, with all their baleful consequences, seldom result solely from individual failings—what Charles Perrow, a leading proponent of this approach, calls ubiquitous operator error. Rather, errors and mistakes are embedded in the organization of complex technological work like medicine. The two structural features most important to the production of normal accidents (in medicine, preventable adverse events) are interactive complexity and tight coupling. That is, each component of the system is intrinsically complicated, and each component's performance affects the functioning of other system parts. Small deviations from expected performance ramify through the system in unpredictable ways, through unanticipated feedback loops, creating large consequences. For a complex technological undertaking such as medicine, this is an unpleasant fact.

The IOM report focuses on a rejoinder to normal accident theory, high reliability organization theory, to remedy the problem. This approach acknowledges that errors can never be eliminated and concentrates on the organizational features that allow workers to operate risky and complex technological systems, such as nuclear-powered aircraft carriers, with a minimum of untoward incidents. The theory relies on work structures that have redundancy and overlap; teams that encourage constant communication among and between the ranks; constant surveillance and monitoring for even the smallest deviation from expectations; flexible authority systems that permit even low-ranking workers to question those with the highest authority; a rich oral culture that constantly uses stories to remind workers of behavior that can create trouble; a reporting system that takes note of near misses, is constantly self-correcting, and is non-punitive when trouble arises; and technology designed to be user-friendly and to cue workers to avoid the most common errors (Roberts; Rochlin, La Porte, and Roberts; Weick; Weick and Roberts).

Error in Professional Culture

Through its pleas to end inaction regarding adverse events and its call to break the pattern of naming, blaming, and shaming engaged in by professionals, the IOM report acknowledges the need to change the shop-floor culture of medicine. Curiously, the IOM report neglects workplace studies of physician attitudes, beliefs, and behavior. As a result, the report ignores leverage points for, and barriers to, change in physician culture. Workplace studies of physicians concentrate on how doctors negotiate and understand the meaning of such terms as adverse event, preventable adverse event, and negligent error. Their meanings are not fixed but fluid and flexible, highly dependent on context.

One of the earliest discussions of medical mistakes, by Everett C. Hughes, suggests a rough calculus for the frequency of mistakes, based on the skill and experience of the worker and the complexity of the task. Because academic hospitals rely on front-line workers (students, residents, and fellows) who may have little experience, and because many of the clinical problems encountered deviate far from the routine, one might expect to find a fair number of mistakes and errors in such institutions. However, says Hughes, hospital work is organized to control and limit the occurrence of mistakes. The organization of physician work in teaching environments also reduces the recognition of error and makes responsibility and accountability difficult to pinpoint. Hughes describes a set of risk-sharing and guilt-shifting devices that obscure exactly where in a chain of events an error or mistake occurred. These work practices include supervision, cross-coverage, consultation, and case conferences. They make it harder to see and correct individual mistakes or, for that matter, system errors. Errors are a feature of the workplace, and an elaborate division of social and moral labor prevents mistakes and errors from coming plainly into view.

Eliot Freidson describes the social processes used within a group of physicians to bury mistakes and to sustain a structured silence about them. Freidson's results are striking given that the group he observed was self-consciously designed to maintain the highest imaginable professional standards. In a setting designed to maximize colleagues' surveillance of each other's behavior, Freidson found that peer monitoring and surveillance were unsystematic at best. Referral relations structured colleagues' knowledge of one another's performance. Knowledge gathered in this way was haphazard; the two main sources of information were patient gossip and colleague complaints. Regular procedures or mechanisms for evaluating colleague performance and sharing the results of such evaluations did not exist. Once an individual physician's knowledge of and dissatisfaction with the poor performance of another group member had crossed some threshold for action, few options were open. Freidson labeled the most immediately available informal action employed by group members the "talking to." Colleagues confront the offender, who either clears the air with a non-defensive response or increases distrust with a defensive one. If the results of a talking to were unsatisfactory, a physician could engage in a private boycott by refusing to refer additional patients to the offending colleague. The possibility of formally making a complaint and having a physician removed from the group existed but was so administratively cumbersome as not to be a realistic option. In Freidson's work we see that notions of error, mistake, and competence are conceived within the work group at the level of the individual, and that there is a general reluctance to deal with these issues through formal organizational measures.

Charles L. Bosk's Forgive and Remember: Managing Medical Failure examines how surgical residents learn to separate blameless errors from blameworthy mistakes in the course of their training. Errors appear blameless, largely, if they are seen as part of the normal learning process. Attending faculty anticipate that inexperienced residents will make some technical or judgmental mistakes. These errors are considered a normal consequence of providing opportunities to the unpracticed. Errors are blameworthy when, in the eyes of senior surgeons, it is difficult to sustain a claim that a resident acted in good faith. Bosk identified two types of blameworthy errors: (1) normative errors, which breach universal rules concerning physician behavior, and (2) quasi-normative errors, which mark a resident's failure to conform to an attending surgeon's cherished, but often idiosyncratic, way of doing things. A source of great confusion for residents is the fact that attending surgeons treat breaches of personal preferences as seriously as breaches of universal rules. Technical and judgmental errors, so long as they are not repeated, especially on a single rotation, are forgiven. Not so with normative and quasi-normative errors; residents who commit these breaches are often dismissed from training programs. This public punishment works just as Émile Durkheim (1933) long ago suggested: (1) as a general deterrent for the not yet corrupted; (2) as reinforcement of the norms of the group; and (3) as a device to increase solidarity among those who share a commitment to the community.

Each of the studies reviewed above has a different focus and emphasis. However, when they, and other similar research that concentrates on the dynamics of the work group, are assessed together, a number of themes emerge to which the recommendations of the IOM report do not give sufficient weight. These themes include the following:

  1. The inherent uncertainty of medical action—diagnosis and treatment are assessed in prospect, probabilistically. After action is taken, results are known and uncertainty evaporates. The relation between a treatment and its outcome, once so cloudy, now appears over-determined.
  2. The essentially contestable nature of error itself—everyone knows errors are untoward events whose occurrence needs to be minimized. What medical workers do not agree on is what happened and why. We can agree that errors, in general, are to be avoided, while disagreeing, in any particular instance, about whether a given action was an error.
  3. The medical profession tolerates normal error. Workers in the same occupation share the same difficulties and have an artful appreciation of all the factors that can create negative outcomes in the face of what otherwise looks like flawless technical performance. What medical workers have in common is an understanding of the ever present possibility for the unexpected negative outcome and a set of beliefs about work that allow such outcomes to be neutralized.

These themes underscore, on the one hand, that the IOM report is an attempt to encourage the medical profession to take more responsibility for its obligation to the larger society and, on the other, just how difficult that task is.

Perhaps these difficulties are seen most clearly in the recommendations to increase reporting of near misses. For such reporting to be effective, however, the participants in the current system have to be able to recognize the events that they need to report. Workplace studies of error demonstrate that workers' ability and/or willingness to do this should not be taken for granted. Inherent uncertainty, the essentially contested nature of error, and the normal tolerance for the risks of the workplace, when combined with the intense production pressure of hospital practice, all create barriers to seeing near misses. What is not seen cannot be reported. What is not reported cannot be learned from. Successful implementation of the IOM recommendation requires that the context of the workplace be taken into account.

Ethics and Medical Error

Two issues dominate the ethical concerns associated with mistakes in medicine: disclosure and accountability. However, as the preceding discussion reveals, a third matter deserves moral scrutiny: definitions of terms. We need to know what counts as error before we can conclude who has a duty to reveal what information, who has the right to receive information, and how professional and legal systems should respond to misadventure.

Classic thinking about mistakes has focused on process and outcome. People may proceed erroneously (begin the wrong operation, administer the wrong medication, fail to do something prescribed or indicated) and, through care or good luck, prevent or escape harm. On the other hand, things may expectedly work out poorly for the patient (e.g., the patient may die, as in the previous discussion) even though, upon close examination, no one omitted appropriate actions, committed inappropriate acts, or otherwise behaved wrongly. In many cases of adverse outcome, one simply finds a great deal of uncertainty about what happened and why. Medicine's incomplete understanding of disease and physiology leaves much unexplained or even inexplicable. At the very least, despite the human desire to eliminate doubt and fix blame, the world of human medicine leaves a great deal up in the air when one wishes to say that a doctor, nurse, pharmacist, or other healthcare worker erred or that a system failed. Finding egregious behavior is easy; the problems arise when an observer does not like what has happened but cannot readily point a finger at the cause.

Starting in the last quarter of the twentieth century, attitudes and practices regarding disclosure of clear-cut medical error changed from guild-like self-protectionism to more forthright, perhaps preemptive, truth-telling. That is, both medical ethicists and risk managers now counsel practitioners to tell patients or their legally authorized representatives (parents, guardians, among others) when an obvious error occurs. Few now suggest hiding an overdose, the administration of a mismatched blood product, or some clearly preventable difficulty in the operative field. Philosophers and lawyers take a pragmatic approach here. Not only do people want to know when something has gone wrong, and not only do some argue that wronged individuals have a right to know, but the consequences of failed cover-ups also include overwhelming anger and much larger jury awards. As Sissela Bok pointed out in Lying: Moral Choice in Public and Private Life, in a socially complex world, including that of modern medicine, lying just does not succeed.

Note, however, that the generally accepted admonition to tell the truth often fails to provide practical help. Did the surgical assistant pull too hard on the retractor, resulting in a lacerated artery and a much-prolonged operation for microvascular repair? Was this negligence or something about the patient's fragile tissues? If the patient's recovery is unimpeded, does it matter? Do patients and surrogates want to know every detail of what happened? Might full disclosure inappropriately undermine trust? While there might be objective agreement that the degree of disclosure should somehow follow the desires or psychological needs of patients, loved ones, and legal surrogates, it is not at all clear how one determines, in advance, how much an individual or family member wants to know in a given situation.

Regarding accountability, many problems remain. If the assistant in the hypothetical operation was a surgical intern scrubbing in on this kind of operation for the first time, how does that fact influence an assessment of whether she made a culpable mistake or an excusable error? The legal system usually acknowledges that trainees do not bear the same level of responsibility as their supervisors; much of the time, students and residents involved in a case are dropped as named defendants in malpractice actions. However, there are no reliable systems for determining how professionals or society should factor (in)experience into judgments about moral responsibility for things going awry. Bosk, in his book on surgical training, Forgive and Remember: Managing Medical Failure, distinguishes between technical and normative error. This distinction helps us understand that surgeons use social and behavioral standards to assess residents' ethics, but it is not clear how the law or patients can or ought to use such an approach.

How best to respond to ethically suspect or clearly wrong behavior must also be considered. Answers here might take into account context as well as the specific acts or omissions. How might sleep deprivation play a role in evaluating someone's mistake? Would it or should it matter if the individual's lack of sleep resulted from staying on duty in the middle of a snowstorm that prevented replacement staff from reaching the hospital? Should reactions to first offenses be limited, especially for those in training? Focused (re)education may suffice for the cognitive components of error. However, it is not really known whether reviews of professional standards and obligations can effectively rehabilitate those who seem morally indifferent or disinclined to take their duties as professionals seriously. Finally, relatively little attention has been paid to the affective consequences of mistakes for those who make them. As Joel Frader notes in "Mistakes in Medicine: Personal and Moral Responses," routine reactions to error should include counseling and support for those involved, especially regarding the guilt and fear common after errors that have produced or nearly resulted in serious harm.

The sometimes-conflicting contemporary Western tendencies to blame and find fault, to seek revenge or at least receive compensation for tragedy, and to excuse the young, naïve, or inexperienced also clash with the move toward seeing medical error as a matter of system faults. If complicated processes inevitably include both faulty O-rings and distracted practitioners, those who feel wronged cannot easily point fingers and extract their pound of flesh. Moreover, systems thinking may itself have negative unintended consequences. First, further diffusion of responsibility, beyond teams and identifiable persons, may decrease incentives to ferret out even recurring, systematic causes of error. If no one can be identified with whom the buck stops, perhaps everyone will stop caring about reducing the incidence and seriousness of medical error. Second, turning away from notions of individual moral responsibility may allow (even more) morally bad actors to proceed through professional educational and monitoring systems and inflict their damage on patients, family members, colleagues, subordinates, and institutions.

Possible Solutions

The above considerations do not yield obvious or easy answers to the problem of medical mistakes. Regardless of the faults of the HMPS and the IOM report, it seems clear that much medical practice, at least that occurring in the modern hospital, does involve complex technological systems with multiple occasions and places for things to go wrong. Better attention to the components of throughput may indeed identify opportunities to implement technical fixes and safety checks. For example, computerized order entry of medications certainly can eliminate difficulties associated with illegible handwriting. Given the right software, such systems can markedly reduce errors in dosing, misspellings of drug names, and so on. Barcodes on medication packets and patient identification bands may lower the incidence of administering drugs to the wrong patient. Routines of repeating oral orders back to the doctor—similar to what happens between pilots and copilots—may clarify confusion-prone exchanges and prevent some mishaps. Such interventions will likely bring their own problems. Almost certainly, typing orders into a computer increases the amount of time physicians must spend at that task. The additional time required for oral repetition, and the potential for (inappropriate) inferences of a lack of respect, may create inefficiencies and raise tensions on the wards and in the operating room.
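To make the barcode cross-check concrete, the following minimal sketch (in Python, using hypothetical patient and drug identifiers; it does not represent any actual hospital system or vendor interface) illustrates the kind of automated match between a scanned wristband, a scanned medication packet, and an active order that such systems perform before a drug is administered.

```python
# Illustrative sketch only: a simplified barcode medication-administration check.
# The identifiers and record structure are hypothetical, not a real system's API.
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class MedicationOrder:
    patient_id: str   # identifier encoded on the patient's wristband
    drug_code: str    # code encoded on the medication packet
    dose_mg: float    # ordered dose, kept for completeness of the record

def verify_administration(scanned_patient_id: str,
                          scanned_drug_code: str,
                          active_orders: List[MedicationOrder]) -> bool:
    """Return True only if the scanned wristband/packet pair matches an active order."""
    return any(order.patient_id == scanned_patient_id and
               order.drug_code == scanned_drug_code
               for order in active_orders)

# Example: the check blocks a wrong-patient administration before the drug is given.
orders = [MedicationOrder(patient_id="PT-1001", drug_code="NDC-0409-1234", dose_mg=2.5)]
print(verify_administration("PT-1001", "NDC-0409-1234", orders))  # True: match found
print(verify_administration("PT-2002", "NDC-0409-1234", orders))  # False: wrong patient
```

The point of such a check is not sophistication but placement: it interposes a forced comparison at the moment of administration, which is exactly where wrong-patient errors otherwise slip through.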

There is a clear need to continue and strengthen efforts to inculcate a sense of individual moral responsibility in healthcare professionals. Indeed, the idea that providers owe specific duties to patients (or clients) that transcend selfish goals constitutes the essence of what it means to become or remain a professional. While the U.S. healthcare education system has, more or less, depending on local culture and resources, institutionalized ethics teaching at the student level, further medical training in residencies and fellowships often lacks organized approaches and/or appropriately trained or experienced ethics educators, not to mention adequate role models. Of course, ethics education assumes trainees can and do learn ethical behavior at that relatively late stage of personal development. Perhaps healthcare education and training need better systems for identifying and screening out individuals predictably inclined to behave in undesirable ways. (Such an effort would, in turn, assume valid and reliable methods to weed out disfavored characteristics.)

Current systems for professional regulation are notoriously ineffective in recognizing and intervening when doctors misbehave, even when they do so repeatedly. In hospitals, organized medical staff systems for detecting and intervening in the face of misconduct and impairment confront legal fears (of libel and restraint-of-trade lawsuits) and patterned social inhibition (old-boy networks and other manifestations of group solidarity, as in the "there but for the grace of God go I" attitude). State regulatory bodies have unclear standards, inadequate resources, and a similar solidarity-based reluctance to act. Professional associations often lack mechanisms for investigating, judging, and acting on claims of misconduct or malfeasance. Without the devotion of considerable resources and a real dedication to making mechanisms for professional social control actually work, healthcare providers should continue to expect malpractice lawyers to thrive.

Conclusions

At the end of the twentieth century, mistakes in medicine began to receive attention appropriate to their contribution to morbidity and mortality in the healthcare system. Public policy began to concentrate on recurring, systematic underlying causes of medical error and to borrow concepts from cognitive science, social psychology, and organizational behavior to address the pervasive problem of medical mistakes. Whether this approach to improving patient safety will reduce the incidence or seriousness of medical error remains to be seen, especially because industrial thinking has not paid close attention to the actual and powerful culture of medicine. Also unclear is the effect that an impersonal line of attack on the problem will have on professional morality. Too great an emphasis on technical fixes may erode the sense of personal ethical obligation to patients that society wants its healthcare professionals to hold dear.

Joel E. Frader

Charles L. Bosk

SEE ALSO: Competence; Harm; Malpractice, Medical; Medicine, Profession of; Responsibility

BIBLIOGRAPHY

Bok, Sissela. 1979. Lying: Moral Choice in Public and Private Life. New York: Vintage Books.

Bosk, Charles. 1979. Forgive and Remember: Managing Medical Failure. Chicago: University of Chicago Press.

Brennan, Troyen. 2000. "The Institute of Medicine Report—Could it do Harm?" New England Journal of Medicine 342(15): 1123–1125.

Brennan, Troyen; Leape, Lucien; Laird, N. M.; et al. 1991. "Incidence of Adverse Events and Negligence in Hospitalized Patients: Results of the Harvard Medical Practice Study I." New England Journal of Medicine 324: 370–376.

Caplan, R. A.; Posner, K. L.; and Cheney, F. W. 1991. "Effects of Outcomes on Physician Judgments of Appropriateness of Care." Journal of the American Medical Association 265: 1957–1960.

Cook, Richard, and Woods, David. 1994. "Operating at the Sharp End: The Complexity of Human Error." In Human Error in Medicine, ed. Marilyn Bogner. Hillsdale, NJ: Lawrence Erlbaum.

Davis, Fred. 1960. "Uncertainty in Medical Diagnosis: Clinical and Functional." American Journal of Sociology 66: 259–267.

Durkheim, Émile. 1933. The Division of Labor in Society, tr. George Simpson. Glencoe, IL: The Free Press.

Fox, Renée C. 1957. "Training for Uncertainty." In The Student Physician: Introductory Studies in the Sociology of Medical Education, ed. Robert K. Merton, George Reader, and Patricia L. Kendall. Cambridge, MA: Harvard University Press.

Frader, Joel. 2000. "Mistakes in Medicine: Personal and Moral Responses." In Margin of Error: The Ethics of Mistakes in the Practice of Medicine, ed. Susan B. Rubin and Laurie Zoloth. Hagerstown, MD: University Publishing Group.

Gawande, Atul. 2002. Complications: A Surgeon's Notes on an Imperfect Science. New York: Metropolitan Books.

Hughes, Everett C. 1951. "Mistakes at Work." Canadian Journal of Economics and Political Science 17: 320–327.

Kohn, Linda T.; Corrigan, Janet M.; and Donaldson, Molla S., eds. 2000. To Err Is Human: Building a Safer Health System. Washington, D.C.: National Academy Press.

Leape, Lucien; Brennan, Troyen; Laird, N. M.; et al. 1991. "The Nature of Adverse Events in Hospitalized Patients: Results of the Harvard Medical Practice Study II." New England Journal of Medicine 324: 377–384.

McDonald, C.J.; Weiner, M.; and Hui, S.L. 2000. "Deaths Due to Errors Are Exaggerated in Institute of Medicine Report." Journal of the American Medical Association 284: 93–95.

Paget, Marianne. 1988. The Unity of Mistakes: A Phenomenological Account. Philadelphia: Temple University Press.

Parsons, Talcott. 1951. "Social Structure and Dynamic Process: The Case of Modern Medical Practice." In The Social System. New York: The Free Press.

Perrow, Charles. 1984. Normal Accidents: Living with High-Risk Technologies. New York: Basic Books.

Reason, James. 1990. Human Error. New York: Cambridge University Press.

Roberts, Karlene. 1990. "Some Characteristics of One Type of High Reliability Organization." Organization Science 1: 160–176.

Rochlin, Gene; La Porte, Todd; and Roberts, Karlene. 1987. "The Self-Designing High Reliability Organization: Aircraft Carrier Flight Operations at Sea." Naval War College Review (Autumn): 76–90.

Thomas, E. J.; Studdert, D. M.; Burstin, H. R.; et al. 2000. "Incidence and Types of Adverse Events and Negligent Care in Utah and Colorado." Medical Care 38: 261–271.

Weick, Karl. 1987. "Organizational Culture as a Source of High Reliability." California Management Review 29: 112–127.

Weick, Karl, and Roberts, Karlene. 1993. "Collective Mind in Organizations: Heedful Interrelating on Flight Decks." Administrative Science Quarterly 38: 357–381.