Psychology and Psychiatry
Psychiatry in the United States has undergone a number of sweeping changes since the middle of the twentieth century. The settings in which psychiatrists practice, the range of diseases they seek to treat, their theoretical understandings of these diseases, and the treatments they apply are all radically different from those of their predecessors. These changes have had an impact not only on the psychiatric profession but on cultural understandings of the mind as well, altering how people make sense not only of mental illness but of their everyday feelings and behaviors.
What is remarkable is not that such changes occurred, for radical transformations in medical practice and understandings have come to be expected, but that they occurred in the way they did and for the reasons they did. Despite enormous changes, researchers have not identified the root cause of a single psychiatric disease or developed a single definitive cure. This is not to say that understandings and treatment of psychiatric illness have not improved, but simply to say that psychiatry's revolutions cannot be traced to the kinds of scientific breakthroughs that one might imagine, but rather to the interaction of a number of historical developments within psychiatry, medicine, and American culture as a whole.
This article traces the history of psychiatric theory, therapeutics, and clinical science since the mid-1900s, exploring the ways in which their interaction has shaped the course of psychiatry. This history covers three major transformations in psychiatry: an about-face in its theoretical orientation, characterized by the postwar rise and fall of psychoanalysis and the subsequent rise of biopsychiatry; the redefinition of the practice of psychiatry that followed the discovery of psychotropic drugs; and the changes in the clinical science of medicine as a whole that reinforced psychiatry's biological shift.
Psychiatric Diagnosis: From Psychosis to the "Psychopathology of Everyday Life"
The years following World War II were a time of unprecedented growth in the scope of psychiatry in the United States. In a transformation that reflected an increase in outpatient psychiatry rather than a decrease in state hospital treatment, the percentage of psychiatrists working in outpatient settings, a slim minority before the war, grew to more than half by 1947 and to an astounding 83 percent by 1957. Accompanying this shift was a similarly dramatic expansion in the kinds of ills that led patients to seek psychiatric care, inside or outside of the state hospital system. Psychiatrists began caring for an entirely new type of patient, one who suffered from "psychoneurotic" ills instead of severe mental illness.
Throughout the twentieth century, psychiatrists divided psychiatric illness into two main classes: "organic" and "functional." They classified as organic those illnesses with an obvious cause (e.g., intoxication) or brain lesion (e.g., dementia), whereas the functional disorders, those most commonly associated with the practice of psychiatry, had no ascertainable biological cause. Toward the end of the twentieth century, psychiatrists increasingly criticized this division between functional and organic, arguing that all psychiatric illness is, at its root, biological. In 1994 the distinction was dropped entirely from the fourth edition of the American Psychiatric Association's (APA) Diagnostic and Statistical Manual (DSM). However, disorders with known organic causes now tend to fall under the domain of neurology, while psychiatry continues to treat primarily those disorders in the traditionally functional category. These disorders are subdivided into two categories—psychotic disorders and nonpsychotic disorders—that are then further divided into specific diagnoses.
In 1952 the APA published the first edition of the DSM (DSM-I), replacing the collection of diagnoses endorsed by the APA in 1933. DSM-I was heavily influenced by psychoanalytic theory and by Adolf Meyer's emphasis on individual failures of adaptation to biological or psychosocial stresses as the cause of psychiatric illness. The diagnoses enumerated in DSM-I indicate a major enlargement in the ways in which nonpsychotic illness could be experienced and named.
This change reflected not a simple recategorization of existing patients, but rather a redrawing of the line between "dis-eased" and "normal" distress that resulted in the creation of entirely new patients. The years following the war witnessed a staggering increase in the number of patients seeking psychiatric care for their troubles in everyday living, either by voluntarily admitting themselves to state psychiatric hospitals or by hiring the services of a psychotherapist. This explosion of psychiatric concerns and practice is owed in large part to two related phenomena: the psychiatric profession's reaction to World War II, and the increasing dominance of psychoanalytic theory and practice.
World War II was the single most important factor in propelling psychodynamic psychiatry to the forefront of American psychiatry. Most fundamentally, the war reinforced the belief "that environmental stress contributed to mental maladjustment and that purposeful human interventions could alter psychological outcomes" (Grob, p. 427). Of the 18 million men screened for induction, nearly 2 million were deemed unfit for military service because of severe emotional difficulties. Despite flaws in the screening process (especially its cursory nature and the broad criteria used for rejection), this huge rejection rate highlighted the ubiquity of psychiatric disorder in the community.
The war provided a means for addressing this new concern by greatly increasing the number of physicians with experience treating psychiatric disorders. Between 1941 and 1945, the number of Army Medical Corps physicians working in psychiatry increased from 35 to 2,400. Psychoanalytic theory and therapy figured centrally in much of the training they received, and many of these physicians went on to practice psychiatry as well as psychoanalysis after the war. Moreover, the successful treatment of wartime neuropsychiatric casualties with psychosocial interventions strengthened psychiatrists' conviction in the efficacy of psychotherapy based on psychodynamic principles.
The proportion of psychiatrists following psychodynamic tenets rose to a third by the late 1950s, and to half by the early 1970s. Psychoanalysis and psychodynamics dominated the curriculum of medical schools and residency programs, as well as the orientation of many academic departments, through the mid-1960s. The 1968 publication of the second edition of the DSM (DSM-II) reflected this. Like DSM-I, DSM-II presented a psychosocial view of psychiatric illness. Psychiatric illnesses were reactions to stresses of everyday living, not discrete disease entities that could easily be demarcated from one another or even from normal behavior or experience. From this perspective, naming a disease was of much less consequence than understanding the underlying psychic conflicts and reactions that gave rise to symptoms.
Diagnoses as disease entities.
Diagnosis, relegated to the periphery of psychiatric concerns from the 1950s through 1970s, has since taken center stage. DSM-III (1980) and DSM-IV (1994), as well as DSM-V (planned for release in 2011), reflect American psychiatry's embrace of a biomedical model of disease, complete with discrete illness categories that are distinct both from one another and from that which qualifies as "normal." Unlike DSM-I and DSM-II, the subsequent revisions have been major undertakings of central scientific importance to the field. Whereas DSM-II had consisted of a paltry 119 pages, DSM-III was 494 pages long and listed 265 distinct subdisorders—a number that would grow to nearly 400 with the publication of DSM-IV. Many disorders came to exist for the very first time when they made their appearance in print, the end product of six years' effort and of endless debate and compromise within committees assigned to each major disease category.
Unlike previous editions, DSM-III was intended to actively guide psychiatrists in assessing and diagnosing patients. The need for such a guide arose largely from psychiatry's place within larger contexts. The whole of medicine had experienced a cultural shift, one that was characterized by reliance on standardized knowledge rather than clinical expertise; statistical knowledge based on groups rather than individuals; and an increasingly reductionistic view of disease in which biology was paramount. This shift occurred within psychiatry as well: The availability of pharmacological treatments for psychiatric disorders, combined with a desire to remain part of an increasingly scientifically rigorous medical realm, led psychiatry to trade psychoanalytic theory for a new biopsychiatry that largely rejected a disease model rooted in individual biographies, psychological conflict, and psychosocial stressors. Shifting fiscal realities also contributed to psychiatry's need for greater diagnostic certainty and accountability for outcomes, as the percentage of outpatient psychiatric care paid by third-party payers (either private or public) in the United States rose from almost zero in the 1950s to nearly a quarter in the 1960s, and continued to rise steadily in the 1970s. The antipsychiatry movement of the 1960s, which critically viewed psychiatry's diagnostic categories as labels constructed by society in order to silence social deviance, created additional pressure on the discipline to define its targets in biological terms.
The diagnostic manual that grew out of this transition from psychodynamics to biopsychiatry was explicitly "atheoretical" with regard to etiology, but most of the diagnostic categories enumerated in DSM-III were underpinned by an implicit assumption that biology, not psychological conflict, was their primary cause. Symbolic of this was the excision of the word "reaction" from many diagnoses: thus a patient who would have been diagnosed with a "psychotic depressive reaction" prior to 1980 was now diagnosed with "major depression with psychotic features." Each diagnosis was thought of not only as stemming from a unique biological cause, but also as being made up of a unique (and determinate) set of symptoms—a marked departure from the psychodynamic view of disease, in which a given set of symptoms, depending as they did on the individual's life history and beliefs, could result from any number of underlying conflicts. DSM-III and DSM-IV have been heavily criticized for their approach to diagnosis, in which the presence of a minimum number of symptoms from a list determines the presence or absence of the disorder in question. However, this approach is perhaps the best that can be expected from a field in which symptoms are generally thought to be direct reflections of an underlying disease of presumed, but as yet unknown, biological cause. As a means by which to increase diagnostic consensus, facilitate research into the efficacy of disease-specific cures, and justify insurance reimbursement, DSM-III and DSM-IV have been largely successful, and the DSM remains the dominant system of psychiatric classification in the United States and most other countries.
While psychoanalytic treatments have largely fallen from grace, psychoanalytic theory and language continue to influence American psychiatry and culture. Since the late twentieth century, American psychiatry has traded this language for the language of biology and the brain, but the expanded definition of psychiatric illness—one that includes problems formerly seen as inevitable parts of life—remains. Intriguingly, while these problems were originally recast as psychiatric illnesses by virtue of their psychosocial etiology, since the late twentieth century their status as disorders has made them readily subject to purely biological interpretations and cures.
Therapeutics: From Behavioral Control to Biological Disease
The nature of psychiatric care has changed immensely since the early twentieth century, shaped not only by prevailing psychiatric theory but also—and often more importantly—by practical realities such as setting, needs, and resources. State hospitalization and outpatient psychotherapy have both largely been replaced by a proliferation of psychotropic drugs, with considerable implications for how society views mental illness, as well as how people make sense of more everyday aspects of human feelings and behaviors.
Somatic therapies and behavioral control.
In the early twentieth century, American psychiatry was almost exclusively institutionally based. Nineteenth-century asylum founders had created these institutions as a means of providing a psychologically therapeutic environment in combination with physical treatment regimens, but by the turn of the century their optimism had worn off, replaced by a biological fatalism regarding the patients' chances of improving. By the early twentieth century, state mental hospitals—a designation that had replaced that of asylum—were vastly overcrowded institutions for the care of severely ill patients, many of whom became permanent residents.
In such a setting, where a handful of psychiatrists often cared for thousands of patients, patients were categorized not by diagnosis but by behavior and prognosis, and were housed in wards with labels such as "acutely excited," "chronic quiet," "chronically disturbed," and "convalescing." Disordered behavior was the primary target of psychiatric interventions, which consisted almost exclusively of somatic therapies: hydrotherapy (e.g., continuous baths or wet-sheet body wraps), insulin-induced comas, electroconvulsive therapy, and lobotomy. These treatments were believed to be therapeutic by virtue of their success in subduing out-of-control (diseased) behavior; and behavior, not diagnosis, determined the need for a particular somatic cure.
As the overcrowding of state hospitals suggests, there was little place for psychotherapy in institutional psychiatry. It was not until the rise of psychoanalysis and outpatient psychiatry that psychotherapy became an important intervention within the field. Though psychoanalysis was largely a treatment sought by well-to-do and well-educated individuals suffering from everyday anxiety, unhappiness, or boredom, in the 1950s analysts began treating not only those suffering from neurotic ills but also traditional psychiatric patients with severe psychotic disorders such as schizophrenia. These efforts led to significant controversy in the ensuing decades, contributing to psychiatry's abandonment of psychotherapy in favor of more biological approaches. For the most part, however, the demise of psychotherapy within psychiatry can be attributed to two causes: the advent of psychotropic drugs in the 1950s and competition from the growing fields of psychology and social work in the 1960s and 1970s. By the 1990s psychiatry had largely ceded matters of the mind to psychology, content to concern itself with matters of the brain.
Psychopharmacology and biological disease.
Though psychiatrists have long had at their disposal a number of drugs capable of sedating patients (for example bromides, barbiturates, hyoscine, and chloral hydrate), these drugs were never seen as therapeutic but rather as "chemical straitjackets." The age of psychopharmacology did not begin until the 1950s with the discovery of chlorpromazine, the first of what have come to be referred to as the antipsychotic drugs. Other classes of drugs followed, inducing a veritable therapeutic revolution in psychiatry. Since the last third of the twentieth century, the major classes of psychotropic drugs have included antipsychotics, antidepressants, anxiolytics, mood stabilizers, and a miscellaneous assortment of other medications, and their use has largely displaced psychotherapeutic interventions as the mainstay of psychiatric practice.
Unlike the somatic therapies and psychotherapy, psychotropic drugs carry with them an implied diagnostic specificity: separate classes of drugs for psychosis, depression, anxiety, and bipolar disorder. Much as psychoanalytic theory and treatment went hand-in-hand (a psychodynamic cure for a psychodynamic ailment), the implied specificity of these drugs fits well with biopsychiatric thinking: a specific neurochemical cure for each neurochemical trouble. As with psychoanalysis, the language of biopsychiatry has heavily pervaded American culture, carrying with it both a scientific logic that links psychiatry to the rest of medicine and a compelling description of human experience that allows people to see themselves readily in biological terms, much as people in the twentieth century readily saw their present troubles as the product of their troubled personal pasts.
From "major tranquilizers" to antipsychotic drugs.
The early stages of the psychopharmacologic revolution began in the 1930s and 1940s, when researchers began modifying phenothiazine compounds in an effort to develop synthetic antihistamines. Henri Laborit, a French military surgeon, was interested in these drugs for their analgesic, sedative, and hypothermic properties, believing that they might be of benefit in preventing shock associated with anesthesia. In 1949 he noted that promethazine—a phenothiazine derivative—produced a "euphoric quietude" in patients, prompting chemist Paul Charpentier, of the pharmaceutical company Rhône-Poulenc, to search for phenothiazine derivatives with even greater effects on the central nervous system. The result was chlorpromazine, a compound that would eventually become known as the first antipsychotic drug.
Initially Rhône-Poulenc believed that chlorpromazine might have a variety of applications, for conditions ranging from nausea to itching, and they named it Largactil to emphasize its many uses. By 1951, however, physicians began recognizing its ability to calm agitated patients without overly sedating them. Smith, Kline & French bought the North American rights in 1952 and received U.S. Food and Drug Administration (FDA) approval to market it under the trade name Thorazine in May 1954. By 1956, 4 million patients in the United States had taken chlorpromazine—primarily for psychiatric applications—yielding $75 million in profits in 1955 alone.
Psychiatrists began referring to these drugs as antipsychotics in the mid-1960s, but until that time they were generically referred to as the "major tranquilizers," and psychiatrists considered them useful for most types of mental disorder. Smith, Kline & French recognized that state hospitals, housing more than half a million captive potential consumers, represented an enormous market. The introduction of chlorpromazine did away with the use of lobotomy almost overnight: The drug was easy to administer; rapidly and therapeutically eliminated recalcitrant, hostile, and violent behavior; and ostensibly produced relatively minor side effects. Perhaps more importantly for the course that psychiatry has taken since, physicians also prescribed it for the nonpsychotic patients who had so recently made their way into state hospitals and outpatient care, thus cementing the medical status of these new diagnoses. It was not until the mid-1960s—by which time the success of Thorazine had led to a proliferation of similar drugs—that psychiatrists winnowed down the application of these drugs primarily to the treatment of psychotic disorders, a fact reflected in the increasing use of the term antipsychotic. This transformation from tranquilizer to antipsychotic, implying the discovery of a biological cure for a specific psychiatric disease, reinforced the conviction that schizophrenia (and, by extension, most psychiatric illness) was at its root a biological disorder of the brain.
The discovery of a drug that seemed effective in treating a specific disorder—or at least in controlling its particular set of symptoms—provided researchers with a major opportunity to explore the workings of the disordered brain. Basic science research into the biological action of antipsychotic drugs laid the foundation for the remarkable progress that has taken place in the neurosciences since the mid-twentieth century. In the late 1950s, Arvid Carlsson discovered the neurotransmitter status of dopamine and demonstrated that antipsychotic drugs block dopamine receptors in the brain, research for which he was awarded the Nobel Prize in 2000. His discoveries also led researchers and psychiatrists to formulate the "dopamine hypothesis" of schizophrenia: since antipsychotic drugs work by blocking dopamine, the cause of schizophrenia must be an excess of dopamine. This bit of logic, compellingly simple and yet disturbingly circular, has been repeated with other classes of psychiatric drugs and the disorders that they seem to ameliorate. Research into the action of apparently effective drugs has been the source of many accepted models of psychiatric illness, largely because scientists lacked better ways of making sense of what goes on inside the living brain—an unfortunate reality that many hope to remedy thanks to the advent of high-quality brain imaging.
The basic nature of antipsychotic drugs changed little in the decades that followed, and the initial optimism that the drugs possessed antischizophrenic properties was increasingly tempered as the drugs' limitations became more apparent. By the late 1960s and 1970s, psychiatrists also began to notice that the drugs produced a number of untoward side effects, foremost among them tardive dyskinesia, a late-appearing, difficult-to-reverse disorder characterized by involuntary movements of the tongue, jaw, limbs, and/or trunk. Psychiatrists continued to prescribe the drugs widely, however, largely because they were simply the best available treatment for a terrible and incurable disease.
In the 1980s, a new era in antipsychotic drugs began, coupled with new hopes for better outcomes. Clozapine, a drug that was actually synthesized in the late 1950s and used briefly until clinicians discovered that it could cause a fatal blood disease called agranulocytosis, was "rediscovered" in the mid-1980s. A number of large, multicenter studies found clozapine to be highly effective in treating refractory patients—that is, patients whose condition responded poorly to other antipsychotic drugs. Clozapine was approved for use in 1989 under the trade name Clozaril and reintroduced alongside a system for carefully monitoring patients for any signs of agranulocytosis. Clozaril became the first of a number of "atypical" antipsychotic drugs, which cause markedly fewer motor side effects compared to the older drugs but, as is increasingly evident, produce a wide range of other problems, most notably severe weight gain and insulin-resistant diabetes. Because these new drugs—six of which are on the market in the United States as of 2004—do much more than simply block dopamine receptors, their success has led scientists to rethink the dopamine hypothesis of antipsychotic drug action and of schizophrenia, but also has reinforced broader claims for the biological basis of psychiatric illness and hopes for more successful cures to follow.
Psychopharmacology and the psychopathology of everyday life.
Antipsychotic drugs are in many ways the most important class of psychotropic drugs, given their relative success in managing the most striking and debilitating psychiatric symptoms as well as their role in the early history of psychopharmacology. However, they make up a relatively small share of the prescriptions written for psychiatric indications (though they are among the most profitable of all drugs, psychotropic or otherwise). The bulk of the psychotropic drug market, oddly enough, is devoted to the kinds of diagnoses that might never have made it onto the psychiatric landscape if not for the expansive territory staked out by psychodynamic psychiatrists in the mid-twentieth century.
Thanks to the widespread diagnosis and medical treatment of disorders like depression, anxiety, and attention deficit disorder, biopsychiatry has become as vital a part of American culture in the early twenty-first century as psychoanalysis was a half-century earlier. As with antipsychotic drugs, the drugs used to treat these disorders were discovered to work only by accident, and the biological explanations for the disorders were gleaned from the actions of the drugs that seemed to treat them. When Prozac, the first of the selective serotonin reuptake inhibitors (SSRIs), was introduced in 1988, it was not long before the SSRIs (a class of drugs that also includes Paxil, Zoloft, and Celexa) became the leading treatment for depression. Less than two decades later—thanks in no small part to direct-to-consumer drug advertising—the belief that depression is caused by a serotonin deficiency had become an established bit of cultural knowledge.
Polypharmacy and off-label use.
Intriguingly, in spite of the apparent biological specificity of the drugs on the market, psychiatric practice does not adhere neatly to the categories for which drugs are named (and approved by the FDA). Psychiatrists often prescribe multiple drugs for a single patient with a single diagnosis, and this polypharmacy often combines drugs from different classes. It is not uncommon for a patient with a diagnosis of schizophrenia or bipolar disorder to be prescribed one or more antipsychotic drugs, a mood stabilizer, an antidepressant, and an anxiolytic, a combination intended—in some unarticulated and scientifically unproven way—to improve the management of the disorder. Off-label prescribing—that is, the prescription of a drug for a condition other than that for which it is approved—is increasingly common. For example, psychiatrists routinely prescribe atypical antipsychotic drugs for patients with nonpsychotic diagnoses, including children diagnosed with conduct disorders—a practice reminiscent of the widespread use of antipsychotic drugs in the state hospitals of the 1950s. It remains to be seen whether these practices will be validated by clinical research and, if so, what sort of biological explanations will be used to explain their effectiveness.
Since the mid-twentieth century, there has been a massive transformation in the understanding of how to treat psychiatric disease and, therefore, in understandings of its causes. For psychiatrists who cared for severely ill patients, antipsychotic drugs initially represented a different, albeit better and more efficient, means of treating behavioral symptoms, while for other psychiatrists the new drugs were merely adjuncts to the more important therapeutic task of psychological understanding and interpretation. As biological theory became more compelling, pharmaceutical marketing more effective, and cost-effectiveness a more essential determinant of therapeutic practice, drug therapy increasingly became the primary, and often only, means of psychiatric intervention. In a dramatic reversal of fortune, psychotherapy in the early twenty-first century is at best an adjunct to pharmacotherapy and at worst a wasteful use of scarce health care resources.
Science: From Clinical Expertise to Randomized Controlled Trials
Since the mid-twentieth century, American psychiatry has been characterized by increasing efforts to appear both medical and scientific, in terms of the reliability of its diagnostic criteria, the biological specificity of its treatments, and the methods by which these treatments are legitimated. Such efforts suggest the image of a laggard field attempting to play catch-up with its more scientific medical colleagues, but such a characterization ignores major transformations in the science of medicine as a whole over this time period. These transformations, most notable among them the development of the randomized controlled trial (RCT), coincided both with psychiatry's brief psychoanalytic deviation from a biological approach to mental illness and with the advent of psychotropic drugs. Together these developments created the conditions and need for many of the changes that have characterized subsequent psychiatric history.
Prior to the mid-twentieth century, physicians rarely resorted to experimental methods as a means of proving whether or not a treatment worked. Instead, the determinant of legitimate therapeutic knowledge was expert clinical opinion, exercised through historical case controls, open trials, and clinical judgment. These means of evaluating treatments have since been replaced by the RCT.
The basic elements of the RCT—blinding, controls, randomization, and placebos—each have their separate histories. Research psychologists have actively employed experimental methods since the mid-nineteenth century, using randomization and controls much earlier than did the clinical sciences, psychiatry included. From the point of view of medical science, however, the formal birth of the RCT was in 1946, when these features were brought together in the streptomycin trials of the British Medical Research Council.
The design of the RCT is intended to ensure that perceived treatment outcomes are in fact due to the treatment under investigation, rather than to external factors or bias. Thus a basic RCT consists of two groups, an experimental group (given the treatment under investigation) and a control group (given another treatment or a placebo). Patients are randomly assigned to these groups to prevent their individual characteristics from biasing the results, and all participants—researchers, clinicians, and patients—are blinded as to which group a given patient is in, so that they do not bias the results of the experiment.
The RCT and psychiatry.
Like all scientific methods, the RCT presupposes certain facts about the nature of the world, and thus circumscribes the questions that can be asked and the answers that can be extracted from nature. The RCT views treatment outcomes as data that are independent from the subjective opinions of both doctors and patients. Thus the RCT has arguably supported the turn toward a biological view of psychiatric illness and cure, including the development of discrete diagnostic categories and diagnostically specific treatments.
The influence of the RCT on psychiatry has been practical as well as philosophical. As much as any other medical professionals, psychiatrists wanted better methods of determining whether the treatments they employed actually worked. Chlorpromazine was one of the first psychiatric interventions to undergo RCT evaluation, with successful outcomes. However, subsequent evaluations of psychiatry's older somatic treatments ended in dismal failure, no doubt reinforcing psychiatrists' enthusiasm for the new pharmaceutical cures. Many forms of psychotherapy have fared poorly as well when subjected to RCTs, though many psychiatrists have been skeptical of these outcomes, given the ill fit between the reductionistic design of the RCT and the more context- and relationship-dependent nature of psychotherapeutic cures. In spite of these reservations, by the late 1960s and early 1970s most psychiatrists and clinical scientists had accepted the RCT as the best means of judging whether a treatment works.
Critiques of the RCT.
Since the mid-1990s, the RCT has come under increasing scrutiny. A growing number of researchers have argued that the method favors biological treatments over psychological ones, and that it cannot assess the role that psychosocial factors (for instance, contexts and doctor-patient relationships) and individual factors (for example, the meanings a patient gives to a particular remedy) play in shaping how well the intervention works. Others contend that the clinical experiment is so unlike the unpredictable world of actual clinical practice that it may not provide a reliable gauge of whether a treatment will work in actual practice. Some critics have also challenged whether the RCT actually succeeds in eliminating bias. A number of literature surveys have found that the greatest predictor of an RCT's outcome is who funded it. Beginning in the late 1990s, a series of editorials and articles in major medical journals such as the Journal of the American Medical Association and the New England Journal of Medicine have wrestled with the problem of how financial interests shape, direct, and, at times, subvert the science of clinical evaluation, lamenting that the RCT, no matter how well it is executed, is vulnerable to the very biases it was designed to expunge.
Since the mid-twentieth century, psychiatry has undergone revolutionary changes in how psychiatrists diagnose patients, how they treat them, and how they evaluate whether a treatment works. These changes have brought with them major advances, especially in the neurosciences. But this history also suggests that psychiatry has lost something as it has narrowed its focus mainly to the brain and psychotropic drugs. Though psychiatrists are now trained to expertly manipulate a patient's drug regimen, they have become increasingly less able to situate a patient's suffering within a psychological and social context, and the doctor-patient interaction is often reduced to a querying and reporting of diagnostically sanctioned symptoms. Psychiatry, long charged with caring for those suffering from largely chronic conditions, has become focused on the diagnosis and cure of disease. This focus may someday bear therapeutic fruit, but until true cures are actually forthcoming it is important that the role of care not be lost. Like many of the shifts that psychiatry has undergone, these concerns are not unique to psychiatry, but are part of larger changes within medicine and the culture in which it is situated.
See also Consciousness; Medicine; Mind; Psychoanalysis.
BIBLIOGRAPHY
American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders. 1st ed. Washington, D.C.: American Psychiatric Association, 1952.
——. Diagnostic and Statistical Manual of Mental Disorders. 3rd ed. Washington, D.C.: American Psychiatric Association, 1980.
Braslow, Joel. Mental Ills and Bodily Cures: Psychiatric Treatment in the First Half of the Twentieth Century. Berkeley: University of California Press, 1997.
Carlsson, Arvid. "Does Dopamine Have a Role in Schizophrenia?" Biological Psychiatry 13 (1978): 3–21.
Gelman, Sheldon. Medicating Schizophrenia: A History. New Brunswick, N.J.: Rutgers University Press, 1999.
Grob, Gerald N. "Origins of DSM-I: A Study in Appearance and Reality." American Journal of Psychiatry 148 (1991): 421–431.
Healy, David. The Antidepressant Era. Cambridge, Mass.: Harvard University Press, 1997.
——. The Creation of Psychopharmacology. Cambridge, Mass.: Harvard University Press, 2002.
Kutchins, Herb, and Stuart A. Kirk. Making Us Crazy: DSM: The Psychiatric Bible and the Creation of Mental Disorders. New York: Free Press, 1997.
Le Fanu, James. The Rise and Fall of Modern Medicine. New York: Carroll and Graf, 2000.
Luhrmann, Tanya M. Of Two Minds: The Growing Disorder in American Psychiatry. New York: Knopf, 2000.
Marks, Harry. The Progress of Experiment: Science and Therapeutic Reform in the United States, 1900–1990. New York: Cambridge University Press, 1997.
Metzl, Jonathan. Prozac on the Couch: Prescribing Gender in the Era of Wonder Drugs. Durham, N.C.: Duke University Press, 2003.
Porter, Roy. Madness: A Brief History. New York: Oxford University Press, 2002.
Porter, Roy, and Mark Micale, eds. Discovering the History of Psychiatry. New York: Oxford University Press, 1994.
Shorter, Edward. A History of Psychiatry: From the Era of the Asylum to the Age of Prozac. New York: John Wiley and Sons, 1997.
Valenstein, Elliot S. Blaming the Brain: The Truth about Drugs and Mental Health. New York: Free Press, 1998.
Wilson, Mitchell. "DSM-III and the Transformation of American Psychiatry: A History." American Journal of Psychiatry 150 (1993): 399–410.
Joel T. Braslow
Sarah Linsley Starks