Biomedicine and Health: Epidemiology


Epidemiology is the study of disease as it occurs in populations. Disease in individual patients, or disease in an abstract sense, is secondary to the question of why a disease comes into a community and how people in the community are affected, or not affected, by it.

In 1662 London mathematician and haberdasher John Graunt (1620–1674), in a government-sponsored attempt to predict major outbreaks of plague, published the first analysis of the causes of death. In the nineteenth century the science of epidemiology crystallized with the rise of the germ theory of disease. As more and better statistics became available in the twentieth century, mathematical analysis identified patterns of disease. Beginning in 1950, the idea that cigarette smoke and other environmental chemicals could cause disease gave epidemiology a new, broader importance in medicine and society.

Historical Background and Scientific Foundations

Long before there was a science called epidemiology, people in the ancient world tried to explain diseases that periodically swept through the population. Theories about disease varied from culture to culture. People were inclined to blame an individual's actions, such as breaking taboos or offending the gods. Or one could blame another agent, such as a witch. But when large numbers of people in a community suffered from the same disease, social questions arose. Often the explanation was that the whole community had committed some offense against God. What else could all the sick people have in common to explain the disease?

Folk and Natural Beliefs about Disease

Eventually the idea that diseases were natural phenomena led to general principles that explained why people became sick. The great founder of medicine, Hippocrates of Cos (c.460–c.375 BC), excluded gods, demons, or witches as causes of illness but wrote about epidemics as they appeared in nature. At one point he (or someone writing in his name) noted that when the winter winds were “northerly, with little rain, and it was cold … [u]nder such circumstances, cases of paralysis started to appear during the winter and became common, constituting an epidemic.”

Another basic idea was that of contagion. Contagion was well understood from ancient times simply because of experience. The Bible mandates ostracism for “the leper in whom the plague is,” a feared person who could by touch or proximity spread the disease. During the Peloponnesian War, a still-unidentified epidemic raged throughout the Mediterranean from 430–427 BC. The Greek historian Thucydides (460–404 BC) wrote, “Appalling too was the rapidity with which men caught the infection; dying like sheep if they attended on one another; and this was the principal cause of mortality. When they were afraid to visit one another, the sufferers died in their solitude.”

Folk and naturalistic beliefs persisted side by side for many centuries. But over time, Europeans came to know a series of epidemics that were particularly memorable beyond the usual scourges of fevers and diarrheas. The greatest was the Black Death, which spread through Europe in waves beginning in 1346–1347. The effects were spectacular. News would come that the plague had appeared nearby. People, with good reason, became fearful. Then local citizens would start showing symptoms, and so many would die that the corpses piled up, unburied. As much as one third of Europe's population perished, and society and the economy were profoundly disrupted.

One measure that communities instituted to protect themselves against the Black Death and other epidemics was quarantine, and by the eighteenth century, isolating a sick person—a traditional precautionary measure—had become an accepted public health technique.

Another epidemic that caused great concern was syphilis. It appeared suddenly in the 1490s, a virulent variety probably brought back to Europe by Columbus's sailors. It spread quickly, and people soon understood that personal contact was responsible. They knew nothing of germs, but they did believe that some sort of poison passed from person to person.

Before even the beginnings of the science of epidemiology in the seventeenth and eighteenth centuries, increasingly concentrated urban populations meant that epidemics became more serious. Six terrible plagues hit London between 1563 and 1665, and identifying the cause became urgent. Medical experts all over Europe disagreed vehemently about whether diseases were spread by contagion, which could trigger massive, economically disruptive quarantines, or by environmental miasmas (unhealthy vapors), which could be fought by prayer and by individual attempts to live healthfully.

During this time, medical authorities generally held that epidemics came from the environment, particularly the air, which many described as having “an epidemic constitution” as the seasons and weather conditions varied. This was an ancient idea, based on thinking that a poisonous “miasma” could rise from putrefying organic matter, whether in swamps or from corpses rotting on a battlefield. The nose, they believed, could warn of danger in the atmosphere.

Beginnings of the Science of Epidemiology

The science of epidemiology began by identifying diseases that depended upon special circumstances and therefore had a reasonably certain cause. Most notable were occupational diseases. In 1700, Italian physician Bernardino Ramazzini (1633–1714) published a book called Diseases of Workers in which he described specific maladies that struck people in mining and manufacturing—those, for example, who developed symptoms of lead poisoning when they worked with white lead.

The second line of thinking was begun by the self-educated Englishman John Graunt. Working with the government as it attempted to anticipate outbreaks of plague, in 1662 he published Natural and Political Observations … Made upon the Bills of Mortality, an analysis of the British Bills of Mortality, weekly recordings of burials and christenings since 1603. He tabulated who died, of what, when, and where. Over the years, this model was used, praised, and extended, although it was not immediately recognized as important to the practice of medicine. As more and more categories of disease were reported in the bills, others besides Graunt started making numerical calculations and comparisons that showed patterns—vital statistics. By the eighteenth century Graunt and his successors had established a naturalistic way of viewing patterns in human illnesses.

One of the first ways in which the knowledge of disease and death was applied came with attempts to control smallpox epidemics. As early as the seventeenth century a few Europeans started inoculating people with “matter” from smallpox victims in an attempt to cause a mild form of the dreaded disease. In the early eighteenth century, accounts were published that compared death rates in Britain and North America of those who had been inoculated and those who had not. The figures showed clearly that inoculation greatly diminished deaths from smallpox.

During the eighteenth century Enlightenment, concern with reason and verifiable reality demanded systematic thinking. Some experts began to see that they could not recommend serious social action such as quarantine until they had facts, rather than traditions and impressions, on which to base public health measures. Under these circumstances, the statistics that Graunt and others began to gather and analyze promised at least some basis for social action against epidemic disease.

At the same time, quantification was transforming late Enlightenment sciences such as chemistry. A number of striking figures in medicine and public policy also began more counting. In Paris, physician Pierre-Charles-Alexandre Louis (1787–1872) introduced what he called the “numerical method.” He compared the effectiveness of bloodletting in pneumonia and other diseases by observing the results in a number of patients. His approach to counting affected the thinking of those working with the spread of diseases.


British physician William Farr (1807–1883) was born in rural Shropshire to a very poor farming family. With the help of a neighboring patron who recognized his talent, Farr was able to get an education and become a doctor. Most importantly, he studied in Paris with Pierre-Charles-Alexandre Louis. As a struggling medical practitioner he started writing about vital statistics. By 1839 he had enough recognition to join the British General Register Office as its statistician. Over the next 40 years, his reports and other writings won him high status in the medical profession and among his colleagues in epidemiology.

Farr treated epidemiology as a science, and he sought to formulate laws comparable to the mechanistic laws of other sciences. Illnesses appeared, he believed, in regular series, a regularity that statistics could describe. He did not need the germ theory, writing in Vital Statistics: “If the latent cause of epidemics cannot be discovered … the mode in which it operates may be investigated. The laws of its action may be determined by observation, as well as the circumstances in which epidemics arise, or by which they may be controlled.”

For generations after, epidemiologists read Farr's analyses. He understood and often introduced basic concepts such as retrospective, as opposed to prospective, studies of illness; different kinds of death rates; dose-response effect; and the idea that prevalence of an illness was a product of incidence and duration. Farr did so much to establish epidemiology as a science that he became an iconic figure even in his own lifetime. Like any good scientist, he knew the limits of his science: “The death rate is a fact,” he wrote; “anything beyond this is an inference.”
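Farr's relation, that the prevalence of an illness is the product of its incidence and its duration, can be sketched numerically. The figures below are hypothetical, chosen only for illustration:

```python
# A minimal sketch of Farr's relation: at steady state, the prevalence of an
# illness is roughly its incidence multiplied by its mean duration.
# The numbers here are hypothetical, not drawn from Farr's data.

def prevalence(incidence_per_person_year: float, mean_duration_years: float) -> float:
    """Steady-state prevalence as incidence times mean duration."""
    return incidence_per_person_year * mean_duration_years

# Hypothetical chronic illness: 2 new cases per 1,000 person-years,
# each case lasting 1.5 years on average.
p = prevalence(0.002, 1.5)
print(f"Expected prevalence: {p:.4f}")  # about 3 cases per 1,000 people at any one time
```

The relation explains, for example, why a long-lasting chronic disease can be far more prevalent than an acute one of equal incidence.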

Moreover, Louis's work in medicine was contemporary with the development of probability theory among French mathematicians. These two factors set the stage for the emergence of the formal science of epidemiology, known at the time as “sanitary science,” the attempt by practical means to preserve the health of populations. And sanitarians, or public health workers, came increasingly to depend on the findings of epidemiologists to determine if an outbreak of some disease was sufficiently serious to impose quarantine measures.

Meanwhile, numerous medical practitioners who observed disease spreading made epidemiological inferences. Perhaps the most notable were American physician Oliver Wendell Holmes (1809–1894) and German-Hungarian physician Ignaz Philipp Semmelweis (1818–1865) in Vienna, who theorized independently in the 1840s that postpartum women who suffered from often-fatal “childbed fever” were probably being infected by their attending physicians, who unwittingly spread the disease from patient to patient with their hands, garments, and instruments. But such observations did not usually tie in with the work of those who began to identify themselves as epidemiologists.

For most diseases that affected many people at one time, traditional ideas of contagion and miasma lasted well into the nineteenth century. Indeed, debates continued between those who believed disease was spread by miasmas and those who espoused the idea of contagion. In 1819 a British parliamentary committee heard experts and reported their attempt at a political compromise: they held “that the Plague is a disease communicable by contact only” but that “epidemic fever” was miasmatic in origin.

Modern Epidemiology Evolves

As Europeans and Americans gathered facts that they could count, many thinkers who were concerned about sickness were also concerned about poverty. In gathering facts about ill people, these early investigators, notably Edwin Chadwick (1800–1890) and Farr in England and Louis-René Villermé in France, saw that poverty and illness were connected. They proved that poor people were not, as romantic writers suggested, healthier and happier than other people, but quite the opposite: their health reflected the squalid conditions in which they lived. This concern with the relationship between social differences and inequalities, on the one hand, and illness, on the other, remained throughout the history of epidemiology.

Epidemiology became intimately connected to a rising insurance industry, in which executives had to know what normal rates of illness and death might be. Because Germany pioneered government health insurance in the 1880s, epidemiologists used German statistics for decades to determine what might be statistically normal.

But public health authorities most of all came to use the science of epidemiology to set public policy on disease control. It was easy to show the effects when a community did not have an adequate sewage system or when epidemiologists identified diseases such as syphilis or hookworm. Public health authorities instituted mass education campaigns to get individuals to take actions to reduce the public health threat.

Before one could start counting diseases, however, one had to have diseases to count. In the first half of the nineteenth century, vague fevers and gastrointestinal disorders gave way to an increasing number of distinct syndromes that could be counted. These included plague and smallpox as well as cholera, malaria, yellow fever, measles, scarlet fever, and many others. Epidemiologically, each disease exhibited distinct patterns and characteristics.

In the nineteenth century, cholera spread across Europe in great, apparently uncontrollable waves. In 1854 physician John Snow (1813–1858) began to suspect that London's cholera epidemic was spread through the water supply. Much of the city's water came from public pumps. By patient examination of case statistics from different neighborhoods, Snow showed that a single pump, in Broad Street, whose well had been contaminated by sewage, caused cholera in the people who used its water. After Snow demonstrated this to the authorities, they shut down the pump—one of the greatest symbolic actions in the history of medicine and public health.
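The logic of Snow's neighborhood comparison can be illustrated with a simple attack-rate calculation. The counts below are hypothetical, not Snow's actual figures:

```python
# A sketch of the kind of comparison Snow made: attack rates by water source.
# The case counts and populations here are hypothetical, for illustration only.

def attack_rate(cases: int, population: int) -> float:
    """Cases per 1,000 people in the group."""
    return 1000 * cases / population

exposed = attack_rate(cases=80, population=1000)    # households using the suspect pump
unexposed = attack_rate(cases=5, population=1000)   # households using other water sources
print(f"Suspect pump: {exposed:.1f} per 1,000; other sources: {unexposed:.1f} per 1,000")
print(f"Risk ratio: {exposed / unexposed:.1f}")  # 16.0
```

A large disparity of this kind, consistent across neighborhoods, was what pointed Snow to the pump rather than to the air.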

In 1850, Snow helped found the London Epidemiological Society, which, while concerned primarily with science, also deeply affected public health. At the same time, investigators in other countries, too, were caught up in the quest to understand human and animal diseases. One of epidemiology's first contributions was demonstrating that vaccination for smallpox (introduced in 1798 as a successor to inoculation) protected people during smallpox epidemics. The society's Smallpox Committee, for example, showed that in five villages within two miles (3.2 km) of each other, the four that were entirely vaccinated escaped the disease, while in the fifth, 48 people became ill.

The major architect of classic epidemiology was British physician William Farr (1807–1883). He wrote the reports of the British Registrar General, setting a model for the science of epidemiology. He was in close contact with major figures in the new field of statistics in Europe, most of whom were interested in social statistics such as workforce, poverty, and crime. Farr, however, caught the attention of his colleagues with his collections of health data, which he believed showed the mathematical regularity of scientific laws.

By the 1890s, biology took center stage. The work of French chemist Louis Pasteur (1822–1895), British surgeon Joseph Lister (1827–1912), and a number of Germans, particularly physician Robert Koch (1843–1910), proved that bacteria and other microbes caused many human infectious diseases. By the beginning of the twentieth century, viruses had also been discovered.

The significance of the germ theory of disease for epidemiology was that, instead of vague fevers, diseases could be identified more precisely. Investigators could work on diseases identified by bacteriology rather than on groups of people identified by the clinical impressions of many different physicians.


In 1885, L.H. Taylor reported in The Medical News on “The Epidemic of Typhoid Fever at Plymouth, Penna.”:

The borough of Plymouth, situated upon the banks of the Susquehanna River, … is a mining town of some eight or nine thousand inhabitants of various nationalities… The general health of the inhabitants has in past time not been worse than that of their neighbors in surrounding cities, but about the second week of April … an epidemic of fever, of great virulence, broke out, and so sudden was the onset, that within a very few days nearly a thousand people were stricken with the dread disease. The ravages were not confined to any class of people, nor to any section of the town, but the dwellers in the mansion as well as in the hovel were alike attacked; the house upon the hillside being not more free from the scourge than that situated in the valley… That the mountain stream supplying the town with water might have become polluted by fecal matter was first suggested by Dr. R. Davis, of Wilkesbarre…

Davis obviously knew about the latest ideas concerning water-borne disease and the fact that typhoid infection is carried in fecal matter.

In a population of 8,000 persons, there were 1,004 cases and 114 deaths within a few weeks. Plymouth received its water from a mountain stream that drained an almost uninhabited watershed. Nevertheless, the probable source was a man who lived in a house only 40 feet from the stream. He had contracted typhoid in Philadelphia in December, returned home, and relapsed, and was still desperately ill on March 19. His dejecta [excrement] were thrown on the bank of the stream but, because of the freezing weather, could not have entered the water until a thaw occurred between March 25 and 31. The first case [in Plymouth] occurred on April 9. Those who used well water, although in some instances living on the same streets as those who used the municipal supply, escaped.
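The Plymouth figures reported above translate directly into the standard rates an epidemiologist would compute from them:

```python
# The reported Plymouth outbreak figures: 1,004 cases and 114 deaths in a
# population of 8,000, expressed as an attack rate and a case-fatality rate.
population, cases, deaths = 8000, 1004, 114

attack_rate = 100 * cases / population         # percentage of the population infected
case_fatality = 100 * deaths / cases           # percentage of the infected who died
print(f"Attack rate: {attack_rate:.1f}%")      # 12.6%
print(f"Case fatality: {case_fatality:.1f}%")  # 11.4%
```

An attack rate of roughly one person in eight, concentrated in a few weeks, is what made the common water supply so obvious a suspect.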

But the impact of bacteriology went further. It was now possible to investigate epidemiological factors in the laboratory, using not only microbiological methods but also animal experiments to establish exactly how diseases spread. Of particular importance were studies on the varying levels of virulence of pathological microbes, a factor that explained nonuniform patterns in the spread and effects of diseases.

By the late nineteenth century, governments and businesses in the Western world began to collect many statistics. Insurance companies began to cover illnesses, and many accumulated decades of statistics with which epidemiologists could work. Field epidemiology continued to expand as well.

Another aspect of field epidemiology around the turn of the twentieth century was the discovery of insect vectors, or carriers, of diseases. The three most impressive demonstrations of insect vectors came within a few years of one another, between 1891 and 1903: malaria, Texas cattle fever, and yellow fever. Malaria and yellow fever were particularly troubling and baffling, for experience had shown that contact with victims did not lead to infection. Indeed, the two diseases had furnished the best evidence that disease came through “miasmas” or bad air. The very word “malaria” means “bad air.”

The discovery of the mosquito's role in malaria took a number of years. British physician Patrick Manson (1844–1922), who had practiced in China, noticed that elephantiasis, caused by a wormlike parasite, coincided with the geographical range of mosquitoes. Manson theorized that the larval stage of the filarial nematode might be transmitted in a mosquito bite. He made similar inferences about malaria, but his ideas did not fully convince his colleagues until 1897, when one of them, British physician Ronald Ross (1857–1932), working in India, proved that the malaria organism could be transmitted by a mosquito bite. In a similar way, Theobald Smith (1859–1934) of the U.S. Department of Agriculture showed in 1893 that a protozoan, not a bacillus, spread Texas cattle fever through tick bites.

Yellow fever, a deadly disease, had been the subject of many theories. One was the reasonable epidemiological inference that a mosquito was responsible. In the wake of previous discoveries, a team of American public health physicians used sometimes-fatal human experiments on brave volunteers to show in 1900 that mosquitoes did indeed transmit the infection. But what was transferred? Eventually the team showed that a filterable virus (a purely inferential entity discovered only a few years earlier) was responsible. Epidemiology also disclosed the means of control: mosquito eradication.

With yellow fever, a new factor entered epidemiology: a virus, which no one could see. Much of epidemiology continued as outbreaks of one disease or another were traced to bacteria. But eventually virus diseases claimed special attention as two more epidemics appeared.

The first was polio, called infantile paralysis because it often struck children with devastating effects. The second was the great influenza pandemic of 1918–1920, a devastating disease that struck young adults particularly hard, with a mortality rate estimated from 2 to 20%.

As in earlier times, the extent and patterns of incidence of these diseases were evident, but the cause remained a mystery until virology came of age in the middle decades of the twentieth century.

Simple reporting of disease occurrence remained basic, but it took on new dimensions, particularly after World War I (1914–1918). First, chronic diseases that were apparently not infectious, such as cancer and heart disease, came under epidemiological scrutiny. Moreover, beginning just before World War II (1939–1945), antibiotics and immunizations began to diminish the incidence and threat of infectious diseases, bringing another kind of danger to the fore: chemicals and particles.

New Tools and the Ecological Model of Epidemiology

Among experts, the shock of the influenza pandemic, which began among army personnel, helped shift thinking in epidemiology in a new direction, particularly under the leadership of British bacteriologist William Whiteman Carlton Topley (1886–1944) and epidemiologist and statistician Major Greenwood (1880–1949). Shifting their focus from the infected individual alone, epidemiologists began to stress an ecological model, in which disease was part of a complex biological equilibrium. In this system, both environmental and individual factors (such as immune system resistance) were in dynamic interaction. This ecological model had immediate consequences. In the World War I antityphus campaign, for example, people were herded into delousing stations and forcibly washed to remove or kill lice that might be carrying typhus. It would have been just as effective, said one critic, simply to have people change their shirts periodically.

Investigators dealing with epidemic populations now had new mathematical tools, especially factor analysis, in which phenomena with several aspects (such as age, previous health experience, location) could be separated and their significance measured. Technical analysis now began to mark many epidemiological publications. Authors began to distinguish between a normal disease distribution and one that indicated something aberrant that might identify a causal factor. Simply making a quick inspection of large numbers of data and drawing an inference was no longer acceptable.
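The distinction described above, between a normal disease distribution and an aberrant one, can be sketched as a simple threshold test. The weekly counts and the two-standard-deviation cutoff below are illustrative assumptions, not a method from this period:

```python
# A minimal sketch of aberration detection: is an observed case count unusual
# relative to the normal variation of past counts? The data and the
# two-standard-deviation threshold are hypothetical illustrations.
from statistics import mean, stdev

past_weeks = [12, 9, 14, 11, 10, 13, 12, 8, 11, 10]  # hypothetical weekly case counts
observed = 25

threshold = mean(past_weeks) + 2 * stdev(past_weeks)
print(f"Threshold: {threshold:.1f}, observed: {observed}")
print("Aberrant" if observed > threshold else "Within normal variation")
```

The point is the one made in the text: an inference now had to rest on a measured departure from normal variation, not on a quick inspection of the raw numbers.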

In the 1950s epidemiology entered a new phase, with a rising incidence of lung cancer confirmed clearly in 1949 by a Danish study. Medical scientists knew little about any kind of cancer, and to have one variety appear suddenly and spread rapidly was baffling. This was, after all, a chronic disease, not an acute infection, and its appearance made little sense in the biological models then in use. Epidemiologists, however, began to implicate smoking.


Slowly, beginning in 1949–1950, epidemiological evidence began to suggest to some investigators that cigarette smoking might account for much of the rising incidence of lung cancer. This evidence was based on the past histories of victims; that is, it was retrospective evidence. Immediately some investigators began huge, significant prospective investigations to see whether identifiable smokers would develop lung cancer at higher rates than those who did not smoke (such as Seventh-day Adventists). Before the 1950s were over, alarming positive results came in, and epidemiology was transformed in important respects.
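The prospective design described above compares disease rates in exposed and unexposed cohorts followed forward in time. A minimal sketch, with hypothetical counts rather than figures from any actual smoking study:

```python
# A sketch of the basic prospective-cohort calculation: relative risk is the
# disease rate among the exposed divided by the rate among the unexposed.
# All counts below are hypothetical, for illustration only.

def relative_risk(cases_exposed: int, n_exposed: int,
                  cases_unexposed: int, n_unexposed: int) -> float:
    """Risk in the exposed cohort divided by risk in the unexposed cohort."""
    return (cases_exposed / n_exposed) / (cases_unexposed / n_unexposed)

# e.g. 90 lung-cancer cases among 10,000 smokers followed over time,
# versus 9 cases among 10,000 nonsmokers:
rr = relative_risk(90, 10_000, 9, 10_000)
print(f"Relative risk: {rr:.1f}")  # 10.0
```

A relative risk far above 1 in cohort after cohort was exactly the kind of "alarming positive result" the text describes.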

First of all, the evidence that linked smoking with lung cancer, and, in a later incidental finding, with heart disease, was purely statistical. Unlike bacterial or viral diseases, lung cancer did not permit laboratory scientists to demonstrate a clear connection between the cause (smoking) and the result (cancer). There were a few suggestive studies, but no unambiguous connection (that came only in the 1990s). Drastic economic and social actions that followed for half a century were based on sometimes-contested statistical inferences. Increasing numbers of scientists and public officials, however, believed that this new epidemiology had produced a fact: smoking caused lung cancer in many people. Many good, hard-headed scientists opposed the epidemiologists, however. To them, facts could be demonstrated and understood only in the laboratory, a stance that generally works well but had limitations in this instance.

A second change, and one that marked much of epidemiology in the second half of the twentieth century, was what one could show with epidemiology. In the nineteenth and early twentieth centuries, epidemiologists traced disease patterns caused by living things: parasites and germs. In the second half of the twentieth century, epidemiologists added a new, major mission: tracing how inanimate chemicals and particles could poison people.

During the prosperous 1950s, American consumers began to purchase large quantities of synthetic and other chemicals for everyday living, including cleansers such as detergents, pesticides such as DDT, metals such as beryllium, and plastics of many kinds. Many were especially toxic to children. For some time, European countries, still suffering from wartime privations, did not show the same patterns of poisonings found in the United States, for such household chemicals were uncommon there. But they soon caught up.

Epidemiologists were therefore detecting disease processes that were inaccessible to bacteriologists and other scientists. Statistical profiles could even show when more than one factor was operating, such as that adding asbestos exposure to smoking greatly increased the likelihood of developing lung cancer. In the late twentieth century, epidemiologists were doing impressive mathematical modeling of disease incidence patterns.

Detecting the effects of chemicals and particles on human health already occupied workers in one area of epidemiology: occupational medicine, in which investigators were often asked to explain what was happening when a group of workers all began showing the same symptoms without any obvious cause. All the dangers of the workplace had to be considered, especially gases, dusts, and poisonous chemicals. One such case was that of the “radium girls,” who painted glow-in-the-dark watch dials with radium paint beginning around 1917. Many died from radiation sickness.

In a shift from occupational epidemiology to the general environment, when smog attacks in the 1940s killed many people, it was apparent that the environment could be dangerous. In Japan, epidemiological evidence identified the cause of Minamata Disease—industrial waste that had been ingested by locally caught fish eaten by most people in the area. In her famous 1962 book Silent Spring, American biologist Rachel Carson (1907–1964) revealed the evidence of chemical pollution, stimulating the early environmental movement.

It can be no surprise, then, that epidemiologists who knew about terrible maladies that struck people in mines and factories—and on farms—might look beyond the workplace for epidemics of bad health that might be caused by chemicals, gases, and particles. By the 1960s and 1970s the environmental health movement and environmental medicine developed. From the beginning, epidemiologists, armed with new tools of statistical analysis and thinking in broadly ecological terms, had established the framework for that new twist in the history of science and medicine.

Epidemiologists could still trace outbreaks of food poisoning or illnesses from polluted water. But in the last decades of the twentieth century, they also focused on the incidence of other types of illnesses with invisible causes: radiation, chemicals, or particles. Like other sciences of the late twentieth century, epidemiology became more complex. Many investigators continued to analyze the roles of socioeconomic class and poverty in disease patterns, most notably in mental diseases. Computer simulations added greatly to the statistical power of epidemiological analysis. Vague conditions that were very difficult to characterize, such as allergies, demanded analysis but resisted being counted. Most distressing were the growing exceptions, the anomalies that cropped up in the everyday world of clinically obvious diseases and epidemics.

The first complication, one noted in the nineteenth century, was that in any epidemic, some people were naturally immune. One of the glories of modern epidemiology was its ability to isolate or make allowance for these exceptions. But even worse was the existence of mild cases that were not detectable and could not be counted by epidemiologists but which still contributed to contagion. Polio furnished many instances, but so, too, did the influenzas.

The next great complication came from the increasing detection of multiple causes of symptom complexes. Some types of diabetes were very confusing because hereditary conditions had to be combined with diet and, it was later determined, activity levels and other factors. Or, as in many diseases, animal reservoirs could repeatedly cause unpredictable outbreaks, whether of traditional illnesses or new ones, such as Lyme disease (carried by ticks on deer and other animals).

Still another, and later, complication was the appearance of pathological organisms (including viruses) that evolved a resistance to the medicines that controlled those pathogens. Influenza viruses showed a remarkable ability to mutate, and they often caused major epidemics before changes in the interaction between humans and viruses stopped the spread of the disease.

Throughout the twentieth century, the mathematics used by epidemiologists took them directly into the field of statistics. Practical problems, as in calculating how accidents affected health, led epidemiological scientists into advanced calculations that became major contributions to theoretical statistics.

Epidemiology scored some impressive victories. In the years before and after World War II, epidemiologists found that the high natural fluorine content of drinking water in many localities led to mottled teeth but also to fewer dental caries. Adding fluoride to public drinking water supplies led to a dramatic improvement in dental health. Another set of findings led to the removal of lead from gasoline in the United States beginning in the 1970s, with a dramatic decrease in childhood lead poisoning.

Even as epidemiologists moved into the twenty-first century, their old standard functions still served the public well. The experts who identified the severe acute respiratory syndrome (SARS) epidemic in 2002–2003, for example, were able to trigger public health responses that were surprisingly effective in stopping this dangerous disease that spread from China to Canada by airplane.

Modern Cultural Connections

At the end of the twentieth century, epidemiologists were identifying new health dangers such as HIV and the human form of mad cow disease. They played a very large part in testing the effects and safety of new medications—and in discovering unanticipated problems in treatments already widely used.

Yet at the beginning of the twenty-first century, epidemiologists found themselves debating the place of their many-faceted science. Was its purely scientific work caught between bacteriology on one side and public health on the other? Both of these sister disciplines had expanded and, along with epidemiology, had begun to explore the concept of “risk factors” for disease. Epidemiology itself had expanded to cover new types of disease, including those created by human social arrangements. Another concern was the health-care delivery system and the public health effects of individual medical treatments.

Within the field, leading thinkers continue to debate basic questions. Are investigators working with valid units? Is the modern concept of the cohort (a group of individuals sharing characteristics such as age or symptoms within the same time frame) a valid unit of analysis? On another level, can epidemiological methods untangle the interactions between genetic and environmental factors in different diseases? Indeed, some epidemiological scientists have returned to questioning the very idea of causation, and, in the age of genetic medicine, the idea of contagion continues to draw both refinements and doubters from inside and outside the science.

Epidemiologists continue to ask large questions. What is the natural limit of the human life span? What factors are involved not only in health but in vitality? How, for example, does childhood nutrition affect later mental and physical functioning? Epidemiology thrived as the ecological approach expanded so dramatically; it was no wonder that some epidemiologists saw in their science “a return to Hippocrates,” who looked to “airs, waters, and places” to explain human illnesses.

And yet the scientists who dealt with these large questions of well-being increasingly cast their work in terms of “risk.” Everyone in society lives with the risk of accident and disease, however unequally those risks are distributed. Some risks are constant, like seasonal ailments; others come unexpectedly, like accidents or strokes. Epidemiologists must deal with them all, using science to understand the world in which we all live.

See Also Biology: Ecology; Biomedicine and Health: Antibiotics and Antiseptics; Biomedicine and Health: Bacteriology; Biomedicine and Health: The Germ Theory of Disease; Biomedicine and Health: Virology.

Bibliography
Cassedy, James H. American Medicine and Statistical Thinking, 1800–1860. Cambridge, MA: Harvard University Press, 1984.

Coleman, William. Yellow Fever in the North: The Methods of Early Epidemiology. Madison: University of Wisconsin Press, 1987.

Doull, James A. “The Bacteriological Era (1876–1930).” In The History of American Epidemiology. Edited by Franklin H. Top. St. Louis: Mosby, 1952.

Gaudillière, Jean-Paul, and Ilana Löwy, eds. Heredity and Infection: The History of Disease Transmission. London: Routledge, 2001.

Hirst, L. Fabian. The Conquest of Plague: A Study of the Evolution of Epidemiology. Oxford: Clarendon Press, 1953.

Jorland, Gérard, Annick Opinel, and George Weisz, eds. Body Counts: Medical Quantification in Historical and Sociological Perspective. Montreal: McGill-Queen's University Press, 2005.

Lilienfeld, Abraham M. Times, Places, and Persons: Aspects of the History of Epidemiology. Baltimore: Johns Hopkins University Press, 1980.

Matthews, J. Rosser. Quantification and the Quest for Medical Certainty. Princeton: Princeton University Press, 1995.

Mendelsohn, J. Andrew. “From Eradication to Equilibrium: How Epidemics Became Complex after World War I.” In Greater Than the Parts: Holism in Biomedicine, 1920–1950. Edited by Christopher Lawrence and George Weisz. New York: Oxford University Press, 1998.

Morabia, Alfredo, ed. A History of Epidemiologic Methods and Concepts. Basel: Birkhäuser Verlag, 2004.

Ranger, Terence, and Paul Slack, eds. Epidemics and Ideas: Essays on the Historical Perception of Pestilence. Cambridge: Cambridge University Press, 1992.

Rothstein, William G. Public Health and the Risk Factor: A History of an Uneven Medical Revolution. Rochester: University of Rochester Press, 2003.

Top, Franklin H., ed. The History of American Epidemiology. St. Louis: C.V. Mosby, 1952.


Amsterdamska, Olga. “Demarcating Epidemiology.” Science, Technology, & Human Values 30, no. 1 (2005): 17–51.

John Burnham
