Water Pollution



WATER POLLUTION. Extensive water pollution in the United States began in the nineteenth century as a result of urbanization, industrial development, and modern agricultural practices. Although lumbering and mining despoiled individual lakes and rivers, the nation's cities were the sites of the most severe pollution. Early industrial by-products joined human sewage and animal waste to foul drinking water supplies. By the early 1800s, even horses declined New York City's public water, and one quarter of Boston's wells produced undrinkable water. Severe epidemics of the waterborne diseases cholera and typhoid fever swept through major cities, most notably New York in 1832.

The early response to such pollution was not so much to clean the water but rather to build reservoirs and aqueducts to import fresh water for direct delivery to neighborhoods and even some individual homes. Cities built large sewer systems to flush these waters away, usually either out to sea or down a nearby river. Sewers thus spread the previously more localized pollution, often fouling the water sources of other cities.

In the 1890s, scientists decisively linked many diseases, including typhoid and cholera, to the presence of human waste in water supplies. Cities began to filter their drinking water with remarkable results. The national urban death rate from typhoid, 36 per 100,000 in 1900, dropped to only 3 per 100,000 by 1935 and was virtually nonexistent by the twentieth century's end. The urban water projects that combined filtration, delivery, and disposal ranked among the largest public works projects in the nation's history. Chicago, for example, reversed the direction of the Chicago and Calumet Rivers, so by 1900 they no longer carried the city's waste into Lake Michigan, its primary source of fresh water. By the end of the twentieth century, New York City moved about 1.5 billion gallons of fresh water through more than 300 miles of aqueducts and 27 artificial lakes.

The industrial pollution of bodies of water not used for drinking proved more difficult to control. In 1912, Congress charged the Public Health Service (PHS) with investigating water pollution. Two years later, the PHS established the first water quality standards. In the 1920s, the service investigated industrial pollution but with little effect. State governments retained the primary responsibility for water regulation. Following the lead of Pennsylvania, many states sought to balance environmental quality with the needs of industry by giving relatively high protection to waters used for drinking supplies while allowing others to be freely used for waste disposal. New Deal programs provided significant federal funds to water pollution control, and over the course of the 1930s the population served by sewage treatment nearly doubled. But those programs left pollution control in the hands of state governments.

After World War II, continued urban pollution and runoff from artificial fertilizers increasingly used in agriculture degraded the water quality of many lakes. Eutrophication occurs when plants and bacteria grow at abnormally high rates due to elevated quantities of nitrogen or phosphorus. The decomposition of this elevated biomass consumes much of the water's oxygen, often leading to a cascade of changes in aquatic ecosystems. Many species of fish grow scarce or die off altogether, and algae "blooms" can make water unsafe to swim in or to drink. Although small urban lakes suffered from eutrophication as early as the 1840s, after World War II, population growth, increasing nitrogen-rich agricultural runoff, and the addition of phosphates to detergents polluted even bodies of water as large as Lake Erie. By 1958, the bottom waters of a 2,600-square-mile area of the lake were completely without oxygen, and algae grew in mats two feet thick over hundreds of square miles more. The nation's economic prosperity intensified problems, as pollution from heavy industry made some rivers and streams lifeless. In the 1960s, Cleveland authorities pronounced the Cuyahoga River a fire hazard, and at the end of the decade the river actually caught on fire. The more mobile and long-lasting industrial products polluted even waters remote from cities and industry. DDT, other pesticides and synthetic chemicals, mercury, and acid rain threatened numerous species and previously unaffected lakes and streams.

Such manifestations of a deepening pollution crisis prompted environmentalists and lawmakers to redouble pollution-control efforts. The major response, the 1972 Clean Water Act, shifted responsibility for the nation's waterways and water supply to the federal government. In the following decades, federal funds and regulations issued under the act's authority significantly raised standards for water purity. Repeatedly amended, the act halted the growth of water pollution, even in the face of decades of population and economic expansion. Most industries and municipalities greatly reduced their pollution discharges, with the consequent reversal of the eutrophication of many bodies of water, including Lake Erie. Nevertheless, "nonpoint" pollution sources, such as agricultural runoff and vehicle exhaust, continued to degrade water quality. The act made virtually no progress in reducing groundwater contamination. At the end of the twentieth century, regulating groundwater quality and grappling with nonpoint pollution remained the most formidable obstacles to those seeking to reverse water pollution.


Elkind, Sarah S. Bay Cities and Water Politics: The Battle for Resources in Boston and Oakland. Lawrence: University Press of Kansas, 1998.

Melosi, Martin V. The Sanitary City: Urban Infrastructure in America from Colonial Times to the Present. Baltimore: Johns Hopkins University Press, 2000.

Outwater, Alice. Water: A Natural History. New York: Basic Books, 1996.

Benjamin H. Johnson

See also Clean Water Act; Conservation; Sanitation, Environmental; Waste Disposal; Water Law; Waterways, Inland.

water industry


water industry. Water for human consumption was traditionally obtained from wells, ponds, or rivers. This remained adequate until rapid urbanization during the 17th and 18th cents., particularly the growth of London and the industrial cities elsewhere, brought about the need for better supply. The solutions, sought during the 19th cent., invariably came in response to the related problems of public hygiene and the spread of water-borne diseases, such as typhoid and cholera.

Among the earliest developments were the companies established in London using water from the Thames and its tributaries, one of the first being the New River scheme undertaken by Sir Hugh Myddleton in 1609. During the 17th cent. other companies were established, notably the York Buildings Company, sanctioned by letters patent from Charles II in 1675 and incorporated in 1691. The water was taken from the river by canals equipped with sluices and pumped by horse-powered gins to cisterns on higher ground, from which it was conveyed to wealthy customers' dwellings by service pipes connected to 7-inch wooden pipes laid through the streets. Public supply in London and other towns until the early 19th cent. was by similar conduits of wooden (and later iron) pipes to carry spring or well water to lead cisterns, which fed subsidiary lead pipes supplying communal taps or water carriers.

But as population increased, water supplies became contaminated by sewage and other effluent, creating major problems of public health in the larger towns, particularly after the first cholera epidemic in 1831–2. A series of investigations culminating in Edwin Chadwick's Report on the Sanitary Condition of the Labouring Population (1842) acted as spurs to improvement. Self-cleansing sewers of improved design required a constant supply of water under pressure, and this was easier to provide in some places than others. The Metropolitan Board of Works scheme, designed by Joseph Bazalgette for the drainage of London, and the largest of its time, was not completed until 1865. As in other 19th-cent. schemes, steam power was used for pumping.

During the early stages of industrialization, canal-building and the use of water power as a prime mover led to a greater understanding of hydraulics and water engineering. Both of these developments necessitated the construction of dams, sluices, and water channels, the design of which influenced the construction of reservoirs for water supply under gravity. Two of the earliest, using earth with a core of puddled clay, were those of the Edinburgh Water Company (1822) and the Shaws waterworks for domestic and industrial supply in Greenock (1825–7). Many similar structures were built in the Pennines, south Wales, and other suitable locations. However, these dams were relatively small and their strength limited. There were disastrous collapses at Holmfirth near Huddersfield in 1852 and Dale Dike near Sheffield in 1864. Traditional methods of dam construction were abandoned and replaced by masonry dams, the first being built at Vyrnwy in Wales to supply Liverpool, begun in 1881 and completed in 1892.

As demand increased, other natural or man-made reservoirs were developed. Glasgow tapped Loch Katrine in the Trossachs, Manchester the resources of the Lake District, while Birmingham, like Liverpool, exploited the potential of Wales. There the drowning of valleys for English water supply caused considerable local resentment, particularly among Welsh nationalists. However, from the age of ‘gas and water’ civic pride in the 19th cent. onward, water supply remained a largely non-political concern, controlled by local authorities and water boards. During the 1980s, amid much political controversy, it became one of several nationalized industries to be sold to the private sector. The difficulty of maintaining a regular supply of such stupendous quantities of water to homes and industry, compounded by the occasional drought, has meant that the water industry in the 1990s has been much in the public eye.

Ian Donnachie