Industrial Revolution, Second


materials
energy
engineering, communications, and transportation
biological technology
conclusions
bibliography

The term Second Industrial Revolution refers to a set of technological events that took place roughly between 1865 and 1914. Between the customary end of the First Industrial Revolution in the 1820s and the start of this new wave of spectacular inventions, technological progress had been anything but stagnant; the implicit notion that the periods of "Industrial Revolution" were more innovative than the half century in between is risky at best. Yet in some ways the innovations of the last third of the nineteenth century were not only spectacular and revolutionary; they also set the technological trajectory of much of the twentieth century and made it the period in which more economic growth took place than in the entire previous history of the human race.

Technology does not drive history. It is itself a function of cultural, social, and economic factors that have been a source of dispute and argument for generations. In the twenty-first century technology is widely seen as no less political and "socially constructed" than any other element of human culture. Such approaches are useful for certain purposes, but they tend at times to obscure the fact that technology affects the human material condition irreversibly, in directions that are notoriously difficult to predict. The years of the Second Industrial Revolution produced both the best and the worst that humankind could achieve in manipulating nature. Whatever one might think of the effects of the technological changes of this period, they ensured that human life in many dimensions would be transformed unrecognizably. Technology is knowledge. The artifacts and machines that embody it are nothing without the underlying knowledge that makes it possible to design and construct them, and then to operate and repair them. The story of technology is therefore the story of the growth of useful knowledge, of the institutions and incentives that make it grow and disseminate, and of the social capabilities that lead to its application.

The years from 1865 to 1914 witnessed an unprecedented growth in useful knowledge. Whether or not it was the most explosive period of growth in scientific knowledge ever is impossible to demonstrate. But it surely is the age in which science and technology became inextricably linked, in which the modern process of research and development was itself invented, and in which science indisputably established itself as indispensable to the process of economic growth. All the same, any facile generalization about the connection between science and technology needs to take account of three basic facts of this period. First, science owed technology at least as much as technology owed science. Second, a great deal of technology still advanced with little or no scientific understanding of the processes involved. Third, many scientific advances took place that led to only a few applications, and whose impact on technology, if any, lay far in the future. What the experience of this age shows is that while "full understanding" was rarely a prerequisite for a technique to work and be implemented, a technique can be improved and adapted at a faster rate when it is better understood than when it has been discovered by accident or through trial and error. Many of the techniques that emerged in the post-1865 decades were based on a very partial understanding (or epistemic base) of the physical or chemical processes involved.

However, this base expanded during the period and eventually led to improvements and applications that would have amazed the original inventors. The relation between the epistemic base and the technique was at times subtle and involved, far beyond the standard "linear" model postulating that science leads to applied science and applied science to technology. Science played a greater role in the new techniques of the post-1865 revolutions than it had in the First Industrial Revolution, but its role varied considerably between different economic activities.

The process of invention thus cannot be separated from "social knowledge." That such knowledge was essential is illustrated by the frequent duplication of invention. Three of the best-known inventions of the Second Industrial Revolution, the Bessemer process of steelmaking, the red dye named alizarin, and the Bell telephone, were made at practically the same time by different inventors. Far from demonstrating that "necessity is the mother of invention," this underlines that there was a common pool of underlying knowledge to which different people had access.

Inventions can be classified into macroinventions, major conceptual breakthroughs that occur fairly rarely and often are the result of individual genius and perseverance, and microinventions, which consist of small cumulative improvements in existing techniques or recombinations of them into new ones. Most productivity growth clearly occurred thanks to the cumulative effect of these small improvements. Without the great discontinuities, however, minor advances would sooner or later have run into diminishing returns.

The new industries that the Second Industrial Revolution created often differed from the old ones in more than just their reliance on formal knowledge. The scale of production at the plant level increased: chemical and electrical plants were in general subject to economies of scale, the huge steel ships required much larger yards, and mass production, as we shall see, became the rule rather than the exception. As a result, the relation between industry and markets changed: new techniques required support from finance and capital markets, and they needed workers with different skills and attitudes. If the modal worker in Europe in 1850 was still employed at home or in a small workshop, by 1914 most worked in large plants or offices.

Singling out technology as the prime player in the Second Industrial Revolution is to some extent arbitrary. The many subtle and complex ways in which technology interacted with the institutions of capitalism, changes in transportation and communications, political changes, urbanization, and imperialism all created an economy that was far more "globalized" than anything that had come before or that was to come in the first half of the twentieth century. Yet without technology, none of these changes could have come about, and without better useful knowledge, technology would eventually have frozen in its tracks, as it had done so often in previous centuries.

In what follows, this entry will outline the main technological developments of this era, noting in each case both the economic significance and the epistemic base on which the techniques rested. The entry will deal with the following general classes: materials; energy; engineering, communications, and transportation; and biological technology (agriculture and medicine).

materials

Steel

The material most associated with the Second Industrial Revolution is steel. By 1850 the age of iron had become fully established, but for many uses wrought iron was inferior to steel. The wear and tear on wrought iron machine parts and rails made them expensive in use, and for many applications, especially in machines and construction, wrought iron was insufficiently tenacious and elastic. The problem was not making steel; the problem was making cheap steel. This problem was famously solved by Henry Bessemer (1813–1898) in 1856. The Bessemer converter exploited the fact that the impurities in cast iron consisted mostly of carbon, and that this carbon could serve as a fuel if air were blown through the molten metal. The interaction of the air's oxygen with the metal's carbon created intense heat, which kept the iron liquid. Thus, by adding the correct amount of carbon or by stopping the blowing at the right time, the desired mixture of iron and carbon could be created, the high temperature and turbulence of the molten mass ensuring an even mixture. At first, Bessemer steel was of very poor quality, but then a British steelmaker, Robert Mushet (1811–1891), discovered that adding spiegeleisen, an alloy of carbon, manganese, and iron, to the molten iron as a recarburizer solved the problem. The other drawback of Bessemer steel, as was soon discovered, was that phosphorus in the ores spoiled the quality of the steel, and thus the process was for a while confined to low-phosphorus Swedish and Spanish ores.
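
In modern notation, the chemistry the converter relied on can be summarized roughly as follows (a simplified sketch; the slag-forming reactions are omitted): the blast of air oxidizes the carbon, silicon, and manganese dissolved in the pig iron,

\[ 2\,\mathrm{C} + \mathrm{O_2} \rightarrow 2\,\mathrm{CO}, \qquad \mathrm{Si} + \mathrm{O_2} \rightarrow \mathrm{SiO_2}, \qquad 2\,\mathrm{Mn} + \mathrm{O_2} \rightarrow 2\,\mathrm{MnO} \]

All three reactions are strongly exothermic, which is why, as noted above, no external fuel was needed to keep the charge molten.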

A different path was taken by Continental metallurgists, who jointly developed the Siemens-Martin open-hearth process, based on the idea of cofusion—melting together low-carbon wrought iron and high-carbon cast iron. The technique used hot waste gases to preheat incoming fuel and air, and mixed cast iron with wrought iron in the correct proportions to obtain steel. The hearths were lined with special bricks to withstand the high temperatures. The process allowed the use of scrap iron and low-grade fuels, and thus turned out to be more profitable than the Bessemer process in the long run. Open-hearth steel took longer to make than Bessemer steel, but for that very reason permitted better quality control. Bessemer steel also tended to fracture inexplicably under pressure, a problem that was eventually traced to small nitrogen impurities. In 1900 Andrew Carnegie, the American steel king, declared that the open-hearth process was the future of the industry.

Like the Bessemer process, the Siemens-Martin process was unable to use the phosphorus-rich iron ores found widely on the European Continent. Scientists and metallurgists did their best to resolve this bottleneck, but it fell to two British amateur inventors, Percy Gilchrist (1851–1935) and Sidney Thomas (1850–1885), to hit upon the solution in 1875. By adding limestone to their firebricks, which combined with the harmful phosphorus to create a basic slag, they neutralized the problem. It seems safe to say that the German steel industry could never have developed as it did without this invention. Not only were the cost advantages huge, but the Germans (who adopted the "basic" process immediately) also managed to convert the phosphoric slag into a useful fertilizer. While the Bessemer and Siemens-Martin processes produced bulk steel at rapidly falling prices, high-quality steel continued for a long time to be produced in Sheffield using the old crucible technique.
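
The underlying chemistry, in stylized form (the real slag chemistry is more involved), is that phosphorus oxidized during the blow is captured by the basic lime of the lining:

\[ 4\,\mathrm{P} + 5\,\mathrm{O_2} \rightarrow 2\,\mathrm{P_2O_5}, \qquad \mathrm{P_2O_5} + 3\,\mathrm{CaO} \rightarrow \mathrm{Ca_3(PO_4)_2} \]

The calcium phosphate ended up in the slag, which is what made the "basic" by-product salable as fertilizer.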

Cheap steel soon found many uses beyond its original spring and dagger demand; by 1880 buildings, ships, and railroad tracks were increasingly made out of steel. It became the fundamental material from which machines, weapons, and implements were made, as well as the tools that made them. Much of the technological development in other industries depended on the development of steel; from better tools and implements to structures and locomotives, it replaced other materials wherever possible. It created economic empires and legendary fortunes, from the great German magnates such as Krupp and Thyssen to the mammoth Carnegie steel corporation.

Steel was hardly an example of the science-leads-to-technology model. The Bessemer steelmaking process of 1856 was invented by a man who by his own admission had "very limited knowledge of iron metallurgy." Henry Bessemer's knowledge was so limited that the typical Bessemer blast, in his own words, was "a revelation to me, as I had in no way anticipated such results" (quoted in Carr and Taplin, p. 19). All the same, the growth of the epistemic base in the preceding half century was pivotal to the development of the process. Bessemer knew enough chemistry to realize that his process had succeeded where similar experiments by others had failed because the pig iron he had used was, by accident, singularly free of phosphorus. By adding carbon at the right time, he would get the correct mixture of carbon and iron—that is, steel. Subsequent improvements, too, were odd combinations of careful research and inspired guesswork. Steel was a paradigmatic "general purpose technology" because it combined with almost any area of production to increase productivity and capability.

Chemicals

In chemicals the story is somewhat similar, in that the first steps toward a new industry were made by a lucky if well-informed Briton. Sir William Henry Perkin (1838–1907), by a fortunate accident, made the first major discovery in what was to become the modern chemical industry. Perkin, however, was trained by August Wilhelm von Hofmann (1818–1892), who was teaching at the Royal College of Chemistry at the time, and his initial work was inspired and instigated by his German teacher. The eighteen-year-old Perkin was searching for a chemical process to produce artificial quinine. While pursuing this work, in 1856 he accidentally discovered aniline purple, or, as it became known, mauveine, which replaced the natural dye mauve. Three years later a French chemist, Emanuel Verguin (1814–1864), discovered aniline red, or magenta, as it came to be known. German chemists then began the search for other artificial dyes, and almost all additional successes in this area were scored by them. In the 1860s Hofmann and Friedrich Kekulé von Stradonitz (1829–1896) formulated the structure of the dyestuffs' molecules. In 1869, after years of hard work, a group of German chemists synthesized alizarin, the red dye previously produced from madder roots, beating Perkin to the patent office by one day. In Britain the synthesis of alizarin marked the end of a series of brilliant but unsystematic inventions; in Germany it marked the beginning of the process through which the Germans established their hegemony in chemical discovery.

Although Victorian Britain was still capable of the occasional lucky masterstroke that opened a new area, the patient, systematic search for solutions by people with formal scientific and technical training better suited the German traditions. In 1840 Justus von Liebig (1803–1873), a chemistry professor at Giessen, published his Organic Chemistry in Its Applications to Agriculture and Physiology, which explained the importance of fertilizers and advocated the application of chemicals in agriculture. Other famed German chemists, such as Friedrich Wöhler, Robert Wilhelm Bunsen, Leopold Gmelin, Hofmann, and Kekulé von Stradonitz, jointly created modern organic chemistry, without which the chemical industry of the second half of the nineteenth century would not have been possible. It was one of the most prominent examples of how formal scientific knowledge came to affect production techniques. German chemists succeeded in developing indigotin (synthetic indigo, perfected in 1897) and the contact process for sulphuric acid (1875). Soda-making had been revolutionized by the Belgian Ernest Solvay (1838–1922) in the 1860s. In explosives, dynamite, discovered by Alfred Nobel (1833–1896), was used in the construction of tunnels, roads, oil wells, and quarries; if ever there was a labor-saving invention, this was it. In the production of fertilizer, developments began to accelerate in the 1820s. Some of them were the result of resource discoveries, like Peruvian guano, which was imported in large quantities to fertilize the fields of England. Others were by-products of industrial processes. A Dublin physician, James Murray (1788–1871), showed in 1835 that superphosphates could be made by treating phosphate rocks with sulphuric acid. The big breakthrough came, however, in 1840 with the publication of von Liebig's work, commissioned by the British Association for the Advancement of Science. Research proceeded in England, where John Bennet Lawes (1814–1900) carried out path-breaking work at his famous experimental agricultural station at Rothamsted, putting into practice the chemical insights provided by von Liebig. In 1843 he established a superphosphates factory that used mineral phosphates. In Germany, especially Saxony, state-supported institutions subsidized agricultural research, and the results eventually led to vastly increased yields. Nitrogen fertilizers were produced from the caliche (natural sodium nitrate) mined in Chile.
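
The superphosphate process that Murray demonstrated and Lawes industrialized admits a compact summary. In stylized form (treating phosphate rock as pure tricalcium phosphate, which real rock is not), the acid treatment converts an insoluble phosphate into a soluble one:

\[ \mathrm{Ca_3(PO_4)_2} + 2\,\mathrm{H_2SO_4} \rightarrow \mathrm{Ca(H_2PO_4)_2} + 2\,\mathrm{CaSO_4} \]

The monocalcium phosphate is water-soluble and therefore available to plants, which is the whole point of the treatment.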

The most striking macroinvention in chemistry came late in the Second Industrial Revolution. The Haber process to make ammonia—developed by Fritz Haber (1868–1934) and the chemists Carl Bosch (1874–1940) and Alwin Mittasch (1869–1953) of BASF (Baden Aniline and Soda Manufacturing)—and the discovery around 1908 of how to convert ammonia into nitric acid made it possible for Germany to continue producing nitrates for fertilizers and explosives after its Chilean supplies were cut off during World War I. The ammonia-producing process must count as one of the most important inventions ever made in the chemical industry; Vaclav Smil has called it the most important invention of the modern age. It used two abundant substances, nitrogen and hydrogen, to produce the basis of the fertilizer and explosives industries for many years to come.
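
The core reaction is simple to state in schematic form, though making it run at industrial scale required high pressures, temperatures of several hundred degrees, and the iron-based catalysts Mittasch systematized:

\[ \mathrm{N_2} + 3\,\mathrm{H_2} \rightleftharpoons 2\,\mathrm{NH_3} \]

The subsequent conversion of ammonia into nitric acid proceeds by catalytic oxidation (the process associated with Wilhelm Ostwald), of which the first step is

\[ 4\,\mathrm{NH_3} + 5\,\mathrm{O_2} \rightarrow 4\,\mathrm{NO} + 6\,\mathrm{H_2O} \]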

Chemistry also began its march toward the supply of new materials. Charles Goodyear (1800–1860), an American tinkerer, invented the vulcanization process of rubber in 1839, which made widespread industrial use of rubber possible. Another American, John Wesley Hyatt (1837–1920), succeeded in 1869 in creating the first synthetic plastic, which he called celluloid. Its economic importance was initially modest because of its flammability, and it was primarily used for combs, knife handles, piano keys, and baby rattles, but it was a harbinger of things to come. The breakthrough in synthetic materials came only in 1907, when the Belgian-born American inventor Leo Hendrik Baekeland (1863–1944) discovered Bakelite. The reason for the long delay in the successful development of Bakelite was simply that neither chemical theory nor practice had been able to cope with such a substance before. Even Baekeland did not fully understand his own process, as the macromolecular chemical theories that explain synthetic materials were not developed until the 1920s. Once again, science and technology were moving ahead in leapfrogging fashion.

energy

By 1850 energy production had reached a rather odd situation: steam power was spreading at an unprecedented rate through the adoption of locomotives, steamships, and stationary engines, yet the underlying physics was not well understood. This changed around 1850, when the work of Rudolf Clausius, James Prescott Joule, and others established what William Thomson (Lord Kelvin) called thermodynamics. Thermodynamics was rather quickly made accessible to engineers and designers by such men as William Rankine. The understanding of the principles governing the operation of devices that converted heat into work and their efficiency not only led to substantial improvement in the design of steam engines but also made people realize the limitations of steam. The Second Industrial Revolution thus led to the development of the internal combustion engine, an invention that like no other has determined the material culture of the twentieth century.
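
The key result engineers took from the new thermodynamics can be stated in one line. For any engine converting heat into work between a hot source at absolute temperature T_h and a cold sink at T_c, the efficiency is bounded by the Carnot limit:

\[ \eta \le 1 - \frac{T_c}{T_h} \]

With the modest boiler temperatures practical steam engines could sustain, this bound made the limitations of steam quantitatively plain and pointed designers toward engines with hotter working cycles.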

Internal combustion

Harnessing energy efficiently was never a simple problem. During the nineteenth century dozens of inventors, realizing the advantages of internal combustion over steam, tried their hand at the problem. A working model of a gas engine was first constructed by the Belgian Étienne Lenoir (1822–1900) in 1859 and perfected in 1876, when a German traveling salesman, Nicolaus August Otto (1832–1891), built a gas engine using the eponymous four-stroke principle. Otto had worked on the problem since 1860, when he had read about Lenoir's machine in a newspaper. He was an inspired amateur without formal technical training. Otto initially saw the four-stroke engine as a makeshift solution to the problem of achieving a high enough compression; only later was his four-stroke principle, which is still the heart of most automobile engines, acclaimed as a brilliant breakthrough and recognized as the only way in which a Lenoir-type engine could work efficiently. The "silent Otto," as it became known (to distinguish it from a noisier and less successful earlier version), was a huge financial success. The advantage of the gas engine was not so much its silence as the fact that, unlike the steam engine, it could be turned on and off at short notice. In 1885 two Germans, Gottlieb Wilhelm Daimler (1834–1900) and Carl Friedrich Benz (1844–1929), each succeeded in building an Otto-type, four-stroke gasoline-burning engine, employing a primitive surface carburetor to mix the fuel with air. Benz's engine used an electrical induction coil powered by an accumulator, foreshadowing the modern spark plug. In 1893 Wilhelm Maybach (1846–1929), one of Daimler's employees, invented the modern float-feed carburetor.
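
Thermodynamics later explained why Otto's pursuit of compression was the right instinct. In the idealized air-standard analysis of the four-stroke cycle (a textbook abstraction, not a description of any real engine of the period), efficiency depends only on the compression ratio r and the ratio of specific heats γ of the working gas:

\[ \eta = 1 - \frac{1}{r^{\gamma - 1}} \]

so that every gain in compression translated directly into a gain in fuel economy.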

The automobile became an economic reality after a lightning series of microinventions that solved many of the teething problems of the new technology: pneumatic tires, the radiator, the differential, the crank-starter, the steering wheel, and pedal-brake control. But the application of the four-stroke internal combustion engine and its cousin, the diesel engine, went far beyond the motor car: it made heavier-than-air flying machines possible, took a mobile source of energy to the fields in the form of tractors, and eventually replaced steam for all but the most specialized uses. Unlike the Otto engine, Rudolf Diesel's (1858–1913) machine was designed by a qualified engineer, trained in the modern physics of engines and searching for an efficient thermodynamic cycle. Internal combustion marked a new age in energy sources. Until 1865 coal and peat had been the only sources of fossil fuel; by 1914 oil produced in Texas, Romania, and Russia was pointing to a new oil-driven energy age, now in its postmaturity.

Electricity

The other development in energy technology associated with this age was electricity. Since the middle of the eighteenth century, electrical phenomena had fascinated inventors. Electrical power was used in scientific research, public displays, and from the late 1830s on in telegraphy. Other uses were believed to be possible, as Jules Verne's 20,000 Leagues under the Sea (1870) illustrates. Between 1865 and 1890 most of the technical obstacles to Verne's dreams were overcome.

Among those were the problem of generation, resolved in the years 1866 to 1868 through the discovery of the principle of self-excitation; the need to transform high-voltage current (the most efficient way to transport electricity) into low-voltage current (safe and effective in use), met by the Gaulard-Gibbs transformer; and the choice between alternating and direct current, decided in the late 1880s in favor of the former when Nikola Tesla (1856–1943) designed the polyphase electrical motor that could run on alternating current.
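
The physics behind the preference for high-voltage transmission is elementary and was well understood by the 1880s. For a line delivering power P = VI at voltage V, the resistive loss in a conductor of resistance R is

\[ P_{\text{loss}} = I^2 R = \left( \frac{P}{V} \right)^{\!2} R \]

so raising the voltage tenfold cuts transmission losses a hundredfold for the same delivered power; the transformer made conversion between high and low voltage cheap, but only for alternating current.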

Electricity was of course not a form of energy but rather a mode of transporting and utilizing it; the real sources of power still had to come from traditional sources of energy such as water, steam, and later diesel engines. The steam turbine, another invention deeply indebted to thermodynamics, was developed by Charles Algernon Parsons (1859–1932) and Carl Gustaf Patrik de Laval (1845–1913) in 1884 and became central to electricity generation. The connection between science and the development of electricity is complex. In the decades before the Second Industrial Revolution scientists like Michael Faraday, André-Marie Ampère, and Georg Simon Ohm established empirical regularities such as the laws named after them.

Whether those insights constituted an epistemic base of sufficient width remains in dispute. The electron, the basic entity that makes electricity work, was discovered only in 1897, and many of the major figures in the electricity revolution—above all, of course, Thomas Alva Edison (1847–1931)—had little or no command of the formal physics developed by James Clerk Maxwell (1831–1879), Hermann Ludwig Ferdinand von Helmholtz (1821–1894), and other pioneers of classical electrodynamics.

Yet the use of electricity transformed the age. It, too, was a general purpose technology if ever there was one. In lighting, heating, refrigeration, transportation, and production engineering, electricity lived up to the hopes that Faraday had maintained. It was clean, quiet, flexible, easy to control, and comparatively safe. Lights and motors could be turned on and off with the flick of a finger. On the shopfloor it was more efficient because it did not need shafting and belting, a major drain on energy and space. Electrical power could be turned on when needed. But above all, electricity was divisible and democratic: it could be provided to small producers and consumers with the same ease with which it was provided to large firms. In that sense, it went against the tide of the Second Industrial Revolution, which tended to favor large-scale production.

engineering, communications, and transportation

The Second Industrial Revolution changed production engineering through a transformation so revolutionary and dramatic that while it was essentially unknown in 1850, by 1914 it was already becoming dominant, and by 1950 manufacturing was unimaginable without it. This process, often referred to as "mass production," actually consisted of a number of elements. One of those was modularity, or the reliance on interchangeable parts, first introduced in firearms and clocks, and later applied to almost everything that had moving parts and to other components that could be produced in large batches and then assembled. For parts to be interchangeable, they needed to be produced with a tolerance sufficiently low to allow true interchangeability, which of course meant very high levels of accuracy in machine tools and cheap, high-quality materials, above all steel. The economies of scale that this permitted were matched only by the ease of repair that modularity implied. The development of continuous flow processes, often associated with assembly lines, was actually pioneered by meat packers engaged in disassembly. The advantage of this system was that it rationalized a fine division of labor and yet imposed on the process a rate of speed determined by the employer. Henry Ford's (1863–1947) automobile assembly plant combined the concept of interchangeable parts with that of continuous flow processes, and allowed him to mass-produce a complex product and yet keep its price low enough that it could be sold as a people's vehicle. Europe saw that it worked and imitated: Fordismus became a buzzword in German manufacturing.

Mass production engineering was made possible by more than just precision engineering and clever redesign of plants. Success depended not only on the ingenuity and energy of the inventor but also on the willingness of contemporaries to accept the novelty: of workers to accept mind-numbingly monotonous work in which they surrendered any pretense of creativity, and of consumers to accept cookie-cutter identical products made in huge batches. Bicycles, sewing machines, agricultural equipment, clothes, and eventually Ford's Model T provided the consumer with the option of cheap but nondistinctive products; those who wanted custom-made products had to pay for them. To be sure, not all manufacturing was of that nature, and there was always room for small, flexible firms that produced specialized products and used niche techniques.

The Second Industrial Revolution was also the age of the rise of technological "systems," an idea that was first enunciated in full by the historian Thomas Hughes. Again, some rudimentary systems of this nature were already in operation before 1870: railroad and telegraph networks, and, in large cities, gas, water-supply, and sewage systems. These systems expanded enormously after 1870, and a number of new ones were added, electrical power and telephone being the most important additions. Large technological systems turned from an exception into a commonplace.

Systems required a great deal of coordination that free markets did not always find easy to supply, and hence governments or other leading institutions ended up stepping in to determine technological standards such as railroad gauges, electricity voltages, the layout of typewriter keyboards, rules of the road, and other forms of standardization. The notion that technology consisted of separate components that could be optimized individually—never quite literally true—became less and less appropriate after 1870. This period was also the age in which communications technology changed pari passu with everything else. The most dramatic macroinvention had occurred before: the electric telegraph permitted transmission of information at a speed faster than people could move, a capability previously limited to such devices as the semaphore and homing pigeons. After 1870 the telegraph continued to be improved, and prices fell as its reach increased. Alexander Graham Bell's (1847–1922) and Elisha Gray's (1835–1901) telephone, supplemented by the switchboard (1878) and the loading coil (1899), became one of the most successful inventions of all time. Unlike the telegraph, it has had a remarkable capability of combining with other techniques such as satellites and cellular technology. The principle of wireless telegraphy, as yet unsuspected at that time, was implicit in the theory of electromagnetic waves proposed on purely theoretical grounds by James Clerk Maxwell in 1865. The electromagnetic waves suggested by Maxwell were finally demonstrated to exist by a set of brilliant experiments conducted by Heinrich Rudolph Hertz in 1888. The Englishman Oliver Joseph Lodge (1851–1940) and the Italian Guglielmo Marconi (1874–1937) turned the work of these ivory-tower theorists into wireless telegraphy in the mid-1890s, and in 1906 the Americans Lee De Forest (1873–1961) and Reginald Aubrey Fessenden (1866–1932) showed how wireless radio could transmit not only Morse signals but sound waves as well, through the miracle of amplitude modulation.
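
Maxwell's purely theoretical prediction can be compressed into a single relation. From his equations, electromagnetic disturbances in free space propagate as waves at a speed fixed by the electric and magnetic constants:

\[ c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}} \approx 3 \times 10^8 \ \text{m/s} \]

The coincidence of this value with the measured speed of light is what led Maxwell to identify light itself as an electromagnetic wave; it is the same waves, at much lower frequencies, that Hertz generated and Marconi put to work.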

The most startling change in transportation was the conquest of the air. For decades it was unclear whether that achievement would be attained through machines lighter than air, such as Zeppelins, and before Kitty Hawk (1903) many serious scientists doubted that machines heavier than air could ever fly. Here, then, was a classic case of practice outracing theory, though the Wright brothers (Wilbur [1867–1912] and Orville [1871–1948]) were well informed about best-practice aeronautical engineering. Ocean shipping remained by far the most important form of transport, gaining in efficiency and frequency as ships grew in size, thanks to steel, turbines, and improvements in design. By 1914 sailing ships, for millennia the main means of long-distance mobility, had been relegated to the status of a rich person's toy, replaced by giants such as the Titanic. On the other end of the size scale stood that simple, obvious mode of personal transportation, the bicycle—perfected by the Coventry mechanics James (1830–1881) and John K. Starley (1854–1901)—which, much like the mass transit systems that emerged in the 1880s, offered poor people an alternative to walking.

biological technology

In an age when food was still the largest item in most household budgets, the standard of living of the population depended above all on food supply and nutrition. The new technologies of the nineteenth century affected food supplies through production, distribution, preservation, and eventually preparation. The fall in shipping prices meant that after 1870 Europe was the recipient of cheap agricultural products from the rest of the world. Agricultural productivity owed much to the extended use of fertilizers. Following the emergence of an epistemic base in organic chemistry in the 1840s, farmers learned to use nitrates, potassium, and phosphates. The productivity gains in European agriculture are hard to imagine without the gradual switch from natural fertilizer, produced mostly on the spot by farm animals, to commercially produced chemical fertilizers.

Fertilizers were not the only scientific success in farming: the use of fungicides such as the Bordeaux mixture, invented in 1885 by the French botanist Pierre-Marie Alexis Millardet (1838–1902), helped conquer the dreaded potato blight that had devastated Ireland forty years earlier. Food supplies were also enhanced by better food preservation methods: refrigerated ships could carry fresh meat and dairy, and dehydration and canning were continuously improved as well. Food consumption and human health were vastly enhanced by what remains the greatest discovery of the era: the germ theory of disease. Once it was understood that food putrefaction was caused by microorganisms, new processes such as pasteurization could help preserve essential nutrients, and drinking water could be purified (later chlorinated). The germ theory had an enormous impact on preventive medicine, even if clinical technology advanced but little, and helped bring about the sharp decline in mortality rates during the quarter century before World War I. While the pharmaceutical industry made little progress (Bayer's spectacularly successful aspirin of 1898 notwithstanding), the Second Industrial Revolution left people not only richer but also, on average, a lot healthier.

conclusions

The Second Industrial Revolution was, in many ways, the continuation of the first. In many industries there was direct continuity. Yet it differed from its predecessor in a number of crucial aspects. First, it had a direct effect on real wages and standards of living, which rose significantly between 1870 and 1914. Its impact on economic growth and productivity was far more unequivocal than any technological advance associated with the First Industrial Revolution. It vastly augmented the direct effect of technological advances on daily life and household consumption. It contributed to the global integration of markets and information, a process that sadly came to an end in the fateful days of August 1914. Finally, by changing the relation between knowledge of nature and how it affected technological practices, it irreversibly changed the way technological change itself occurs. In so doing, what was learned and done in these years paved the way for many more industrial revolutions to come.

See also Banks and Banking; Capitalism; Economic Growth and Industrialism; Industrial Revolution, First; Science and Technology.

bibliography

Bryant, Lynwood. "The Beginnings of the Internal Combustion Engine." In Technology in Western Civilization, edited by Melvin Kranzberg and Carroll W. Pursell, Jr., vol. 1, 648–663. New York, 1967.

——. "The Role of Thermodynamics: The Evolution of the Heat Engine." Technology and Culture 14 (1973): 152–165.

Carr, James Cecil, and W. Taplin. A History of the British Steel Industry. Oxford, U.K., 1962.

Hounshell, David A. From the American System to Mass Production, 1800–1932: The Development of Manufacturing Technology in the United States. Baltimore, Md., 1984.

Hughes, Thomas P. Networks of Power: Electrification in Western Society, 1880–1930. Baltimore, Md., 1983.

Mokyr, Joel. The Lever of Riches: Technological Creativity and Economic Progress. New York, 1990.

——. The Gifts of Athena: Historical Origins of the Knowledge Economy. Princeton, N.J., 2002.

Smil, Vaclav. Creating the Twentieth Century: Technical Innovations of 1867–1914 and Their Lasting Impact. New York, 2005.

Smith, Crosbie, and M. Norton Wise. Energy and Empire: A Biographical Study of Lord Kelvin. Cambridge, U.K., 1989.

Joel Mokyr
