
SCIENTIFIC AND TECHNICAL UNDERSTANDING OF ENERGY

The word "energy" entered English and other European languages in the sixteenth century from Aristotle's writings, and was restricted to meanings close to his until the nineteenth century. The total entry for "Energy" in the first edition (1771) of the Encyclopaedia Britannica is as follows: "A term of Greek origin signifying the powers, virtues or efficacy of a thing. It is also used figuratively, to denote emphasis of speech." These meanings survive; Swift's energetic pronouncement in 1720 that "Many words deserve to be thrown out of our language, and not a few antiquated to be restored, on account of their energy and sound" still rings a bell. But they are no longer dominant. Stronger by far is the scientific meaning, fixed between 1849 and 1855 by two men, William Thomson (the future Lord Kelvin) and W. J. Macquorn Rankine, professors, respectively, of natural philosophy and engineering at the University of Glasgow in Scotland.

Though Rankine and Thomson gave the term "energy" its modern scientific meaning, they did not originate the concept. The idea that through every change in nature some entity stays fixed arose in a complex engagement of many people, culminating between 1842 and 1847 in the writings of four men: Robert Mayer, James Prescott Joule, Hermann Helmholtz, and Ludvig Colding. The later choice of a word, easy as it may seem, is no mere detail. In the 1840s the lack of a word for the concept struggling to be born was a heavy impediment. The two nearest, "Kraft" in German and "force" in English, were riddled with ambiguity. With "energy" everything fell into place.

Three issues are involved in the coming-into-being of both concept and term: (1) the eighteenth-century debate on Leibniz's notion of vis viva ("living force"), (2) the unfolding eighteenth- and nineteenth-century understanding of steam engines, and (3) the search after 1830 for correlations among "physical forces." Concept must come first, with (3) treated before (2). On force as a term, compare (1) and (3) with Newton's definition. For the Newtonian, a force is a push or a pull, which, unless balanced by another, makes objects to which it is applied accelerate. "Living force" was not that at all: It corresponded to the quantity that we, under Thomson's tutelage, know as kinetic energy. As for forces being correlated, force in that context bore a sense not unlike the modern physicist's four "forces of nature," gravity, electromagnetism, and the two nuclear ones. It referred to the active principle in some class of phenomena, vital force, chemical force, the forces of electricity, magnetism, heat, and so on. To correlate these—to unify them under some over-arching scheme—was as much the longed-for grail in 1830 as Grand Unification would be for physicists after 1975.

CORRELATION OF FORCES AND CONSERVATION OF ENERGY

Sometime around 1800 there came over the European mind a shadow of unease about Newtonian science. Beauty, life, and mystery were all expiring in a desert of atoms and forces and soulless mechanisms—so the poets held, but not only they. Science also, many people felt, must seek higher themes, broader principles, deeper foundations—symmetries, connections, structures, polarities, beyond, across, and above particular findings. Elusive, obscure, sometimes merely obscurantist, this inchoate longing nevertheless actualized itself within a few decades in a succession of luminous discoveries, and flowed forward into the idea of energy.

In Germany it took shape in a much-praised, much-derided movement, interminable in discourse about Ur-principles and life forces, spearheaded by Lorenz Oken and Friedrich Schelling under the name Naturphilosophie, turgid indeed, but source of a powerful new credo. Nature is one, and so must all the sciences be; the true philosopher is one who connects. Thus in 1820 Hans Christian Oersted joined magnetism to electricity with his amazing discovery that an electric current deflects a magnet. Readers of Oersted baffled by strange talk of "polarities" and the "electric conflict" should know that it stems from Schelling: Oersted was a Naturphilosoph. But there were other voices. Similar hopes expressed in a more down-to-earth English tone mark John Herschel's Preliminary Discourse on the Study of Natural Philosophy (1830), Mary Somerville's Connexion of the Physical Sciences (1839), and W. R. Grove's On the Correlation of Physical Forces (1843). Fluent in German and Hanoverian in descent, Herschel has special interest because his "natural philosophy" had so resolutely English a cast, remote from Naturphilosophie. His master was Bacon, yet Herschel sought connection. So above all did the man whose first published paper (1816) was on the composition of caustic lime from Tuscany, and his last (1859) on effects of pressure on the melting point of ice, the profoundly English Michael Faraday.

From worldviews about the connectedness of everything to the correlation of specific "forces" to enunciating the law of energy was a long journey requiring two almost antithetical principles: great boldness in speculation and great exactness in measurement. Two eras may be identified: an era of correlation from the discovery of electrochemistry in 1800 through much of Faraday's career, and the era of energy beginning in the late 1840s. The two are connected: Without correlation there might have been no energy, and with energy new light was thrown on correlations, but they are not the same. In the great correlations of Oersted and Faraday, the discovery, when finally made, often embodied a surprise. An electric current exerts a magnetic force, as Oersted had expected, but it is a transverse force. A magnet generates an electric current, as Faraday had hoped, but it must be a moving magnet, and the current is transient. A magnetic field affects light traversing glass, as Faraday had surmised, but its action is a twisting one far from his first guess. Not number but vision was the key, an openness to the unexpected. Energy is different. To say that energy of one kind has been transformed into energy of another kind makes sense only if the two are numerically equal.

Between Faraday and Joule, the man after whom very properly the unit of energy, the joule, is named, lay an instructive contrast. Both master experimenters with a feeling for theory, they stood at opposite conceptual poles: geometry versus number. For Faraday, discovery was relational. His ideas about fields of force reveal him, though untrained, as a geometer of high order. His experiments disclose the same spatial sense; number he left mainly to others. "Permeability" and "susceptibility," the terms quantifying his work on magnetism, are from Thomson; the quantity characterizing his magneto-optical effect is (after the French physicist Marcel Verdet) Verdet's constant. For Joule, number was supreme. His whole thinking on energy originated in the hope that electric motors run from batteries would outperform steam engines. His best experiments yielded numbers, the electrical and mechanical equivalents of heat. His neatest theoretical idea was to compute one number, the extraordinarily high velocity gas molecules must have if pressure is to be attributed to their impacts on the walls of the container.

The doctrine of energy came in the 1840s to the most diverse minds: to Mayer, physician on a Dutch ship, off Java in 1842; to Joule, brewer turned physicist, at Manchester in 1843; to Colding, engineer and student of Oersted's, at Copenhagen in 1843; to Helmholtz, surgeon in the Prussian Army, at Potsdam in 1847. In a famous text, "Energy Conservation as an Example of Simultaneous Discovery" (1959), Thomas Kuhn listed eight others, including Faraday and Grove, directly concerned with interconvertibility of forces, and several more peripherally so, arguing that all in some sense had a common aim. Against this it is necessary to reemphasize the distinction between correlation and quantification (Faraday, the geometer, found more "correlations" than any energist) and the desperate confusion about force. In exchanges in 1856 with the young James Clerk Maxwell, Faraday discussed "conservation of force"—meaning what? Not conservation of energy but his geometric intuition of a quite different law, conservation of flux, the mathematical result discovered in 1828 by M. V. Ostrogradsky, afterward Teutonized as Gauss's theorem. In 1856, energy—so natural to the twenty-five-year-old Maxwell—was utterly foreign to the sixty-four-year-old master from another age.

Physicists regard energy, and the laws governing it, as among the most basic principles of their science. It may seem odd that of the four founders two were doctors, one an engineer, and one only, Joule (and he only inexactly), could have been called then a physicist. The surprise lessens when we recognize that much of the impetus of discovery came from the steam engine, to which one key lay in that potent term coined and quantified in 1782 by James Watt, "horsepower." Seen in one light, horses and human beings are engines.

WORK, DUTY, POWER, AND THE STEAM ENGINE

The steam engine was invented early in the eighteenth century for pumping water out of mines. Later Watt applied it to machinery, and others to trains and ships, but the first use was in mines, especially the deep tin and copper mines of Cornwall, in southwestern England. Crucial always, to use the eighteenth-century term, was the duty of the engine: the amount of water raised through consuming a given quantity of fuel, so many foot-pounds of water per bushel of coal. To the mineowner it was simple. Formerly, pumping had been done by horses harnessed to a rotary mill. Now (making allowance for amortization of investment), which was cheaper: oats for the horses or coal for the engine? The answer was often nicely balanced, depending on how close the metal mine was to a coal mine.
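The arithmetic of duty is simple enough to sketch. A minimal Python illustration follows; the water weight, lift, and coal consumption are hypothetical figures chosen only to show the calculation, not historical measurements.

```python
# Illustrative "duty" calculation: foot-pounds of water raised per
# bushel of coal burned.  All numbers below are invented for the
# example, not taken from any historical engine.

def duty(water_lb, lift_ft, coal_bushels):
    """Work done (foot-pounds) per bushel of coal consumed."""
    return water_lb * lift_ft / coal_bushels

# Say an engine raises 2,000,000 lb of water through a 30-ft lift
# while burning 12 bushels of coal:
d = duty(2_000_000, 30, 12)
print(d)  # 5000000.0 ft-lb per bushel
```

A mineowner comparing two engines, or an engine against horses, needed only this one figure per candidate.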

Improving the duty meant using fuel more efficiently, but to rationalize that easy truth was the work of more than a century. Theoretically, it depended, among other things, on the recognition in 1758 by Joseph Black—Watt's mentor at Glasgow—of a distinction between heat and temperature, and the recognition by nineteenth-century chemists of an absolute zero of temperature. On the practical side lay a series of brilliant inventions, beginning in 1710 with Thomas Newcomen's first "atmospheric" engine, preceded (and impeded) by a patent of 1698 by Thomas Savery. Eventually two kinds of efficiency were distinguished, thermal and thermodynamic. Thermal efficiency means avoiding unnecessary loss, heat going up the chimney, or heat dissipated in the walls of the cylinder. Thermodynamic efficiency, more subtly, engages a strange fundamental limit. Even in an ideal engine with no losses, not all the heat can be converted into work.

Both Savery's and Newcomen's engines worked by repeatedly filling a space with steam and condensing the steam to create a vacuum. Savery's sucked water straight up into the evacuated vessel; Newcomen's was a beam engine with a pump on one side and a driving cylinder on the other, running typically at ten to fifteen strokes a minute. The horsepower of the earliest of which anything is known was about 5.5. Comprehensively ingenious as this engine was, the vital difference between it and Savery's lay in one point: how the vacuum was created. Savery poured cold water over the outside, which meant that next time hot steam entered, much of it was spent reheating the cylinder. Newcomen sprayed water directly into the steam. The higher thermal efficiency and increased speed of operation made the difference between a useless and a profoundly useful machine.

Newcomen engines continued to operate, especially in collieries, almost to the end of the eighteenth century. Yet they, too, were wasteful. The way was open for a great advance, James Watt's invention in 1768 of the separate "condenser." Next to the driving cylinder stood another cylinder, exhausted and permanently cold. When the moment for the down-stroke came, a valve opened, the steam rushed over and condensed; the valve then closed and the cycle was repeated. The driving cylinder stayed permanently hot.

Watt and Matthew Boulton, his business partner, were obsessed with numbers. Testing Clydesdale dray horses, the strongest in Britain, Watt fixed their lifting power (times 1.33) at 33,000 ft-lbs per minute. Thus was horsepower defined. Comparing his first engines with the best Newcomen ones, he found his to have four times higher duty. His next leap was to make the engine double acting and subject to "expansive working." Double action meant admitting steam alternately above and below the piston in a cylinder closed at both ends; for an engine of given size it more than doubled the power. Expansive working was one of those casual-seeming ideas that totally reorder a problem. Steam expands. This being so, it dawned on Watt that the duty might be raised by cutting off the flow into the cylinder early—say, a third of the way through the stroke. Like something for nothing, the steam continued to push without using fuel. Where was the effort coming from? The answer, not obvious at the time, is that as steam expands, it cools. Not pressure alone, but pressure and temperature together enter the lore of engines.
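Watt's figure can be checked against modern units. A short sketch converting 33,000 foot-pounds per minute into watts, using the standard conversion factor 1 ft·lbf = 1.3558179 J:

```python
# Watt's definition: 1 horsepower = 33,000 foot-pounds per minute.
# A check that this equals the familiar ~745.7 watts.

FT_LBF_TO_JOULES = 1.3558179   # standard conversion factor

ft_lb_per_min = 33_000
ft_lb_per_sec = ft_lb_per_min / 60          # = 550 ft·lbf/s
watts = ft_lb_per_sec * FT_LBF_TO_JOULES    # joules per second
print(round(watts, 1))  # 745.7
```

Fittingly, the SI unit that replaced Watt's own is named after him: one horsepower is about 746 watts.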

Behind the final clarifying insight provided in 1824 by Sadi Carnot, son of the French engineer-mathematician and revolutionary politician Lazare Carnot, lay three decades of invention by working engineers. One, too little remembered, was a young employee of Boulton and Watt, John Southern. Sometime in the 1790s he devised the indicator diagram to optimize engine performance under expansive working. In this wonderfully clever instrument a pressure gauge (the "indicator") was mounted on the cylinder, and a pen attached by one arm to it and another to the piston rod drew a continuous closed curve of pressure versus volume for engines in actual operation. The area enclosed by the curve gave the work (force times distance) done by the engine over each cycle; optimization lay in adjusting the steam cutoff and other factors to maximize this against fuel consumption. Known to most purchasers only by the presence of a mysterious sealed-off port on top of the cylinder—and exorbitant patent royalties for improved engine performance—the indicator was a jealously guarded industrial secret. The engineer John Farey, historian of the steam engine, had heard whispers of it for years before first seeing one in operation in Russia in 1819. To it as background to Carnot must be added three further points: (1) the invention in 1781 by one of Watt's Cornish rivals, J. C. Hornblower, of compounding (two stages of expansion successively in small and large cylinders); (2) the use of high-pressure steam; and (3) the recognition, especially by Robert Stirling, inventor in 1816 of the "Stirling cycle" hot air engine, that the working substance in engines need not be steam. Of these, high pressure had the greatest immediate impact, especially in steam traction.
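The indicator's principle, work as the area of the closed pressure-volume loop, can be sketched numerically. A minimal Python illustration using the shoelace formula on sampled (volume, pressure) points; the rectangular "cycle" below is invented for the arithmetic, not real engine data.

```python
# The indicator diagram traces a closed curve of pressure versus
# volume; the enclosed area is the work done per cycle.  Here the
# shoelace formula computes that area from sampled points.

def cycle_work(points):
    """Unsigned area of a closed (V, p) polygon = work per cycle."""
    n = len(points)
    area = 0.0
    for i in range(n):
        v1, p1 = points[i]
        v2, p2 = points[(i + 1) % n]
        area += v1 * p2 - v2 * p1
    return abs(area) / 2.0

# A crude rectangular cycle: expand at high pressure, contract at low.
loop = [(1.0, 5.0), (3.0, 5.0), (3.0, 1.0), (1.0, 1.0)]
print(cycle_work(loop))  # 8.0  (ΔV = 2 times Δp = 4)
```

Southern's pen did this integration mechanically and continuously, on a running engine, a century before planimeters and numerical methods made it routine.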

Because Watt, who had an innate distaste for explosions, insisted on low pressures, his engines had huge cylinders and were quite unsuitable to vehicles. No such craven fears afflicted Richard Trevithick in Cornwall or Oliver Evans in Pennsylvania, each of whom in around 1800 began building high-pressure engines, locomotive and stationary, as a little later did another Cornish engineer, Arthur Woolf. Soon a curious fact emerged. Not only were high-pressure engines more compact, they were often more efficient. Woolf's stationary engines in particular, combining high pressure with compounding, performed outstandingly and were widely exported to France. Their success set Carnot thinking. The higher duty comes not from pressure itself but because high-pressure steam is hotter. There is an analogy with hydraulics. The output of a waterwheel depends on how far the water falls in distance; the output of a heat engine depends on how far the working substance falls in temperature. The ideal would be to use the total drop from the temperature of burning coal—about 1,500°C (2,700°F)—to room temperature. Judged by this standard, even high-pressure steam is woefully short; water at 6 atmospheres (a high pressure then) boils at 160°C (320°F).

Carnot had in effect distinguished thermal and thermodynamic inefficiencies, the latter being a failure to make use of the full available drop in temperature. He clarified this by a thought experiment. In real engines, worked expansively, the entering steam was hotter than the cylinder, and the expanded steam cooler, with a constant wasteful shuttling of heat between the two. Ideally, transfer should be isothermal—that is, with only infinitesimal temperature differences. The thought experiment—put ten years later (1834) into the language of indicator diagrams by the French mining engineer Émile Clapeyron—comprised four successive operations on a fixed substance. In two it received or gave up heat isothermally; in the other two it did or received work with no heat transfer. Neither Carnot nor Clapeyron could quantify thermodynamic efficiency; Thomson in 1848 could. He saw that the Carnot cycle would define a temperature scale (now known as the Kelvin scale) identical with the one chemists had deduced from gas laws. For an ideal engine working between temperatures T1 (cold) and T2 (hot) measured from absolute zero, the thermodynamic efficiency is (T2 − T1)/T2. No engine can surpass that.
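Thomson's formula makes the text's earlier point about high-pressure steam quantitative. A short sketch evaluating the ideal efficiency for the two temperature drops mentioned above (steam at 160°C versus the full drop from burning coal at about 1,500°C, against a room at roughly 20°C):

```python
# Ideal (Carnot) efficiency between absolute temperatures
# T1 (cold) and T2 (hot): (T2 - T1) / T2.

def carnot_efficiency(t_cold_c, t_hot_c):
    """Ideal efficiency between two Celsius temperatures."""
    t1 = t_cold_c + 273.15   # convert to kelvin
    t2 = t_hot_c + 273.15
    return (t2 - t1) / t2

print(round(carnot_efficiency(20, 160), 3))   # 0.323  (steam at 6 atm)
print(round(carnot_efficiency(20, 1500), 3))  # 0.835  (full drop from burning coal)
```

Even an ideal engine fed with 160°C steam can convert less than a third of its heat into work; the rest is not lost through sloppiness but forbidden by the temperature drop available.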

Nothing in nature is lost; that was the message of the 1840s. Chemical, mechanical, and electrical processes are quantifiable, exact, obey a strict law of energy. Equally exact are the amounts of heat these processes generate. But the reverse does not hold. The history of steam was one long struggle against loss; not even Carnot's engine can extract all the energy the fuel has to give. This was the conceptual paradox three men, Thomson and Rankine in Scotland, and Rudolf Clausius in Germany, faced in 1849–1850 in creating the new science of thermodynamics. Out of it has arisen a verbal paradox. For physicists conservation of energy is a fact, the first law of thermodynamics. For ordinary people (including off-duty physicists), to conserve energy is a moral imperative.

Heat as energy is somehow an energy unlike all the rest. Defining how takes a second law, a law so special that Clausius and Thomson found two opposite ways of framing it. To Clausius the issue was the age-old one of perpetual motion. With two Carnot engines coupled in opposition, the rule forbidding perpetual motion translated into a rule about heat. His second law of thermodynamics was that heat cannot, without work, flow uphill from a cold body to a hot body. Refrigerators need power to stay cold. Thomson's guide was Joseph Fourier's great treatise on heat, Théorie analytique de la chaleur (1822), and the fact that unequally heated bodies, left to themselves, come always to equilibrium. His version was that work has to be expended to maintain bodies above the temperature of their surroundings. Animals need food to stay warm.

In a paper with the doom-laden title "On a Universal Tendency in Nature to a Dissipation of Mechanical Energy," Thomson in 1852 laid out the new law's dire consequences. How oddly did the 1850s bring two clashing scientific faiths: Darwin's, that evolution makes everything better and better; and Thomson's, that thermodynamics makes everything worse and worse. Thirteen years later Clausius rephrased it in the aphorism "The energy of the world is constant, the entropy of the world strives toward a maximum." After Maxwell in 1870 had invented his delightful (but imaginary) demon to defeat entropy, and Ludwig Boltzmann in 1877 had related it formally to molecular disorder through the equation which we now, following Max Planck, write S = k log W, the gloom only increased. With it came a deep physics problem, first studied by Maxwell in his earliest account of the demon but first published by Thomson in 1874, the "reversibility paradox." The second law is a consequence of molecular collisions, which obey the laws of ordinary dynamics—laws reversible in time. Why then is entropy not also reversible? How can reversible laws have irreversible effects? The argument is sometimes inverted in hope of explaining another, even deeper mystery, the arrow of time. Why can one move forward and backward in space but only forward in time? The case is not proven. Most of the supposed explanations conceal within their assumptions the very arrow they seek to explain.
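Boltzmann's relation can be made concrete with a toy calculation. The microstate counts below are arbitrary illustrative numbers; the point is only that S = k log W turns multiplication of microstate counts into addition of entropies, so doubling W adds exactly k·ln 2.

```python
# S = k log W: entropy from the number of microstates W.
import math

K_BOLTZMANN = 1.380649e-23  # J/K (exact SI value since 2019)

def entropy(w):
    """Boltzmann entropy for w microstates."""
    return K_BOLTZMANN * math.log(w)

s1 = entropy(1e20)   # arbitrary microstate count
s2 = entropy(2e20)   # twice as many microstates
print(s2 - s1)       # = k * ln 2, about 9.57e-24 J/K
```

The logarithm is what lets entropy, like energy, be additive over independent subsystems even though microstate counts multiply.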

In the history of heat engines from Newcomen's on, engineering intuition ran ahead of scientific knowledge. The first engines strictly designed from thermodynamic principles were Rudolf Diesel's, patented in 1892. With a compression ratio of fifty to one, and a temperature drop on reexpansion of 1,200°C (∼2,160°F), his were among the most efficient ever built, converting 34 percent of the chemical energy of the fuel into mechanical work.

SOLIDIFICATION OF THE VOCABULARY OF ENERGY

Two concepts entangling mass and velocity made chaos in eighteenth-century dynamics, Newton's vis motrix mv and Leibniz's vis viva mv2 (now written ½mv2, as first proposed by Gaspard Coriolis in 1829). It was a battle of concepts but also a battle of terms. Then casually in 1807 Thomas Young, in his Course of Lectures on Natural Philosophy and the Mechanical Arts, dropped this remark: "The term energy may be applied with great propriety to the product of the mass or weight of a body times the number expressing the square of its velocity." Proper or not, energy might have fallen stillborn in science had not John Farey also, with due praise of Young, twice used it in 1827 in his History of the Steam Engine for vis viva. There, dormant, the word lay, four uses in two books, until 1849, when Thomson in last-minute doubt added this footnote to his second paper on Carnot's theorem, "An Account of Carnot's Theory on the Motive Power of Heat": "When thermal agency is spent in conducting heat through a solid what becomes of the mechanical effect it might produce? Nothing can be lost in the operation of nature—no energy can be destroyed. What then is produced in place of the mechanical effect that is lost?"

In that superb instance of Thomson's genius for producing "science-forming" ideas (the phrase is Maxwell's), thermodynamics is crystallized and energy is given almost its modern meaning. His next paper, "On the Quantities of Mechanical Energy Contained in a Fluid in Different States as to Temperature and Density" (1851), elevated energy to the title and contained the first attempt to define it within the new framework of thought. The next after was the sad one on dissipation. The torch then passed to Rankine, who in two exceptionally able papers, "On the General Law of the Transformation of Energy" (1853) and "Outlines of the Science of Energetics" (1855), completed the main work. The first introduced the term (not the concept) "conservation of energy," together with "actual or sensible energy" for vis viva, and "potential or latent energy" for energy of configuration. "Potential energy" has stuck; "actual energy" was renamed "kinetic energy" by Thomson and P. G. Tait in their Treatise on Natural Philosophy (1867); their definition (vol. 1, sec. 213) nicely explains the Coriolis factor: "The Vis Viva or Kinetic Energy of a moving body is proportional to the mass and the square of the velocity, conjointly. If we adopt the same units for mass and velocity as before there is particular advantage in defining kinetic energy as half the product of the mass and the square of the velocity."
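The "particular advantage" of Coriolis's one-half can be shown in a few lines. With the ½ factor, kinetic energy meshes exactly with work (force times distance): a body accelerated from rest by a constant force F over a distance d acquires kinetic energy equal to F·d. The numbers below are arbitrary illustration.

```python
# Why the factor of one-half: for constant force F acting over
# distance d on a body of mass m starting from rest,
#   a = F/m,  v^2 = 2*a*d,  so  (1/2)*m*v^2 = F*d  exactly.

def kinetic_energy(m, v):
    return 0.5 * m * v**2

m, F, d = 2.0, 10.0, 16.0        # kg, N, m (arbitrary values)
a = F / m                         # acceleration
v = (2 * a * d) ** 0.5            # speed after distance d, from rest
print(F * d, round(kinetic_energy(m, v), 6))  # 160.0 160.0
```

Without the ½, vis viva would differ from work done by a factor of two, and the bookkeeping of energy transformations would never balance.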

A paper in French by Thomson, "Mémoire sur l'énergie mécanique du système solaire" (1854), was the first use in a language other than English; but it was Clausius who by adapting the German Energie to the new scientific sense assured its future. He met much opposition from Mayer and some from Helmholtz, both of whom favored Kraft. Mayer indeed held fiercely that Kraft or force in the sense "forces of nature" had conceptual priority, so that Newton's use of "force" should be disowned. More was at stake here than words. The dispute goes back to the three meanings of "force." For Mayer, conservation and correlation were two sides of the same coin, and the single word "Kraft" preserved the connection. For advocates of "energy," "Kraft/force" in this general sense was a vague term with no quantitative significance. "Energy" was so precise, explicit, and quantitative a concept that it required a term to itself.

Beyond the expressions "conservation of energy" and "potential energy," and the notion and name of a science of energetics (revived in the 1890s by Wilhelm Ostwald), Rankine gave far more in concept and term than he receives credit for. He supplied the first theory of the steam engine, the important term "adiabatic" to denote processes in which work is done with no transfer of heat, and—by a curious accident—the name "thermodynamics" for the science. In 1849 Thomson, seeking a generic name for machines that convert heat into work, had used "thermo-dynamic engine." Rankine followed suit, introducing in 1853 the vital concept of a thermo-dynamic function. Then in 1854 he submitted to the Philosophical Transactions of the Royal Society a long paper, "On the Geometrical Representation of the Expansive Action of Heat and the Theory of Thermo-Dynamic Engines." Papers there carried a running head, author's name to the left, title to the right. Faced with Rankine's lengthy title, the compositor set "Mr. Macquorn Rankine" on the left and "On Thermo-Dynamics" on the right. So apt did this seem that "thermodynamics" (without the hyphen) quickly became the secure name of the science. Later critics, unaware of its roots, have accused the inventors of thermodynamics of muddled thinking about dynamics; in fact, this and the history of Rankine's function provide one more instance of the power and subtlety of words. The "thermo-dynamic function" is what we, after Clausius who rediscovered it and gave it in 1865 its modern name, call entropy. In its original setting as the mathematical function proper to the efficient functioning of engines, Rankine's name was a good one. In the long view, however, Clausius's name for it, formed by analogy with energy, was better, even though the etymological reading he based it on (energy = work content, entropy = transformation content) was false. 
It was shorter and catchier but, more important, stressed both the parallel and the difference between the two laws of thermo-dynamics.

In 1861, at the Manchester meeting of the British Association for the Advancement of Science, two engineers, Charles Bright and Latimer Clark, read a paper "On the Formation of Standards of Electrical Quantity and Resistance," which had an influence on science and everyday life out of all proportion to the thirteen lines it takes in the Association Report. Its aim was simple. As practical telegraphers, Bright and Clark wished to attain internationally agreed definitions, values, and names for working quantities in their field. Their proposal led to the setting up under Thomson of a famous British Association committee to determine—from energy principles—an absolute unit of electrical resistance. Another, and fascinating, topic was Clark's suggestion made verbally at Manchester that units should be named after eminent scientists. He proposed "volt," "ohm," and "ampere" for units of potential, resistance, and current, though on ampere there was dispute, for Thomson's committee preferred to name the unit of current for the German electrician Weber.

International blessing and the setting of many names came at the Paris Congress of Electricians in 1881, but with no choice yet for units of energy and power. A year later C. William Siemens, the Anglicized younger brother of Ernst Werner von Siemens, was president of the British Association. In his address reported in the British Association report in 1882 he proposed "watt" and "joule"—the one "in honour of that master mind in mechanical science, James Watt ... who first had a clear physical conception of power and gave a rational method of measuring it," the other "after the man who has done so much to develop the dynamical theory of heat." According to the Athenaeum in 1882, Siemens's suggestions were "unanimously approved" by the Physics Section of the British Association a few days later. Joule, who was sixty-four at the time and had another seven years to live, seems to have been the only man so commemorated during his lifetime.

QUANTIZATION, E = mc2, AND EINSTEIN

Accompanying the new nineteenth-century vista on heat were insights into its relation to matter and radiation that opened a fresh crisis. Gases, according to the "kinetic" theory, comprise large numbers of rapidly moving colliding molecules. As remade by Clausius and Maxwell in the 1850s it was an attractive—indeed, a correct—picture; then something went horribly wrong. From the Newtonian mechanics of colliding bodies, Maxwell deduced a statistical theorem—later called the equipartition theorem—flatly contrary to experiment. Two numbers, 1.408 for the measured value of a certain ratio, 1.333 for its calculated value, failed to agree. So upset was he that in a lecture at Oxford in 1860 he said that this one discrepancy "overturned the whole theory." That was too extreme, for the kinetic theory, and the wider science of statistical mechanics, continued to flourish and bring new discoveries, but there was truth in it. Equipartition of energy ran on, an ever-worsening riddle, until finally in 1900 Max Planck, by focusing not on molecules but on radiation, found the answer. Energy is discontinuous. It comes in quantized packets E = hν, where h is a constant ("Planck's constant") and ν the frequency of radiation. Expressed in joule-seconds, h has the exceedingly small value 6.63 × 10⁻³⁴. Quantum effects are chiefly manifested at or below the atomic level.
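The smallness of h explains why quanta escaped notice for so long. A quick sketch of the photon energy E = hν for visible light (the frequency chosen below, about 5.6 × 10¹⁴ Hz for green light, is an illustrative round number):

```python
# Planck's relation E = h * frequency.
H_PLANCK = 6.62607015e-34  # J·s (exact SI value since 2019)

def photon_energy(frequency_hz):
    return H_PLANCK * frequency_hz

e = photon_energy(5.6e14)   # roughly green light
print(e)  # about 3.7e-19 J
```

Some 10¹⁹ such packets arrive every second in an ordinary beam of light, which is why energy looks perfectly continuous on the human scale.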

The forty years from equipartition to Planck deserve more thought than they ordinarily get. The theorem itself is one hard to match for sheer mathematical surprise. It concerns statistical averages within dynamical systems. As first put by Maxwell, it had two parts: (1) if two or more sets of distinct molecules constantly collide, the mean translational energies of the individual ones in each set will be equal; (2) if the collisions produce rotation, the mean translational and rotational energies will also be equal. This is equipartition. Mathematically, the result seemed impregnable; the snag was this. Gases have two specific heats, one at constant pressure, the other at constant volume. Calculating their ratio Cp/Cv from the new formula was easy; by 1859 the value was already well known experimentally, having been central to both acoustics and Carnot's theory. Data and hypothesis were excellent; only the outcome was wrong.
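The calculation Maxwell found so easy, and so troubling, can be sketched in modern terms. Equipartition gives each mechanical degree of freedom the same mean energy, so for f degrees of freedom the ratio of specific heats is Cp/Cv = (f + 2)/f. The degree-of-freedom counts below are a modern gloss on the episode, not Maxwell's own presentation.

```python
# Ratio of specific heats from equipartition:
#   Cp/Cv = (f + 2) / f  for f degrees of freedom per molecule.

def gamma(f):
    """Cp/Cv for f mechanical degrees of freedom."""
    return (f + 2) / f

print(round(gamma(6), 3))  # 1.333  (translation + rotation: the calculated value)
print(round(gamma(5), 3))  # 1.4    (close to the measured 1.408)
```

The measured 1.408 sits near f = 5, as if one rotational mode simply refused its equipartition share; classically there was no way to silence a mode, and that is precisely the impasse quantization later resolved.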

Worse was to come. Boltzmann in 1872 made the same weird statistical equality hold for every mode in a dynamical system. It must, for example, apply to any internal motions that molecules might have. Assuming, as most physicists did by then, that the sharp lines seen in the spectra of chemical elements originate in just such internal motions, any calculation now of Cp/Cv would yield a figure even lower than 1.333. Worse yet, as Maxwell shatteringly remarked to one student, equipartition must apply to solids and liquids as well as gases: "Boltzmann has proved too much."

Why, given the scope of the calamity, was insight so long coming? The answer is that the evidence as it stood was all negative. Progress requires data sufficiently rich and cohesive to give shape to theoretical musings. Hints came in 1887 when the Russian physicist V. A. Michelson began applying equipartition to a new and different branch of physical inquiry, the radiation emitted from a heated "black body"; but even there, the data were too sparse. Only through an elaborate interplay of measurement and speculation could a crisis so profound be settled.

Gradually it was, as Wilhelm Wien and others determined the form of the radiation, and Planck guessed and calculated his way to the new law. Found, it was like a magic key. Einstein with dazzling originality applied it to the photoelectric effect. Bohr, after Rutherford's discovery of the nucleus, used it to unlock the secrets of the atom. The worry about gases evaporated, for quantization limits which modes of energy affect specific heats. As for that later worry of Maxwell's about solids, trouble became triumph, a meeting ground between theory and experiment. Among many advances in the art of experimentation after 1870, one in particular was cryogenics. One by one all the gases were liquefied, culminating with hydrogen in 1898 and helium in 1908, at 19.2 K and 4.2 K above absolute zero respectively. Hence came accurate measurements near zero of the specific heats of solids, started by James Dewar. They fell off rapidly with temperature. It was Einstein in 1907 who first saw why—because the modes of vibrational energy in solids are quantized. More exact calculations by Peter Debye, and by Max Born and Theodore von Kármán, both in 1912, showed that C at low temperatures varies as T³. It was a result at once beautifully confirmed by F. A. Lindemann (Lord Cherwell) in experiments with Walther Nernst in Berlin.
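The T³ behavior can be checked numerically from Debye's formula, in which the molar heat capacity is C = 9R(T/θ)³ ∫₀^(θ/T) x⁴eˣ/(eˣ − 1)² dx, with θ the Debye temperature of the solid. The sketch below (values and quadrature scheme are illustrative, not drawn from the article) evaluates the integral by a simple midpoint rule and confirms the two limits: halving T deep in the low-temperature regime divides C by about 8, while at high T the capacity approaches the classical Dulong–Petit value 3R.

```python
import math

def debye_heat_capacity(T, theta, n=20000):
    """Molar heat capacity in units of R, from the Debye model:
    C/R = 9 (T/theta)^3 * integral_0^{theta/T} x^4 e^x / (e^x - 1)^2 dx,
    evaluated here by a midpoint rule with n panels."""
    xmax = theta / T
    h = xmax / n
    s = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        s += x**4 * math.exp(x) / (math.exp(x) - 1)**2
    return 9 * (T / theta)**3 * s * h

c1 = debye_heat_capacity(10, 400)   # theta = 400 K, an illustrative value
c2 = debye_heat_capacity(5, 400)
print(c1 / c2)                       # ~ 8: C scales as T^3 at low T
print(debye_heat_capacity(4000, 400))  # ~ 3: Dulong-Petit limit, C = 3R
```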

From Planck's h to Bohr's atom to the austere reorganization of thought required in the 1920s to create quantum mechanics is a drama in itself. Crucial to its denouement was Heisenberg's "uncertainty principle," a natural limit first on knowing simultaneously the position x and momentum p of a body (Δx Δp ~ h) and second, no less profound, on energy. The energy E of a dynamical system can never be known exactly: measured over a time interval Δt it has an uncertainty ΔE ~ h/Δt. But whether quantum uncertainty is an epistemological truth (a limit on knowledge) or an ontological one (a limit in nature) is an endless, uncertain debate.
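The energy–time relation has a directly observable consequence: an atomic excited state of finite lifetime cannot have a perfectly sharp energy, so its spectral line has a "natural linewidth." A back-of-envelope estimate (the 10 ns lifetime is an illustrative figure, not from the article):

```python
h = 6.626e-34    # Planck's constant, J*s
eV = 1.602e-19   # joules per electronvolt

# An excited state living ~10 ns has its energy defined only to within
# dE ~ h / dt, the natural width of the spectral line it emits.
dt = 10e-9           # lifetime, s (illustrative)
dE = h / dt          # energy uncertainty, J
print(dE / eV)       # ~ 4e-7 eV
```

Against typical optical transition energies of a few electronvolts, this is a fractional smearing of order one part in ten million, which is why the "sharp lines" of the spectroscopists are sharp but not infinitely so.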

If the E = hν of quantum theory is one beguilingly simple later advance in the concept of energy, the supremely famous mass-energy law E = mc² is another. This and its import for nuclear energy are usually credited solely to Einstein. The truth is more interesting.

It begins with J. J. Thomson in 1881 calculating the motion of an electric charge on Maxwell's electromagnetic theory, a theme Maxwell had barely touched. To the charge's ordinary mass Thomson found it necessary to add an "electromagnetic mass" connected with electrical energy stored in the surrounding field. It was like a ship moving through water—some of the water is dragged along, adding to the ship's apparent mass. Had the calculation been perfect, Thomson would have found E = mc², as Einstein and others did years later. As it was, Thomson's formula yields after a trivial conversion the near-miss E = (3/4)mc².

After Thomson nothing particular transpired, until a number of people, of whom Joseph Larmor in 1895 was perhaps first, entertained what historians now call the "electromagnetic worldview." Instead of basing electromagnetism on dynamics, as many had tried, could one base dynamics on electromagnetism? How far this thought would have gone is unclear had it not been for another even greater leap of Thomson's, this time experimental: his discovery of the electron. In 1897, by combining measurements of two kinds, he proved that cathode rays must consist of rapidly moving charged objects with a mass roughly one one-thousandth of the hydrogen atom's. Here was revolution—the first subatomic particle. Hardly less revolutionary was his conjecture that its mass was all electromagnetic. A shadow ship plowing through an electromagnetic sea: That was Thomson's vision of the electron.

The argument then took two forms: one electromagnetic, the other based on radioactivity. The first was tied to a prediction of Maxwell's in 1865, proved experimentally in 1900 by Petr Lebedev, that light and other electromagnetic radiations exert pressure. This was the line Einstein would follow, but it was Henri Poincaré who, in 1900 at a widely publicized meeting in honor of H. A. Lorentz, first got the point. The Maxwell pressure corresponds to a momentum, given by the density of electromagnetic energy in space divided by the velocity c of light. Momentum is mass times velocity. Unite the two and it is natural to think that a volume of space containing radiant energy E has associated with it a mass m = E/c². Meanwhile, as if from nowhere, had come radioactivity—in Rutherford's phrase, "a Tom Tiddler's ground" where anything might turn up. In 1903 Pierre Curie and his student Albert Laborde discovered that a speck of radium placed in a calorimeter emitted a continuous flux of heat sufficient to maintain it at 1.5°C above the surroundings. Their data translated into a power output of 100 W/kg. Where was the power coming from? Was this finally an illimitable source of energy, nature's own perpetual-motion machine? It was here that Einstein, in a concise phrase, carried the argument to its limit. In his own derivation of E = mc² in 1905, five years after Poincaré's observation, he remarked that any body emitting radiation should lose mass.

The events of World War II, and that side of Einstein that made credit for scientific ideas in his universe a thing more blessed to receive than to give, have created a belief that atomic energy somehow began with him. In truth the student of early twentieth-century literature notices how soon after the discovery of radioactivity there arose, among people who knew nothing of E = mc² or Einstein, a conviction that here was a mighty new source of energy. Geologists hailed it with relief, for as the Irish physicist John Joly demonstrated in 1903, it provided an escape from Kelvin's devastating thermodynamic limit on the age of Earth. But not only they, and not always with relief. In Jack London's nightmare futurist vision The Iron Heel, published in 1908, atomic gloom permeates the plot. No less interesting is this passage from a 1907 work London may have read, R. K. Duncan's The New Knowledge, which remains among the most illuminating popularizations of the era: "March, 1903, ... was a date to which, in all probability, the men of the future will after refer as the veritable beginning of the larger powers and energies that they will control. It was in March, 1903, that Curie and Laborde announced the heat-emitting power of radium. The fact was simple of demonstration and unquestionable. They discovered that a radium compound continuously emits heat without combustion, or change in its molecular structure.... It is all just as surprising as though Curie had discovered a red-hot stove which required no fuel to maintain it on heat." But there was fuel: the fuel was mass.

The highest (and most puzzling) of Einstein's offerings is not E = mc², where his claim is only partial, but the spectacular reinterpretation of gravitational energy in the new theory he advanced in 1915 under the name general relativity. Owing much to a brilliant paper of 1908 by his former teacher Hermann Minkowski, this, of course, is not the popular fable that everything is relative. It is a theory of relations in which two paired quantities, mass-energy and space-time, are each joined through the velocity of light: mass-energy via E = mc², space-time via a theorem akin to Pythagoras's in which time multiplied by c enters as a fourth dimension, not additively but by subtraction, as −(ct)². When in this context Einstein came to develop a theory of gravity to replace Newton's, he met two surprises, both due ultimately to the curious—indeed deeply mysterious—fact that in gravitation, unlike any other force of nature, mass enters in two ways. Newton's law of acceleration F = ma makes it the receptacle of inertia; his law of gravity F = GMm/r² makes it the origin of the force. If, therefore, as with electromagnetism, the energy is disseminated through space, it will have the extraordinary effect of making gravitation a source for itself. Consider Earth. Its mass is 6 × 10²¹ tons; the mass of the gravitational field around it, computed from E = mc², though tiny by comparison, has the by-human-standards enormous value of 4 × 10¹⁰ tons. It, too, exerts an attraction. For denser bodies this "mass of the field" is far greater. Because of it any relativistic theory of gravitation has to be nonlinear.

Even deeper, drawn from his much-discussed but too little understood "falling elevator" argument, was Einstein's substituting for Newton's equivalence between two kinds of mass an equivalence between two kinds of acceleration. How innocent it seems and how stunning was the transformation it led to. The theory could be rewritten in terms not of mass-energy variables but of space-time variables. Gravitation could be represented as warping space-time—in just the right nonlinear way. For this one form, energy had been transmuted into geometry. But was the geometrization limited to gravity, or might it extend to energy of every kind, beginning with electromagnetic? That great failed vision of the geometrization of physics would charm and frustrate Einstein and many others for the next forty years, and it retains its appeal.

Looking back over more than a century since energy began as concept and term, it is impossible not to be struck both by the magnitude of the advance and the strenuousness of the puzzles that remain. In the design of engines, the practice of quantum mechanics, the understanding of elementary particles, the application of relativity, detailed prescriptions and laws exist that work perfectly over the range of need. Beyond lies doubt. To physicists with their conviction that beauty and simplicity are the keys to theory, it is distressing that two of the most beautifully simple conceptions proposed in physics, J. J. Thomson's that all mass is electromagnetic and Einstein's that all energy is geometric, both manifestly verging on truth, have failed, with nothing either as simple or as beautiful arising since to take their place. Then there are issues such as quantum uncertainty and the problems of dissipation, reversibility, and the arrow of time, where to a certain depth everything is clear, but deeper (and the depth seems inescapable), everything is murky and obscure—questions upon which physicists and philosophers, like Milton's theologians, sit apart in high debate and find no end in wandering mazes lost.

C. W. F. Everitt

BIBLIOGRAPHY

Born, M. (1951). Atomic Physics, 5th ed. New York: Hafner.

Brush, S. G. (1976). The Kind of Motion We Call Heat. Amsterdam: North Holland.

Brush, S. G. (1983). Statistical Physics and the Atomic Theory of Matter. Princeton, NJ: Princeton University Press.

Cantor, G. (1991). Michael Faraday: Sandemanian and Scientist: A Study of Science and Religion in the Nineteenth Century. London: Macmillan.

Cardwell, D. S. L. (1971). From Watt to Clausius. Ithaca, NY: Cornell University Press.

Cardwell, D. S. L. (1989). James P. Joule. Manchester, Eng.: Manchester University Press.

Carnot, S. (1960). Reflections on the Motive Power of Fire by Sadi Carnot; and Other Papers on the Second Law of Thermodynamics, by E. Clapeyron and R. Clausius, ed. E. Mendoza. New York: Dover.

Duncan, R. K. (1907). The New Knowledge: A Popular Account of the New Physics and the New Chemistry in Their Relation to the New Theory of Matter. London: Hodder and Stoughton.

Elkana, Y. (1974). The Discovery of the Conservation of Energy. London: Hutchinson Educational.

Everitt, C. W. F. (1975). James Clerk Maxwell: Physicist and Natural Philosopher. New York: Charles Scribner's Sons.

Ewing, J. A. (1894). The Steam-Engine and Other Heat-Engines. Cambridge, Eng.: Cambridge University Press.

Farey, J. (1827). A Treatise on the Steam Engine. London: Longman, Rees, Orme, Brown, and Green.

Harman, P. (1998). The Natural Philosophy of James Clerk Maxwell. New York: Cambridge University Press.

Jammer, M. (1957). Concepts of Force. Cambridge, MA: Harvard University Press.

Joly, J. (1909). Radioactivity and Geology: An Account of the Influence of Radioactive Energy on Terrestrial History. London: A. Constable.

Kuhn, T. S. (1977). "Energy Conservation as an Example of Simultaneous Discovery." In The Essential Tension. Chicago: University of Chicago Press.

Kuhn, T. S. (1978). Black-Body Theory and the Quantum Discontinuity, 1894–1912. New York: Oxford University Press.

Lorentz, H. A.; Einstein, A.; Minkowski, H.; Weyl, H. (1923). The Principle of Relativity: A Collection of Original Memoirs on the Special and General Theory of Relativity, with Notes by A. Sommerfeld. London: Dover.

Miller, A. I. (1987). "A précis of Edmund Whittaker's 'The Relativity Theory of Poincaré and Lorentz.'" Archives Internationales d'Histoire des Sciences 37:93–103.

Pais, A. (1982). Subtle Is the Lord: The Science and the Life of Albert Einstein. New York: Oxford University Press.

Rankine, W. J. M. (1881). "On the General Law of the Transformation of Energy." In Miscellaneous Scientific Papers: From the Transactions and Proceedings of the Royal and Other Scientific and Philosophical Societies, and the Scientific Journals; with a Memoir of the Author by P. G. Tait, ed. W. J. Millar. London: C. Griffin.

Rankine, W. J. M. (1881). "Outlines of the Science of Energetics." In Miscellaneous Scientific Papers: From the Transactions and Proceedings of the Royal and Other Scientific and Philosophical Societies, and the Scientific Journals; with a Memoir of the Author by P. G. Tait, ed. W. J. Millar. London: C. Griffin.

Smith, C., and Wise, N. (1989). Energy and Empire: A Biographical Study of Lord Kelvin. New York: Cambridge University Press.

Thomson, W. (1882–1911). "An Account of Carnot's Theory on the Motive Power of Heat." In Mathematical and Physical Papers: Collected from Different Scientific Periodicals from May 1841 to the Present Time, in 6 Vols., vol. 1, pp. 156–164. Cambridge, Eng.: Cambridge University Press.

Thomson, W. (1882–1911). "On a Universal Tendency in Nature to the Dissipation of Mechanical Energy." In Mathematical and Physical Papers: Collected from Different Scientific Periodicals from May 1841 to the Present Time, in 6 Vols., vol. 1, pp. 511–514. Cambridge, Eng.: Cambridge University Press.

Thomson, W. and Tait, P. G. (1888–1890). Treatise on Natural Philosophy. Cambridge, Eng.: Cambridge University Press.

Truesdell, C. (1980). The Tragicomical History of Thermodynamics, 1822–1854. New York: Springer-Verlag.

Whittaker, E. T. (1951–1953). A History of the Theories of Aether and Electricity, rev. ed. New York: T. Nelson.

Williams, L. P. (1965). Michael Faraday. New York: Basic Books.

"Scientific and Technical Understanding of Energy." Macmillan Encyclopedia of Energy. Encyclopedia.com.