ENERGY. Sufficient dietary energy is essential to the survival and health of all animals. For understanding the biology and health of humans, energy is particularly important for a number of reasons. First, food and energy represent critical points of interaction between humans and their environment. The environments in which humans live determine the range of food resources that are available and how much energy and effort are necessary to procure those resources. Indeed, the dynamic between energy intake and energy expenditure is quite different for a subsistence farmer of Latin America than it is for an urban executive living in the United States. Beyond differences in the physical environment, social, cultural, and economic variation also shape aspects of energy balance. Social and cultural norms are important for shaping food preferences, whereas differences in subsistence behavior and socioeconomic status strongly influence food availability and the effort required to obtain food.
Additionally, the balance between energy expenditure and energy acquired has important adaptive consequences for both survival and reproduction. Obtaining sufficient food energy has been an important stressor throughout human evolutionary history, and it continues to strongly shape the biology of traditional human populations today.
This article examines aspects of energy expenditure and energy intake in humans. How energy is measured is first considered, with a look at how both the energy content of foods and the energy requirements for humans are determined. Next, aspects of energy consumption and the chemical sources of energy in different food items are examined. Third, the physiological basis of variation in human energy requirements is explored, specifically a consideration of the different factors that determine how much energy a person must consume to sustain him- or herself. Finally, patterns of variation in energy intake and expenditure among modern human populations are examined, with the different strategies that humans use to fulfill their dietary energy needs highlighted.
Calorimetry: Measuring Energy
The study of energy relies on the principle of calorimetry, the measurement of heat transfer. In food and nutrition, energy is most often measured in kilocalories (kcal). One kilocalorie is the amount of heat required to raise the temperature of 1 kilogram (or 1 liter) of water 1°C. Thus, a food item containing 150 kilocalories (two pieces of bread, for example) contains enough stored chemical energy to increase the temperature of 150 liters of water by 1°C. Another common unit for measuring energy is the joule or the kilojoule (1 kilojoule [kJ] = 1,000 joules). The conversion between calories and joules is as follows: 1 kilocalorie equals 4.184 kilojoules.
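The kilocalorie–kilojoule conversion above is simple enough to express in a few lines of code. The following Python sketch applies the 4.184 factor from the text; the function names are ours, for illustration only.

```python
# Conversion factor from the text: 1 kilocalorie = 4.184 kilojoules.
KJ_PER_KCAL = 4.184

def kcal_to_kj(kcal):
    """Convert kilocalories to kilojoules."""
    return kcal * KJ_PER_KCAL

def kj_to_kcal(kj):
    """Convert kilojoules to kilocalories."""
    return kj / KJ_PER_KCAL

# The 150-kcal example above (two pieces of bread):
print(kcal_to_kj(150))  # ≈ 627.6 kJ
```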
To directly measure the energy content of foods, scientists use an instrument known as a bomb calorimeter. This instrument burns a sample of food in the presence of oxygen and measures the amount of heat released (that is, kilocalories or kilojoules). This heat of combustion represents the total energetic value of the food.
Basic principles of calorimetry are also used to measure energy expenditure (or requirements) in humans and other animals. Techniques for measuring energy expenditure involve either measuring heat loss directly (direct calorimetry) or measuring a proxy of heat loss such as oxygen consumption (O2) or carbon dioxide (CO2) production (indirect calorimetry). Direct calorimetry is done under controlled laboratory conditions in insulated chambers that measure changes in air temperature associated with the heat being released by a subject. Although quite accurate, direct calorimetry is not widely used because of its expense and technical difficulty.
Thus, methods of indirect calorimetry are more commonly used to quantify human energy expenditure. The most widely used of these techniques involve measuring oxygen consumption. Because the body's energy production is dependent on oxygen (aerobic respiration), O2 consumption provides a very accurate indirect way of measuring a person's energy expenditure. Every liter of O2 consumed by the body is equivalent to an energy cost of approximately 5 kilocalories. Consequently, by measuring O2 use while a person is performing a particular task (for example, standing, walking, or running on a treadmill), the energy cost of the task can be determined.
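The arithmetic behind indirect calorimetry can be sketched as follows. The 5 kcal per liter of O2 is the approximate equivalence given above; the task figures are hypothetical.

```python
# Indirect calorimetry: each liter of O2 consumed is equivalent to
# approximately 5 kcal of energy expended (value from the text).
KCAL_PER_LITER_O2 = 5.0

def energy_cost_kcal(o2_liters_per_minute, minutes):
    """Estimate the energy cost of a task from measured O2 uptake."""
    return o2_liters_per_minute * minutes * KCAL_PER_LITER_O2

# e.g., a treadmill task consuming 1.5 L O2/min, sustained for 30 minutes:
print(energy_cost_kcal(1.5, 30))  # 225.0 kcal
```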
With the Douglas bag method for measuring O2 uptake, subjects breathe through a valve that allows them to inhale room air and exhale into a large collection bag. The volume and the O2 and CO2 contents of the collected air sample are then measured to determine the total amount of oxygen consumed by the subject. Recent advances in computer technology allow for the determination of O2 consumption more quickly without having to collect expired air samples. One computerized system for measuring oxygen consumption, like the Douglas bag method, determines energy costs by measuring the volume and the O2 and CO2 concentrations of expired air samples.
Sources of Food Energy
The main chemical sources of energy in our foods are carbohydrates, protein, and fats. Collectively, these three energy sources are known as macronutrients. Vitamins and minerals (micronutrients) are required in much smaller amounts and are important for regulating many aspects of biological function.
Carbohydrates and proteins have similar energy contents; each provides 4 kilocalories of metabolic energy per gram. In contrast, fat is more calorically dense; each gram provides about 9 to 10 kilocalories. Alcohol, although not a required nutrient, also can be used as an energy source, contributing 7 kcal/g. Regardless of the source, excess dietary energy can be stored by the body as glycogen (a carbohydrate) or as fat. Humans have relatively limited glycogen stores (about 375–475 grams) in the liver and muscles. Fat, however, represents a much larger source of stored energy, accounting for approximately 13 to 20 percent of body weight in men and 25 to 28 percent in women.
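The per-gram energy values above lend themselves to simple bookkeeping. This sketch uses 9 kcal/g for fat (the lower end of the 9–10 range given in the text); the meal composition is hypothetical.

```python
# Metabolic energy per gram of each macronutrient, as given in the text.
KCAL_PER_GRAM = {"carbohydrate": 4, "protein": 4, "fat": 9, "alcohol": 7}

def meal_energy_kcal(grams):
    """Total energy (kcal) of a meal from grams of each macronutrient."""
    return sum(KCAL_PER_GRAM[nutrient] * g for nutrient, g in grams.items())

# A hypothetical meal: 60 g carbohydrate, 25 g protein, 20 g fat.
print(meal_energy_kcal({"carbohydrate": 60, "protein": 25, "fat": 20}))  # 520
```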
The largest source of dietary energy for most humans is carbohydrates (45–50 percent of calories in the typical American diet). The three types of carbohydrates are monosaccharides, disaccharides, and polysaccharides. Monosaccharides, or simple sugars, include glucose, the body's primary metabolic fuel; fructose (fruit sugar); and galactose. Disaccharides, as the name implies, are sugars formed by a combination of two monosaccharides. Sucrose (glucose and fructose), the most common disaccharide, is found in sugar, honey, and maple syrup. Lactose, the sugar found in milk, is composed of glucose and galactose. Maltose (glucose and glucose), the least common of the disaccharides, is found in malt products and germinating cereals. Polysaccharides, or complex carbohydrates, are composed of three or more simple sugar molecules. Glycogen is the polysaccharide used for storing carbohydrates in animal tissues. In plants, the two most common polysaccharides are starch and cellulose. Starch is found in a wide variety of foods, such as grains, cereals, and breads, and provides an important source of dietary energy. In contrast, cellulose—the fibrous, structural parts of plant material—is not digestible by humans and passes through the gastrointestinal tract as fiber.
Fats provide the largest store of potential energy for biological work in the body. They are divided into three main groups: simple, compound, and derived. The simple or "neutral fats" consist primarily of triglycerides. A triglyceride consists of two component molecules: glycerol and fatty acid. Fatty acid molecules, in turn, are divided into two broad groups: saturated and unsaturated. These categories reflect the chemical bonding pattern between the carbon atoms of the fatty acid molecule. Saturated fatty acids have no double bonds between carbons, thus allowing for the maximum number of hydrogen atoms to be bound to the carbon (that is, the carbons are "saturated" with hydrogen atoms). In contrast, unsaturated fatty acids have one (monounsaturated) or more (polyunsaturated) double bonds. Saturated fats are abundant in animal products, whereas unsaturated fats predominate in vegetable oils.
Compound fats consist of a neutral fat in combination with some other chemical substance (for example, a sugar or a protein). Examples of compound fats include phospholipids and lipoproteins. Phospholipids are important in blood clotting and insulating nerve fibers, whereas lipoproteins are the main form of transport for fat in the bloodstream.
Derived fats are substances synthesized from simple and compound fats. The best known derived fat is cholesterol. Cholesterol is present in all human cells and may be derived from foods (exogenous) or synthesized by the body (endogenous). Cholesterol is necessary for normal development and function because it is critical for the synthesis of such hormones as estradiol, progesterone, and testosterone.
Proteins, in addition to providing an energy source, are also critical for the growth and replacement of living tissues. They are composed of nitrogen-containing compounds known as amino acids. Of the twenty different amino acids required by the body, nine (leucine, isoleucine, valine, lysine, threonine, methionine, phenylalanine, tryptophan, and histidine) are known as "essential" because they cannot be synthesized by the body and thus must be derived from food. Two others, cystine and tyrosine, are synthesized in the body from methionine and phenylalanine, respectively. The remaining amino acids are called "nonessential" because they can be produced by the body and need not be derived from the diet.
Determinants of Daily Energy Needs
A person's daily energy requirements are determined by several different factors. The major components of an individual's energy budget are associated with resting or basal metabolism, activity, growth, and reproduction. Basal metabolic rate (BMR) represents the minimum amount of energy necessary to keep a person alive. Basal metabolism is measured under controlled conditions while a subject is lying in a relaxed and fasted state.
In addition to basal requirements, energy is expended to perform various types of work, such as daily activities and exercise, digestion and transport of food, and regulating body temperature. The energy costs associated with food handling (i.e., the thermic effect of food) make up a relatively small proportion of daily energy expenditure and are influenced by amount consumed and the composition of the diet (e.g., high-protein meals elevate dietary thermogenesis). In addition, at extreme temperatures, energy must be spent to heat or cool the body. Humans (unclothed) have a thermoneutral range of 25 to 27°C (77–81°F). Within this temperature range, the minimum amount of metabolic energy is spent to maintain body temperature. Finally, during one's lifetime, additional energy is required for physical growth and for reproduction (e.g., pregnancy, lactation).
In 1985 the World Health Organization (WHO) presented its most recent recommendations for assessing human energy requirements. The procedure used for determining energy needs involves first estimating BMR from body weight on the basis of predictive equations developed by the WHO. These equations are presented in Table 1. After estimating BMR, the total daily energy expenditure (TDEE) for adults (18 years old and above) is determined as a multiple of BMR, based on the individual's activity level. This multiplier, known as the physical activity level (PAL) index, reflects the proportion of energy above basal requirements that an individual spends over the course of a normal day. The PALs associated with different occupational work levels for adult men and women are presented in Table 2. The WHO recommends that minimal daily activities such as dressing, washing, and eating are commensurate with a PAL of 1.4 for both men and women. Sedentary lifestyles (e.g., office work) require PALs of 1.55 for men and 1.56 for women. At higher work levels, however, the sex differences are greater. Moderate work is associated with a PAL of 1.78 in men and 1.64 in women, whereas heavy work levels (for example, manual labor, traditional agriculture) necessitate PALs of 2.10 and 1.82 for men and women, respectively.
|Equations for predicting basal metabolic rate (BMR, in kcal/day) from body weight (Wt in kilograms)|
|Age (years)||Males||Females|
|0–2.9||60.9 (Wt) – 54||61.0 (Wt) – 51|
|3.0–9.9||22.7 (Wt) + 495||22.5 (Wt) + 499|
|10.0–17.9||17.5 (Wt) + 651||12.2 (Wt) + 746|
|18.0–29.9||15.3 (Wt) + 679||14.7 (Wt) + 496|
|30.0–59.9||11.6 (Wt) + 879||8.7 (Wt) + 829|
|60+||13.5 (Wt) + 487||10.5 (Wt) + 596|
|source: FAO/WHO/UNU, 1985|
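The WHO procedure described above, estimating BMR from body weight and then multiplying by a PAL, can be sketched in code. Only the adult rows of Table 1 are reproduced here, and the function names are illustrative, not part of the WHO report.

```python
# Abbreviated Table 1 (adult rows only): (sex, min age, max age) maps to
# (slope, intercept), giving BMR in kcal/day from weight in kg.
BMR_EQUATIONS = {
    ("M", 18.0, 29.9): (15.3, 679),
    ("F", 18.0, 29.9): (14.7, 496),
    ("M", 30.0, 59.9): (11.6, 879),
    ("F", 30.0, 59.9): (8.7, 829),
    ("M", 60.0, 120.0): (13.5, 487),
    ("F", 60.0, 120.0): (10.5, 596),
}

def estimate_bmr(sex, age, weight_kg):
    """Predict BMR (kcal/day) from body weight using the WHO equations."""
    for (s, lo, hi), (slope, intercept) in BMR_EQUATIONS.items():
        if s == sex and lo <= age <= hi:
            return slope * weight_kg + intercept
    raise ValueError("no equation for this age/sex in the abbreviated table")

def estimate_tdee(sex, age, weight_kg, pal):
    """Total daily energy expenditure: BMR times physical activity level."""
    return estimate_bmr(sex, age, weight_kg) * pal

# A 25-year-old, 70-kg man doing moderate work (PAL 1.78):
print(round(estimate_tdee("M", 25, 70, 1.78)))  # ≈ 3115 kcal/day
```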
In addition to the costs of daily activity and work, energy costs for reproduction also must be considered. The WHO recommends an additional 285 kcal/day for women who are pregnant and an additional 500 kcal/day for those who are lactating.
Energy requirements for children and adolescents are estimated differently because extra energy is necessary for growth and because relatively less is known about variation in their activity patterns. For children and adolescents between 10 and 18 years old, the WHO recommends the use of age- and sex-specific PALs. In contrast, energy requirements for children under 10 years old are determined by multiplying the child's weight by an age- and sex-specific constant. The reference values for boys and girls under 18 years old are presented in Table 3.
Human Variation in Sources of Food Energy
Compared to most other mammals, humans are able to survive and flourish eating a remarkably wide range of foods. Human diets range from completely vegetarian (as observed in many populations of South Asia) to those based almost entirely on meat and animal foods (for example, traditional Eskimo/Inuit populations of the Arctic). Thus, over the course of evolutionary history, humans have developed a high degree of dietary plasticity.
|Physical activity levels (PALs) associated with different types of occupational work among adults (18 years and older)|
|Work level||Men||Women|
|Minimal||1.40||1.40|
|Light (sedentary)||1.55||1.56|
|Moderate||1.78||1.64|
|Heavy||2.10||1.82|
|source: FAO/WHO/UNU, 1985|
|Energy constants and PALs recommended for estimating daily energy requirements for individuals under the age of 18|
|Energy constant (kcal/kg body weight)|
|source: FAO/WHO/UNU, 1985; James and Schofield, 1990|
This ability to utilize a diverse array of plant and animal resources for food is one of the features that allowed humans to spread and colonize ecosystems all over the world.
Table 4 presents information on per capita energy intakes and the percentage of energy derived from plant and animal foods for subsistence-level (i.e., food-producing) and industrial human societies. The relative contribution of animal foods varies considerably, ranging from less than 10 percent of dietary energy in traditional farming communities of tropical South America, to more than 95 percent among traditionally living Inuit hunters of the Canadian Arctic.
Subsistence-level agricultural populations, as a group, have the lowest consumption of animal foods. Among hunting and gathering populations, the contribution of animal foods to the diet is variable, partly reflecting the environments in which these populations reside. For example, the !Kung San, who live in arid desert environments of southern Africa, have among the lowest levels of animal food consumption among hunter-gatherers. In contrast, hunters of the Arctic rely almost entirely on animal foods for their daily energy. Foragers living in forest and grassland regions of the tropics (for example, the Ache and the Hiwi) have intermediate levels of animal consumption.
Regardless of whether they come from plants or animals, the staple foods in most human societies are calorically dense. Indeed, one of the hallmarks of human
|Per capita energy intake (kcal/day) and percentage of dietary energy derived from animal and plant foods in selected human populations|
|Population||Energy intake (kcal/day)||Energy from animal foods||Energy from plant foods|
|!Kung San (Botswana)||2,100||33||67|
|Quechua (highland Peru)||2,002||5||95|
|Yapú (lowland Colombia)||1,968||11||89|
evolutionary history has been humankind's success at developing subsistence strategies that maximize the energy returns from available food resources. The initial evolution of human "hunting and gathering" economies some 2 million years ago is an example of this. By incorporating more meat, man's hominid ancestors were able to increase the energy content of their diets.
With the evolution of agriculture, human populations began to manipulate relatively marginal plant species so as to increase their productivity, digestibility, and energy content. Today, staple agricultural crops such as rice, wheat, and other cereal grains are calorically dense (more than 300 kilocalories per 100 grams), and are much richer sources of energy than the wild plants from which they evolved.
Novel methods of food processing also allow humans to increase the energy content and digestibility of their foods. The most fundamental of these techniques is the use of fire for cooking, a strategy adopted by man's hominid ancestors at least 400,000 years ago. Cooking makes plant foods more digestible by helping to break down complex carbohydrates. Recent work has shown that cooking can increase the energy content of starchy tubers (potatoes, cassava) by more than 70 percent.
Another interesting example of processing food to raise its energy content is seen among populations living in the high Andes of South America. Here, small potatoes are left outside for several days to be repeatedly frozen during the cold nights and then dried under the intense daytime sun. The resulting product, called chuño, can be stored for many months and has an energy content more than three times that of a fresh potato (330 kilocalories per 100 grams versus 90 kilocalories per 100 grams).
Human Variation in Energy Expenditure
Humans also show considerable variation in levels of energy expenditure. Recent work by Allison E. Black and colleagues indicates that daily energy expenditure in human groups typically ranges from 1.2 to 5.0 times BMR (i.e., PAL = 1.2–5.0). The lowest levels of physical activity, PALs of 1.20 to 1.25, are observed among hospitalized and nonambulatory populations. In contrast, the highest levels of physical activity (PALs of 2.5–5.0) have been observed among elite athletes and soldiers in combat training. Within this group, Tour de France cyclists have the highest recorded daily energy demands, at 8,050 kcal/day (a PAL of 4.68).
Table 5 presents data on body weight, total daily energy expenditure, and PALs of adult men and women from selected human groups. Men of the subsistence-level populations (that is, foragers, pastoralists, and agriculturalists) are, on average, 20 kilograms (45 pounds) lighter than their counterparts from the industrialized world, and yet have similar levels of daily energy expenditure (2,897 versus 2,859 kcal/day). The same pattern is true for women; those from subsistence-level populations are 12.5 kilograms (28 pounds) lighter than women of industrialized societies, but have higher levels of daily energy expenditure (2,227 versus 2,146 kcal/day).
Thus, when daily energy needs are expressed relative to BMR, adults living a "modern" lifestyle in the industrialized world are found to have significantly lower physical activity levels than those living more "traditional" lives. Among men, PALs in the industrialized societies average 1.67 (range = 1.53 to 1.84), as compared to 1.90 (range = 1.58 to 2.38) among the subsistence-level groups. Physical activity levels among women average 1.63 in the industrialized world (range = 1.48 to 1.69) and 1.78 (range = 1.56 to 2.03) among the subsistence-level societies.
The differences in daily energy demands between subsistence-level and industrialized populations are further highlighted in Figure 1, which shows daily energy expenditure (kilocalories/day) plotted relative to body weight (in kilograms). The two lines denote the best-fit regressions for both groups. These regressions show that at the same body weight, adults of the industrialized world have daily energy needs that are 600 to 1,000 kilocalories lower than those of people living in subsistence-level societies.
It is these declines in physical activity and daily energy expenditure associated with "modern" lifestyles that are largely responsible for the growing problem of obesity throughout the world. In the United States, rates of obesity have increased dramatically over the last twenty years, such that over half of the adult American population is now either overweight or obese. Equally disturbing has been the emergence of obesity as a problem in parts of the developing world where it was virtually unknown less than a generation ago. In some sense, obesity and other chronic diseases of the modern world (diabetes and
|Weight (kg), total daily energy expenditure (TDEE in kcal/day), basal metabolic rate (BMR in kcal/day), and physical activity level (PAL) of selected human groups|
|Group||Sex||Weight (kg)||TDEE (kcal/day)||BMR (kcal/day)||PAL (TDEE/BMR)|
|75 and older||M||72.6||2,199||1,434||1.53|
|!Kung San foragers||M||46.0||2,319||1,383||1.68|
|Highland Ecuador, agriculturalists||M||61.3||3,810||1,601||2.38|
|Coastal Ecuador, agriculturalists||M||55.6||2,416||1,529||1.58|
cardiovascular disease, for example) represent a continuation of trends that started early in man's evolutionary history. Humans have developed a diet that is extremely rich in calories while at the same time minimizing the amount of energy necessary for physical work and activity. Ongoing work in nutritional science is attempting to better understand the biological and environmental factors that influence patterns of energy consumption and expenditure to promote human health and well-being.
See also Assessment of Nutritional Status; Body Composition; Hunting and Gathering; Inuit; Nutrition Transition: Worldwide Diet Change; Physical Activity and Nutrition.
Black, Allison E., W. Andrew Coward, Tim J. Cole, and Andrew M. Prentice. "Human Energy Expenditure in Affluent Societies: An Analysis of 574 Double-Labelled Water Measurements." European Journal of Clinical Nutrition 50 (1996): 72–92.
Consolazio, C. Frank, Robert E. Johnson, and Louis J. Pecora. Physiological Measurements of Metabolic Functions in Man. New York: McGraw-Hill, 1963.
Durnin, John V. G. A., and Reginald Passmore. Energy, Work and Leisure. London: Heinemann, 1967.
Food and Agriculture Organization, World Health Organization, and United Nations University (FAO/WHO/UNU). Energy and Protein Requirements. Report of Joint FAO/ WHO/UNU Expert Consultation. WHO Technical Report Series No. 724. Geneva: World Health Organization, 1985.
Gibson, Rosalind S. Principles of Nutritional Assessment. Oxford: Oxford University Press, 1990.
James, William P. T., and E. Claire Schofield. Human Energy Requirements: A Manual for Planners and Nutritionists. Oxford: Oxford University Press, 1990.
Kleiber, Max. The Fire of Life: An Introduction to Animal Energetics, 2d ed. Huntington, N.Y.: Krieger, 1975.
Leonard, William R. "Human Nutritional Evolution." In Human Biology: A Biocultural and Evolutionary Approach, edited by Sara Stinson, Barry Bogin, Rebecca Huss-Ashmore, and Dennis O'Rourke, pp. 295–343. New York: Wiley-Liss, 2000.
Leonard, William R., and Marcia L. Robertson. "Comparative Primate Energetics and Hominid Evolution." American Journal of Physical Anthropology 102 (1997): 265–281.
McArdle, William D., Frank I. Katch, and Victor L. Katch. Exercise Physiology: Energy, Nutrition and Human Performance, 5th ed. Philadelphia: Lippincott Williams and Wilkins, 2001.
McLean, Jennifer A., and G. Tobin. Animal and Human Calorimetry. Cambridge: Cambridge University Press, 1987.
Ulijaszek, Stanley J. Human Energetics in Biological Anthropology. Cambridge: Cambridge University Press, 1995.
William R. Leonard
Leonard, William R.. "Energy." Encyclopedia of Food and Culture. 2003. Encyclopedia.com. (July 29, 2016). http://www.encyclopedia.com/doc/1G2-3403400196.html
As with many concepts in physics, energy—along with the related ideas of work and power—has a meaning much more specific, and in some ways quite different, from its everyday connotation. According to the language of physics, a person who strains without success to pull a rock out of the ground has done no work, whereas a child playing on a playground produces a great deal of work. Energy, which may be defined as the ability of an object to do work, is neither created nor destroyed; it simply changes form, a concept that can be illustrated by the behavior of a bouncing ball.
HOW IT WORKS
In fact, it might actually be more precise to say that energy is the ability of "a thing" or "something" to do work. Not only tangible objects (whether they be organic, mechanical, or electromagnetic) but also non-objects may possess energy. At the subatomic level, a particle with no mass may have energy. The same can be said of a magnetic force field.
One cannot touch a force field; hence, it is not an object—but obviously, it exists. All one has to do to prove its existence is to place an object containing iron, such as a nail, within the magnetic field. Assuming the force field is strong enough, the nail will move through space toward it—and thus the force field will have performed work on the nail.
Work: What It Is and Is Not
Work may be defined in general terms as the exertion of force over a given distance. In order for work to be accomplished, there must be a displacement in space—or, in colloquial terms, something has to be moved from point A to point B. As noted earlier, this definition creates results that go against the common-sense definition of "work."
A person straining, and failing, to pull a rock from the ground has performed no work (in terms of physics) because nothing has been moved. On the other hand, a child on a playground performs considerable work: as she runs from the slide to the swing, for instance, she has moved her own weight (a variety of force) across a distance. She is even working when her movement is back-and-forth, as on the swing. This type of movement results in no net displacement, but as long as displacement has occurred at all, work has occurred.
Similarly, when a man completes a full push-up, his body is in the same position—parallel to the floor, arms extended to support him—as it was before he began; yet he has accomplished work. If, on the other hand, he is at the end of his energy, his chest on the floor as he strains, but fails, to complete just one more push-up, then he is not working. The fact that he feels as though he has worked may matter in a personal sense, but it does not in terms of physics.
Work can be defined more specifically as the product of force and distance, where those two vectors are exerted in the same direction. Suppose one were to drag a block of a certain weight across a given distance of floor. The amount of force one exerts parallel to the floor itself, multiplied by the distance, is equal to the amount of work exerted. On the other hand, if one pulls up on the block in a position perpendicular to the floor, that force does not contribute toward the work of dragging the block across the floor, because it is not parallel to distance as defined in this particular situation.
Similarly, if one exerts force on the block at an angle to the floor, only a portion of that force counts toward the net product of work—a portion that must be quantified in terms of trigonometry. The line of force parallel to the floor may be thought of as the base of a triangle, with a line perpendicular to the floor as its second side. Hence there is a 90°-angle, making it a right triangle with a hypotenuse. The hypotenuse is the line of force, which again is at an angle to the floor.
The component of force that counts toward the total work on the block is equal to the total force multiplied by the cosine of the angle. A cosine is the ratio between the leg adjacent to an acute (less than 90°) angle and the hypotenuse. The leg adjacent to the acute angle is, of course, the base of the triangle, which is parallel to the floor itself. Sizes of triangles may vary, but the ratio expressed by a cosine (abbreviated cos) does not. Hence, if one is pulling on the block by a rope that makes a 30°-angle to the floor, then force must be multiplied by cos 30°, which is equal to 0.866.
Note that the cosine is less than 1; hence when multiplied by the total force exerted, it will yield a figure 13.4% smaller than the total force. In fact, the larger the angle, the smaller the cosine; thus for 90°, the value of cos = 0. On the other hand, for an angle of 0°, cos = 1. Thus, if total force is exerted parallel to the floor—that is, at a 0°-angle to it—then the component of force that counts toward total work is equal to the total force. From the standpoint of physics, this would be a highly work-intensive operation.
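The cosine relationship just described can be checked numerically. The following Python sketch is ours, for illustration; the 100 N force and 10 m distance are arbitrary example values.

```python
import math

def work_joules(force_n, distance_m, angle_deg=0.0):
    """Work done: W = F * d * cos(theta), theta between force and motion."""
    return force_n * distance_m * math.cos(math.radians(angle_deg))

print(work_joules(100, 10, 0))    # force parallel to motion: 1000.0 J
print(work_joules(100, 10, 30))   # ≈ 866 J, since cos 30° ≈ 0.866
print(work_joules(100, 10, 90))   # ≈ 0 J: perpendicular force does no work
```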
GRAVITY AND OTHER PECULIARITIES OF WORK.
The above discussion relates entirely to work along a horizontal plane. On the vertical plane, by contrast, work is much simpler to calculate due to the presence of a constant downward force, which is, of course, gravity. The force of gravity accelerates objects at a rate of 32 ft/sec² (9.8 m/sec²). The mass (m) of an object multiplied by the rate of gravitational acceleration (g) yields its weight, and the work done against gravity is equal to weight multiplied by height (h) above some lower reference point: W = mgh.
Distance and force are both vectors—that is, quantities possessing both magnitude and direction. Yet work, though it is the product of these two vectors, is a scalar, meaning that only the magnitude of work (and not the direction over which it is exerted) is important. Hence mgh can refer either to the upward work one exerts against gravity (that is, by lifting an object to a certain height), or to the downward work that gravity performs on the object when it is dropped. The direction of h does not matter, and its value is purely relative, referring to the vertical distance between one point and another.
The fact that gravity can "do work"—and the irrelevance of direction—further illustrates the truth that work, in the sense in which it is applied by physicists, is quite different from "work" as it understood in the day-to-day world. There is a highly personal quality to the everyday meaning of the term, which is completely lacking from its physics definition.
If someone carried a heavy box up five flights of stairs, that person would quite naturally feel justified in saying "I've worked." Certainly he or she would feel that the work expended was far greater than that of someone who had simply allowed the elevator to carry the box up those five floors. Yet in terms of work done against gravity, the work done on the box by the elevator is exactly the same as that performed by the person carrying it upstairs. The identity of the "worker"—not to mention the sweat expended or not expended—is irrelevant from the standpoint of physics.
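The stairway example can be worked through with the mgh formula above. The mass and height below are illustrative guesses of ours, not figures from the text.

```python
G = 9.8  # gravitational acceleration, m/s^2

def work_against_gravity(mass_kg, height_m):
    """W = mgh: work done lifting a mass through a vertical height."""
    return mass_kg * G * height_m

# A 20-kg box carried up five flights (roughly 15 m of vertical rise):
# about 2,940 J, whether a person or the elevator does the lifting.
print(work_against_gravity(20, 15))
```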
Measurement of Work and Power
In the metric system, a newton (N) is the amount of force required to accelerate 1 kg of mass by 1 meter per second squared (m/s²). Work is measured by the joule (J), equal to 1 newton-meter (N · m). The British unit of force is the pound, and work is measured in foot-pounds, or the work done by a force of 1 lb over a distance of 1 ft.
Power, the rate at which work is accomplished over time, is the same as work divided by time. It can also be calculated in terms of force multiplied by speed, much like the force-multiplied-by-distance formula for work. However, as with work, the force and speed must be in the same direction. Hence, the formula for power in these terms is F · cos θ · v, where F = force, v = speed, and cos θ is the cosine of the angle θ (the Greek letter theta) between F and the direction of v.
The metric-system measure of power is the watt, named after James Watt (1736-1819), the Scottish inventor who developed the first fully viable steam engine and thus helped inaugurate the Industrial Revolution. A watt is equal to 1 joule per second, but this is such a small unit that it is more typical to speak in terms of kilowatts, or units of 1,000 watts.
Ironically, Watt himself—like most people in the British Isles and America—lived in a world that used the British system, in which the unit of power is the foot-pound per second. The latter, too, is very small, so for measuring the power of his steam engine, Watt suggested a unit based on something quite familiar to the people of his time: the power of a horse. One horsepower (hp) is equal to 550 foot-pounds per second.
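The relationship between the two systems can be checked with a small conversion sketch. It relies on 1 hp = 550 ft·lb/s and the standard value of about 1.356 joules per foot-pound (the constant and function names here are illustrative):

```python
FT_LB_IN_JOULES = 1.3558179483314004  # one foot-pound expressed in joules

def horsepower_to_watts(hp):
    # 1 hp = 550 foot-pounds per second; a watt is one joule per second.
    return hp * 550 * FT_LB_IN_JOULES

# One horsepower works out to roughly 745.7 watts.
one_hp_in_watts = horsepower_to_watts(1)
```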
SORTING OUT METRIC AND BRITISH UNITS.
The British system, of course, is horridly cumbersome compared to the metric system, and thus it long ago fell out of favor with the international scientific community. The British system is the product of loosely developed conventions that emerged over time: for instance, a foot was based on the length of the reigning king's foot, and in time, this became standardized. By contrast, the metric system was created quite deliberately over a matter of just a few years following the French Revolution, which broke out in 1789. The metric system was adopted ten years later.
During the revolutionary era, French intellectuals believed that every aspect of existence could and should be treated in highly rational, scientific terms. Out of these ideas arose much folly—especially after the supposedly "rational" leaders of the revolution began chopping off people's heads—but one of the more positive outcomes was the metric system. This system, based entirely on the number 10 and its exponents, made it easy to relate one figure to another: for instance, there are 100 centimeters in a meter and 1,000 meters in a kilometer. This is vastly more convenient than converting 12 inches to a foot, and 5,280 feet to a mile.
For this reason, scientists—even those from the Anglo-American world—use the metric system for measuring not only horizontal space, but volume, temperature, pressure, work, power, and so on. Within the scientific community, in fact, the metric system is known as SI, an abbreviation of the French Système International d'Unités —that is, "International System of Units."
Americans have shown little interest in adopting the SI system, yet where power is concerned, there is one exception. For measuring the power of a mechanical device, such as an automobile or even a garbage disposal, Americans use the British horsepower. However, for measuring electrical power, the SI kilowatt is used. When an electric utility performs a meter reading on a family's power usage, it measures that usage in terms of electrical "work" performed for the family, and thus bills them by the kilowatt-hour.
Three Types of Energy
KINETIC AND POTENTIAL ENERGY FORMULAE.
Earlier, energy was defined as the ability of an object to accomplish work—a definition that by this point has acquired a great deal more meaning. There are three types of energy: kinetic energy, or the energy that something possesses by virtue of its motion; potential energy, the energy it possesses by virtue of its position; and rest energy, the energy it possesses by virtue of its mass.
The formula for kinetic energy is KE = ½mv². In other words, for an object of mass m, kinetic energy is equal to half the mass multiplied by the square of its speed v. The actual derivation of this formula is a rather detailed process, involving reference to the second of the three laws of motion formulated by Sir Isaac Newton (1642-1727). The second law states that F = ma; in other words, force is equal to mass multiplied by acceleration. In order to understand kinetic energy, it is necessary, then, to understand the formula for uniform acceleration: vf² = v0² + 2as, where vf is the final speed of the object, v0 its initial speed, a its acceleration, and s the distance covered. By substituting values within these equations, one arrives at the formula ½mv² for kinetic energy.
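The substitution described above can be written out explicitly, starting from Newton's second law and the uniform-acceleration formula:

```latex
\begin{aligned}
W &= Fs = (ma)s,\\
v_f^2 &= v_0^2 + 2as \quad\Longrightarrow\quad as = \tfrac{1}{2}\bigl(v_f^2 - v_0^2\bigr),\\
W &= m\cdot\tfrac{1}{2}\bigl(v_f^2 - v_0^2\bigr) = \tfrac{1}{2}mv_f^2 - \tfrac{1}{2}mv_0^2.
\end{aligned}
```

For an object accelerated from rest (v0 = 0), the work done on it is ½mvf², which is precisely the kinetic energy it acquires.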
The above is simply another form of the general formula for work—since energy is, after all, the ability to perform work. In order to produce an amount of kinetic energy equal to ½ mv 2 within an object, one must perform an amount of work on it equal to Fs. Hence, kinetic energy also equals Fs, and thus the preceding paragraph simply provides a means for translating that into more specific terms.
The potential energy (PE) formula is much simpler, but it also relates to a work formula given earlier: that of work done against gravity. Potential energy, in this instance, is simply a function of gravity and the distance h above some reference point. Hence, its formula is the same as that for work done against gravity, mgh or wh, where w stands for weight. (Note that this refers to potential energy in a gravitational field; potential energy may also exist in an electromagnetic field, in which case the formula would be different from the one presented here.)
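A quick numerical sketch of the gravitational formula follows (the function name is illustrative; g ≈ 9.81 m/s² is the standard value near Earth's surface):

```python
G = 9.81  # gravitational acceleration near Earth's surface, m/s^2

def potential_energy(mass_kg, height_m):
    # PE = m * g * h, measured relative to the chosen reference height.
    return mass_kg * G * height_m

# A 2 kg object lifted 3 m gains about 58.9 J of potential energy.
pe = potential_energy(2, 3)
```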
REST ENERGY AND ITS INTRIGUING FORMULA.
Finally, there is rest energy, which, though it may not sound very exciting, is in fact the most intriguing—and the most complex—of the three. Ironically, the formula for rest energy is far, far more complex in derivation than that for potential or even kinetic energy, yet it is much more well-known within the popular culture.
Indeed, E = mc² is perhaps the most famous physics formula in the world—even more so than the much simpler F = ma. The formula for rest energy, as many people know, comes from the man whose Theory of Relativity invalidated certain specifics of the Newtonian framework: Albert Einstein (1879-1955). As for what the formula actually means, that will be discussed later.
Falling and Bouncing Balls
One of the best—and most frequently used—illustrations of potential and kinetic energy involves standing at the top of a building, holding a baseball over the side. Naturally, this is not an experiment to perform in real life. Due to its relatively small mass, a falling baseball does not have a great amount of kinetic energy, yet in the real world, a variety of other conditions (among them inertia, the tendency of an object to maintain its state of motion) conspire to make a hit on the head with a baseball potentially quite serious. If dropped from a great enough height, it could be fatal.
When one holds the baseball over the side of the building, potential energy is at a peak; once the ball is released, potential energy begins to decrease in favor of kinetic energy. The relationship between the two is, in fact, inverse: as the value of one decreases, that of the other increases in exact proportion. By the time the ball reaches the ground, its potential energy has fallen to zero, while its kinetic energy has reached a maximum equal to the potential energy the ball possessed at the start. Thus the sum of kinetic energy and potential energy remains constant, reflecting the conservation of energy, a subject discussed below.
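That constant sum can be checked numerically. The sketch below (function and variable names are invented for this illustration) uses the standard kinematic relation v² = 2g(h₀ − h) for the speed after falling from height h₀ to height h:

```python
G = 9.81  # m/s^2

def energy_during_fall(mass_kg, drop_height_m, steps=5):
    # Returns (height, PE, KE, PE + KE) at evenly spaced heights during the fall.
    # Speed after falling from drop_height_m to h follows v^2 = 2 * G * (h0 - h).
    rows = []
    for i in range(steps + 1):
        h = drop_height_m * (1 - i / steps)
        pe = mass_kg * G * h
        ke = 0.5 * mass_kg * (2 * G * (drop_height_m - h))
        rows.append((h, pe, ke, pe + ke))
    return rows

# For a 0.15 kg ball dropped from 20 m, the PE + KE column is the same everywhere.
table = energy_during_fall(0.15, 20)
```

At every height the total is m · g · h₀: potential energy drains away at exactly the rate kinetic energy accumulates.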
It is relatively easy to understand how the ball acquires kinetic energy in its fall, but potential energy is somewhat more challenging to comprehend. The ball does not really "possess" the potential energy: potential energy resides within an entire system comprised by the ball, the space through which it falls, and the Earth. There is thus no "magic" in the reciprocal relationship between potential and kinetic energy: both are part of a single system, which can be envisioned by means of an analogy.
Imagine that one has a 20-dollar bill, then buys a pack of gum. Now one has, say, $19.20. The positive value of dollars has decreased by $0.80, but now one has increased "non-dollars" or "anti-dollars" by the same amount. After buying lunch, one might be down to $12.00, meaning that "anti-dollars" are now up to $8.00. The same will continue until the entire $20.00 has been spent. Obviously, there is nothing magical about this: the 20-dollar bill was a closed system, just like the one that included the ball and the ground. And just as potential energy decreased while kinetic energy increased, so "non-dollars" increased while dollars decreased.
The example of the baseball illustrates one of the most fundamental laws in the universe, the conservation of energy: within a system isolated from all other outside factors, the total amount of energy remains the same, though transformations of energy from one form to another take place. An interesting example of this comes from the case of another ball and another form of vertical motion.
This time instead of a baseball, the ball should be one that bounces: any ball will do, from a basketball to a tennis ball to a superball. And rather than falling from a great height, this one is dropped through a range of motion ordinary for a human being bouncing a ball. It hits the floor and bounces back—during which time it experiences a complex energy transfer.
As was the case with the baseball dropped from the building, the ball (or more specifically, the system involving the ball and the floor) possesses maximum potential energy prior to being released. Then, in the split-second before its impact on the floor, kinetic energy will be at a maximum while potential energy reaches zero.
So far, this is no different than the baseball scenario discussed earlier. But note what happens when the ball actually hits the floor: it stops for an infinitesimal fraction of a moment. The impact on the floor (which in this example is assumed to be perfectly rigid) has dented the surface of the ball, and this saps the ball's kinetic energy just at the moment when the energy had reached its maximum value. In accordance with the energy conservation law, that energy did not simply disappear: rather, it was briefly stored as potential energy in the compressed material of the ball.
Meanwhile, in the wake of its huge energy loss, the ball is motionless. An instant later, however, the compressed ball springs back: the stored energy is converted back into kinetic energy as the ball undents and rebounds. As it flies upward, its kinetic energy begins to diminish, but potential energy increases with height. Assuming that the person who released it catches it at exactly the same height at which he or she let it go, then potential energy is at the level it was before the ball was dropped.
WHEN A BALL LOSES ITS BOUNCE.
The above, of course, takes little account of energy "loss"—that is, the transfer of energy from one body to another. In fact, a part of the ball's kinetic energy will be lost to the floor because friction with the floor will lead to an energy transfer in the form of thermal, or heat, energy. The sound that the ball makes when it bounces also requires a slight energy loss; but friction—a force that resists motion when the surface of one object comes into contact with the surface of another—is the principal culprit where energy transfer is concerned.
Of particular importance is the way the ball responds in that instant when it hits bottom and stops. Hard rubber balls are better suited for this purpose than soft ones, because the harder the rubber, the greater the tendency of the molecules to experience only elastic deformation. What this means is that the spacing between molecules changes, yet their overall position does not.
If, however, the molecules change positions, this causes them to slide against one another, which produces friction and reduces the energy that goes into the bounce. Once the internal friction reaches a certain threshold, the ball is "dead"—that is, unable to bounce. The deader the ball is, the more its kinetic energy turns into heat upon impact with the floor, and the less energy remains for bouncing upward.
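One common way to quantify this (not used in the article itself, so take it as an added modeling assumption) is the coefficient of restitution e, the fraction of impact speed the ball keeps on each bounce. Since kinetic energy, and hence rebound height, scales with the square of speed, each rebound reaches e² times the previous height:

```python
def bounce_heights(initial_height_m, restitution, bounces):
    # Each impact keeps a fraction `restitution` of the speed; since kinetic
    # energy (and thus rebound height) goes as speed squared, each rebound
    # reaches restitution**2 times the previous height.
    heights = [initial_height_m]
    for _ in range(bounces):
        heights.append(heights[-1] * restitution ** 2)
    return heights

# A lively ball (e = 0.9) dropped from 1 m rebounds to 0.81 m, then about 0.66 m.
lively = bounce_heights(1.0, 0.9, 3)
# A nearly "dead" ball (e = 0.3) rebounds to only 0.09 m on its first bounce.
dead = bounce_heights(1.0, 0.3, 3)
```

The "deader" the ball, the smaller e is, and the faster the rebound heights collapse toward zero.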
Varieties of Energy in Action
The preceding illustration makes several references to the conversion of kinetic energy to thermal energy, but it should be stressed that there are only three fundamental varieties of energy: potential, kinetic, and rest. Though heat is often discussed as a form unto itself, this is done only because the topic of heat or thermal energy is complex: in fact, thermal energy is simply a result of the kinetic energy between molecules.
To draw a parallel, most languages permit the use of only three basic subject-predicate constructions: first person ("I"), second person ("you"), and third person ("he/she/it.") Yet within these are endless varieties such as singular and plural nouns or various temporal orientations of verbs: present ("I go"); present perfect ("I have gone"); simple past ("I went"); past perfect ("I had gone.") There are even "moods," such as the subjunctive or hypothetical, which permit the construction of complex thoughts such as "I would have gone." Yet for all this variety in terms of sentence pattern—actually, a degree of variety much greater than for that of energy types—all subject-predicate constructions can still be identified as first, second, or third person.
One might thus describe thermal energy as a manifestation of energy, rather than as a discrete form. Other such manifestations include electromagnetic (sometimes divided into electrical and magnetic), sound, chemical, and nuclear. The principles governing most of these are similar: for instance, the positive or negative attraction between two electromagnetically charged particles is analogous to the force of gravity.
One term not listed among manifestations of energy is mechanical energy, which is something different altogether: the sum of potential and kinetic energy. A dropped or bouncing ball was used as a convenient illustration of interactions within a larger system of mechanical energy, but the example could just as easily have been a roller coaster, which, with its ups and downs, quite neatly illustrates the sliding scale of kinetic and potential energy.
Likewise, the relationship of Earth to the Sun is one of potential and kinetic energy transfers: as with the baseball and Earth itself, the planet is pulled by gravitational force toward the larger body. When it is relatively far from the Sun, it possesses a higher degree of potential energy, whereas when closer, its kinetic energy is highest. Potential and kinetic energy can also be illustrated within the realm of electromagnetic, as opposed to gravitational, force: when a nail is some distance from a magnet, its potential energy is high, but as it moves toward the magnet, kinetic energy increases.
ENERGY CONVERSION IN A DAM.
A dam provides a beautiful illustration of energy conversion: not only from potential to kinetic, but from energy in which gravity provides the force component to energy based in electromagnetic force. A dam big enough to be used for generating hydroelectric power forms a vast steel-and-concrete curtain that holds back millions of tons of water from a river or other body. The water nearest the top—the "head" of the dam—thus has enormous potential energy.
Hydroelectric power is created by allowing controlled streams of this water to flow downward, gathering kinetic energy that is then transferred to powering turbines. Dams in popular vacation spots often release a certain amount of water for recreational purposes during the day. This makes it possible for rafters, kayakers, and others downstream to enjoy a relatively fast-flowing river. (Or, to put it another way, a stream with high kinetic energy.) As the day goes on, however, the sluice-gates are closed once again to build up the "head." Thus when night comes, and energy demand is relatively high as people retreat to their homes, vacation cabins, and hotels, the dam is ready to provide the power they need.
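The power available from such a dam can be estimated with the standard hydropower relation P = ρ · g · Q · h, the rate at which falling water gives up potential energy (the efficiency figure and function name below are illustrative assumptions, not values from the article):

```python
RHO_WATER = 1000.0  # density of water, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2

def hydro_power_watts(head_m, flow_m3_per_s, efficiency=0.9):
    # P = efficiency * rho * g * Q * h: potential energy surrendered per second
    # by the falling water, reduced by turbine and generator losses.
    return efficiency * RHO_WATER * G * flow_m3_per_s * head_m

# A 100 m head and a flow of 50 cubic meters per second yield roughly 44 MW.
p = hydro_power_watts(100, 50)
```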
OTHER MANIFESTATIONS OF ENERGY.
Thermal and electromagnetic energy are much more readily recognizable manifestations of energy, yet sound and chemical energy are two forms that play a significant part as well. Sound, which is essentially nothing more than the series of pressure fluctuations within a medium such as air, possesses enormous energy: consider the example of a singer hitting a certain note and shattering a glass.
Contrary to popular belief, the note does not have to be particularly high: rather, the note must match the natural frequency at which the glass itself vibrates. When this occurs, sound energy is transferred directly to the glass, which is shattered by this sudden net intake of energy. Sound waves can be much more destructive than that: not only can the sound of very loud music cause permanent damage to the eardrums, but also, sound waves of certain frequencies and decibel levels can actually drill through steel. Indeed, sound is not just a by-product of an explosion; it is part of the destructive force.
As for chemical energy, it is associated with the pull that binds together atoms within larger molecular structures. The formation of water molecules, for instance, depends on the chemical bond between hydrogen and oxygen atoms. The combustion of materials is another example of chemical energy in action.
With both chemical and sound energy, however, it is easy to show how these simply reflect the larger structure of potential and kinetic energy discussed earlier. Hence sound, for instance, is potential energy when it emerges from a source, and becomes kinetic energy as it moves toward a receiver (for example, a human ear). Furthermore, the molecules in a combustible material contain enormous chemical potential energy, which becomes kinetic energy when released in a fire.
Rest Energy and Its Nuclear Manifestation
Nuclear energy is similar to chemical energy, though in this instance, it is based on the binding of particles within an atom and its nucleus. But it is also different from all other kinds of energy, because its force component is neither gravitational nor electromagnetic, but based on one of two other known varieties of force: strong nuclear and weak nuclear. Furthermore, nuclear energy—to a much greater extent than thermal or chemical energy—involves not only kinetic and potential energy, but also the mysterious, extraordinarily powerful, form known as rest energy.
Throughout this discussion, there has been little mention of rest energy; yet it is ever-present. Kinetic and potential energy rise and fall with respect to one another; but rest energy changes little. In the baseball illustration, for instance, the ball had the same rest energy at the top of the building as it did in flight—the same rest energy, in fact, that it had when sitting on the ground. And its rest energy is enormous.
This brings back the subject of the rest energy formula: E = mc², famous because it made possible the creation of the atomic bomb. The latter, which fortunately has been detonated in warfare only twice in history, brought a swift end to World War II when the United States unleashed it against Japan in August 1945. From the beginning, it was clear that the atom bomb possessed staggering power, and that it would forever change the way nations conducted their affairs in war and peace.
Yet the atom bomb involved only nuclear fission, or the splitting of an atom, whereas the hydrogen bomb that appeared just a few years after the end of World War II used an even more powerful process, the nuclear fusion of atoms. Hence, the hydrogen bomb upped the ante to a much greater extent, and soon the two nuclear superpowers—the United States and the Soviet Union—possessed the power to destroy most of the life on Earth.
The next four decades were marked by a superpower struggle to control "the bomb" as it came to be known—meaning any and all nuclear weapons. Initially, the United States controlled all atomic secrets through its heavily guarded Manhattan Project, which created the bombs used against Japan. Soon, however, spies such as Julius and Ethel Rosenberg provided the Soviets with U.S. nuclear secrets, ensuring that the dictatorship of Josef Stalin would possess nuclear capabilities as well. (The Rosenbergs were executed for treason, and their alleged innocence became a celebrated cause among artists and intellectuals; however, Soviet documents released since the collapse of the Soviet empire make it clear that they were guilty as charged.)
Both nations began building up missile arsenals. It was not, however, just a matter of the United States and the Soviet Union. By the 1970s, there were at least three other nations in the "nuclear club": Britain, France, and China. There were also other countries on the verge of developing nuclear bombs, among them India and Israel. Furthermore, there was a great threat that a terrorist leader such as Libya's Muammar al-Qaddafi would acquire nuclear weapons and do the unthinkable: actually use them.
Though other nations acquired nuclear weapons, however, the scale of the two super-power arsenals dwarfed all others. And at the heart of the U.S.-Soviet nuclear competition was a sort of high-stakes chess game—to use a metaphor mentioned frequently during the 1970s. Soviet leaders and their American counterparts both recognized that it would be the end of the world if either unleashed their nuclear weapons; yet each was determined to be able to meet the other's ever-escalating nuclear threat.
United States President Ronald Reagan earned harsh criticism at home for his nuclear buildup and his hard line in negotiations with Soviet President Mikhail Gorbachev; but as a result of this one-upmanship, he put the Soviets into a position where they could no longer compete. As they put more and more money into nuclear weapons, they found themselves less and less able to uphold their already weak economic system. This was precisely Reagan's purpose in using American economic might to outspend the Soviets—or, in the case of the proposed multi-trillion-dollar Strategic Defense Initiative (SDI or "Star Wars")—threatening to outspend them. The Soviets expended much of their economic energy in competing with U.S. military strength, and this (along with a number of other complex factors), spelled the beginning of the end of the Communist empire.
E = MC².
The purpose of the preceding historical brief is to illustrate the epoch-making significance of a single scientific formula: E = mc². It ended World War II and ensured that no war like it would ever happen again—but brought on the specter of global annihilation. It created a superpower struggle—yet it also ultimately helped bring about the end of Soviet totalitarianism, thus opening the way for a greater level of peace and economic and cultural exchange than the world has ever known. Yet nuclear arsenals still remain, and the nuclear threat is far from over.
So just what is this literally earth-shattering formula? E stands for rest energy, m for mass, and c for the speed of light, which is about 186,000 mi (300,000 km) per second. Squared, this yields an almost unbelievably staggering number.
Hence, even an object of insignificant mass possesses an incredible amount of rest energy. The baseball, for instance, weighs only about 0.333 lb, which—on Earth, at least—converts to 0.15 kg. (The latter is a unit of mass, as opposed to weight.) Yet when factored into the rest energy equation, it yields about 3.75 billion kilowatt-hours—enough to provide an American home with enough electrical power to last it more than 156,000 years!
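These figures are easy to verify. In the sketch below (function names invented for the illustration), the mass of the baseball and the exact speed of light are plugged into Einstein's formula:

```python
C = 299_792_458.0  # speed of light in a vacuum, m/s

def rest_energy_joules(mass_kg):
    # Einstein's formula: E = m * c^2.
    return mass_kg * C ** 2

def joules_to_kwh(joules):
    # One kilowatt-hour equals 3.6 million joules.
    return joules / 3.6e6

# The 0.15 kg baseball: about 1.35e16 J, or roughly 3.7 billion kilowatt-hours.
e = rest_energy_joules(0.15)
kwh = joules_to_kwh(e)
```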
How can a mere baseball possess such energy? It is not the baseball in and of itself, but its mass; thus every object with mass of any kind possesses rest energy. Often, mass energy can be released in very small quantities through purely thermal or chemical processes: hence, when a fire burns, an almost infinitesimal portion of the matter that went into making the fire is converted into energy. If a stick of dynamite that weighed 2.2 lb (1 kg) exploded, the portion of it that "disappeared" would be equal to 6 parts out of 100 billion; yet that portion would cause a blast of considerable proportions.
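The dynamite figure can be checked the same way. Assuming a chemical energy of roughly 5 megajoules per kilogram for dynamite (an approximate textbook value, not taken from the article), the fraction of mass converted is E/c²:

```python
C = 299_792_458.0        # speed of light, m/s
DYNAMITE_J_PER_KG = 5e6  # assumed chemical energy of dynamite, ~5 MJ/kg

def mass_fraction_converted(energy_per_kg):
    # m = E / c^2 gives the mass equivalent of the released chemical energy.
    return energy_per_kg / C ** 2

# Roughly 5.6e-11, i.e. on the order of 6 parts out of 100 billion.
fraction = mass_fraction_converted(DYNAMITE_J_PER_KG)
```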
As noted much earlier, the derivation of Einstein's formula—and, more to the point, how he came to recognize the fundamental principles involved—is far beyond the scope of this essay. What is important is the fact, hypothesized by Einstein and confirmed in subsequent experiments, that matter is convertible to energy, a fact that becomes apparent when matter is accelerated to speeds close to that of light.
Physicists do not possess a means for propelling a baseball to a speed near that of light—or of controlling its behavior and capturing its energy. Instead, atomic energy—whether of the wartime or peacetime varieties (that is, in power plants)—involves the acceleration of mere atomic particles. Nor will just any atom do: typically physicists use uranium and other rare, heavy elements, and often, they further process these materials in highly specialized ways. It is the rarity and expense of those materials, incidentally—not the difficulty of actually putting atomic principles to work—that has kept smaller nations from developing their own nuclear arsenals.
WHERE TO LEARN MORE
Beiser, Arthur. Physics, 5th ed. Reading, MA: Addison-Wesley, 1991.
Berger, Melvin. Sound, Heat and Light: Energy at Work. Illustrated by Anna DiVito. New York: Scholastic, 1992.
Gardner, Robert. Energy Projects for Young Scientists. New York: F. Watts, 1987.
"Kinetic and Potential Energy" Thinkquest (Web site). <http://library.thinkquest.org/2745/data/ke.htm> (March 12, 2001).
Snedden, Robert. Energy. Des Plaines, IL: Heinemann Library, 1999.
Suplee, Curt. Everyday Science Explained. Washington, D.C.: National Geographic Society, 1996.
"Work and Energy" (Web site). <http://www.glenbrook.k12.il.us/gbssci/phys/Class/energy/energtoc.html> (March 12, 2001).
World of Coasters (Web site). <http://www.worldofcoasters.com> (March 12, 2001).
Zubrowski, Bernie. Raceways: Having Fun with Balls and Tracks. Illustrated by Roy Doty. New York: Morrow, 1985.
CONSERVATION OF ENERGY:
A law of physics which holds that within a system isolated from all other outside factors, the total amount of energy remains the same, though transformations of energy from one form to another take place.
COSINE:
For an acute angle (less than 90°) in a right triangle, the cosine (abbreviated cos) is the ratio between the adjacent leg and the hypotenuse. Regardless of the size of the triangle, this figure is a constant for any particular angle.
ENERGY:
The ability of an object (or in some cases a non-object, such as a magnetic force field) to accomplish work.
FRICTION:
The force that resists motion when the surface of one object comes into contact with the surface of another.
HORSEPOWER:
The British unit of power, equal to 550 foot-pounds per second.
HYPOTENUSE:
In a right triangle, the side opposite the right angle.
JOULE:
The SI measure of work. One joule (1 J) is equal to the work required to accelerate 1 kilogram of mass by 1 meter per second squared (1 m/s²) over a distance of 1 meter. Due to the small size of the joule, however, it is often replaced by the kilowatt-hour, equal to 3.6 million (3.6 × 10⁶) J.
KINETIC ENERGY:
The energy that an object possesses by virtue of its motion.
MATTER:
Physical substance that occupies space, has mass, is composed of atoms (or in the case of subatomic particles, is part of an atom), and is convertible into energy.
MECHANICAL ENERGY:
The sum of potential energy and kinetic energy within a system.
POTENTIAL ENERGY:
The energy that an object possesses by virtue of its position.
POWER:
The rate at which work is accomplished over time, a figure rendered mathematically as work divided by time. The SI unit of power is the watt, while the British unit is the foot-pound per second. The latter, because it is small, is usually reckoned in terms of horsepower.
REST ENERGY:
The energy an object possesses by virtue of its mass.
RIGHT TRIANGLE:
A triangle that includes a right (90°) angle. The other two angles are, by definition, acute, or less than 90°.
SCALAR:
A quantity that possesses only magnitude, with no specific direction.
SI:
An abbreviation of the French Système International d'Unités, which means "International System of Units." This is the term within the scientific community for the entire metric system, as applied to a wide variety of quantities ranging from length, weight, and volume to work and power, as well as electromagnetic units.
SYSTEM:
In discussions of energy, the term "system" refers to a closed set of interactions free from interference by outside factors. An example is the baseball dropped from a height to illustrate potential energy and kinetic energy: the ball, the space through which it falls, and the ground below together form a system.
VECTOR:
A quantity that possesses both magnitude and direction.
WATT:
The metric unit of power, equal to 1 joule per second. Because this is such a small unit, scientists and engineers typically speak in terms of kilowatts, or units of 1,000 watts.
WORK:
The exertion of force over a given distance. Work is the product of force and distance, where force and distance are exerted in the same direction. Hence the actual formula for work is F · cos θ · s, where F = force, s = distance, and cos θ is equal to the cosine of the angle θ (the Greek letter theta) between F and s. In the metric or SI system, work is measured by the joule (J), and in the British system by the foot-pound.
"Energy." Science of Everyday Things. 2002. Encyclopedia.com. (July 29, 2016). http://www.encyclopedia.com/doc/1G2-3408600087.html
Energy is the capacity for doing work. In physics, "work" has a more formal definition than in everyday life: it means the ability to exert a force through a distance. If you pick up this book, energy stored in molecular bonds inside your body is released to move the book's mass. The energy was stored in the molecules of the foods you ate and is released through a chemical reaction. Food provides the fuel that gives us energy.
Similarly, whether we are talking about automobile engines or power plant boilers, we need to have a fuel with stored energy that can be released in a useable way. Fossil fuels such as coal, oil, and natural gas provide much of the energy we use in industry and in our personal lives. These fuels were created by geological processes over millions of years, as plants and marine microorganisms consisting largely of carbon became buried under the earth. These fossilized materials were eventually transformed into coal or oil by the high pressures and temperatures inside the planet.
Because of the long time and extreme conditions needed to create fossil fuels, we cannot simply replace them at will: they are a nonrenewable resource. Every time we pump oil from the ground we are depleting an irreplaceable natural resource. Eventually, we will exhaust the supplies of fossil fuels in the earth, and we will have to develop alternative energy sources to power our society. Exactly when we will run out of fossil fuels is a subject of great debate. A careful distinction must be made here between "reserves" and "resources." Reserves are defined as economically recoverable with known technology and within a price range close to the present price; resources are theoretical maximum potentials based on geological information, and include reserves. The Energy Information Administration (EIA) of the United States Department of Energy has estimated the worldwide coal resources at 1,083 billion tons; the oil reserve at approximately 1,200 billion barrels, with resources estimated at three trillion barrels; and the worldwide natural gas reserve at 5,500 trillion cubic feet. The nonprofit Corporation for Public Access to Science and Technology (CPAST) in St. Louis, Missouri, has estimated, from earlier data published in the United States Department of Energy's 1996 Annual Energy Review, that these combined fossil-fuel resources would last until the year 2111 if usage remained constant at 1995 levels. The EIA predicts that coal resources could last for 220 years at current usage rates. Estimates change when new technology makes fuel that was previously considered "unrecoverable" suddenly accessible; these numbers should be used only as rough guidelines.
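The arithmetic behind such lifetime estimates is simple division, as the sketch below shows (the helper function is illustrative; the coal figures are the ones quoted above and are not independently verified):

```python
def years_remaining(resource, annual_usage):
    # Simple exhaustion estimate: assumes usage stays constant year after year.
    return resource / annual_usage

# The EIA's 220-year figure for 1,083 billion tons of coal implies annual usage
# of roughly 4.9 billion tons (figures as quoted in the text).
implied_annual_usage = 1_083e9 / 220
```

Because real usage grows and estimates of recoverable fuel shift with technology, such figures are rough guidelines rather than predictions.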
Transforming Energy into Work: Gasoline Engines and Steam Boilers
Gasoline, which consists largely of hydrocarbon molecules—chains of connected carbon and hydrogen atoms—acts as a fuel in an automobile engine. It is a product of the distillation of raw petroleum. The energy that holds these carbon and hydrogen atoms together is stored in the bonds between each atom.
In an automobile, gasoline is mixed with air in the combustion chamber of an engine cylinder, the mixture is compressed by a piston, and a spark from the spark plug ignites the mixture. The ideal chemical reaction for this process, written here for octane (C8H18), a major component of gasoline, is:

2 C8H18 + 25 O2 → 16 CO2 + 18 H2O + energy
The energy is released in the form of heat, which causes the gases to expand and pushes the piston outward. The piston is connected to a rod and a crankshaft that ultimately transform the energy locked up in molecules into the revolution of wheels, setting your car in motion. The combustion products of carbon dioxide and water are expelled through the exhaust system into the atmosphere.
Similarly, a boiler in a power plant relies on the release of energy from burning coal or natural gas to heat water and convert it into steam. The steam turns the blades of a turbine-powered generator that ultimately causes electrons to move through a wire, converting the energy from the fuel into electrical energy that can be used to power appliances in your home.
In each of these cases, energy stored in chemical bonds is transformed into useful energy that can perform work.
Energy and Pollution
The chemical reaction shown above is an ideal one, but conditions in the real world are usually far from ideal. If the right amounts of oxygen and gasoline are not present in the cylinder of a car engine (because of a dirty air filter or a faulty fuel injection system, for example), poisonous carbon monoxide can form. Similarly, some of the hydrocarbons might escape from the engine unburned, releasing pollutants such as methane into the air. Nitrogen from the air inside the cylinder can combine with oxygen to form the pollutants nitric oxide and nitrogen dioxide, collectively known as NOx compounds, which can be converted to ground-level ozone in the presence of sunlight. Even carbon dioxide—one of the "ideal" products of complete combustion in an engine or a power plant—has been identified as a "greenhouse gas" that is partially responsible for global warming.
The coal used in power plants does not emerge from the ground as pure carbon. It is laced with varying amounts of different contaminants, including sulfur, which vary from coal mine to coal mine. These, too, can find their way into the atmosphere as pollutants when the coal is burned to heat the water in a boiler. Most notably, sulfur dioxide, emitted into the air, converts to sulfuric acid, a major component in acid rain. Power plants are required, to varying degrees, to clean up these emissions before they reach the atmosphere, but again, no process is 100-percent efficient.

[Table: type of energy | percentage of U.S. energy pool | contribution to pollution (percentage of carbon emissions). Source: Lawrence Livermore National Laboratory, Energy & Environment Directorate, "U.S. Energy Flow 2000." Available from http://en-env.llnl.gov/flow.]
Besides the pollutants associated with the use of fossil fuels, drilling for oil and mining coal can be an additional source of pollution. An oil spill while drilling or transporting oil can lead to disastrous ecological damage, and rain runoff from a strip mine can carry coal particles and chemical byproducts into the local water supply.
Nuclear and Alternative Fuels
Nuclear energy is not based on combustion of fuel. Rather, the energy is released when heavy atomic nuclei split apart in a process called fission. In a reactor, for example, a neutron striking a uranium-235 nucleus splits it into lighter fragments, releasing energy and additional neutrons that sustain a chain reaction. This energy can be used to heat water without burning coal or oil, so its use is therefore cleaner. However, radiation emitted in the event of an accident at a nuclear power plant could harm people and wildlife and contaminate the food supply. Nuclear waste, in the form of spent fuel rods, is a very long-term by-product of nuclear energy.
Cleaner-burning fuels can be produced by processing agricultural products ("biomass") into ethanol. Thousands of acres of corn could be grown specifically for energy production rather than consumption by people or animals. Because the resulting ethanol comes from a controlled fermentation and distillation process, it is very pure and uncontaminated, and thus burns cleaner. Also, because a new crop can be grown every year, such fuels are renewable energy sources.
Hydropower, or the use of moving or falling water to generate energy, is one of the oldest technologies that still contributes significantly to our energy needs. Falling water was often used in old mills to turn a paddlewheel and move the heavy stones that were used to grind grain into flour. Later, the same concept was transferred to the production of electricity. Hydroelectric plants, such as the one in Niagara Falls, divert some of the water from the falls into the power plant. There the kinetic energy (the energy of objects in motion) of the falling water turns turbines and generates electricity that can be sold to residents and industrial users in the area.
Solar power, wind power, and fuel cells powered by a reaction of hydrogen plus oxygen to form water are other alternative energy sources that are being explored.
Industry and Environment
Suppose you are the owner of a manufacturing plant. You need large amounts of fuel to keep your plant running. To maximize your profits, you would like to purchase this fuel very cheaply. The cheapest option would be if the energy company could take the fuel straight from the ground and sell it to you "as is." But fossil fuels must be processed before they can be used. Petroleum products must go to the refinery to be separated into various components such as gasoline and diesel fuel, and contaminants such as sulfur have to be minimized. All these processing steps add cost to the fuel.
Even after you obtain a relatively clean fuel, your manufacturing process may result in pollutants that could find their way into the atmosphere or rivers. Again, efforts to clean up these emissions will cost you money. Chemical systems that scrub the pollutants from the emissions, or filters that capture particulates, are expensive and raise your production costs.
But there may be people who are more concerned about a healthy environment than your profits. They might insist that you take whatever steps are necessary on both the inlet (fuel) side and the outlet (emissions and runoff) side to make the world a better, safer place to live. They may lobby to have laws passed that require you to clean up any emissions from your plant.
You want a clean environment too, but even the most environmentally conscious company must make a profit to stay in business. Environmental regulations add to the cost of producing your product, but this is no different from all the other costs you incur (raw materials, labor, transportation, marketing, etc.). If all competitors in an industry are constrained by the same regulations, then the playing field is level; every company in the field may have to raise its prices to make up for the added costs of compliance, but prices for similar products should remain competitive. However, if competitors in foreign countries are able to operate without these same environmental regulations, they can market their products more cheaply and make it more difficult for domestic producers to stay in business. It is this kind of imbalance in regulations that leads to job losses and gives the mistaken impression that we must choose either jobs or the environment. If governments can maintain a level playing field in environmental regulations, we can have both jobs and a clean environment worldwide.
The situation may be further confused by an argument among scientists and health professionals as to how much of a health problem a certain chemical represents. Something that seems safe today may be discovered to be a health risk ten years from now. Until we understand how various chemicals interact with our bodies, there may be room for discussion on allowable levels of emission.
In light of the depletion of nonrenewable resources, it is important that we try to conserve energy whenever possible. Because the transformation of fuel into useful energy inevitably creates pollutants, we must reduce our energy consumption to reduce pollution. Using your air conditioner less during the summer by setting the thermostat higher can reduce the demand for electricity experienced by your energy provider. Your energy provider can burn less fossil fuel and still meet the needs of its customers, resulting in less pollution. Carpooling removes unnecessary vehicles from the road, reducing gasoline consumption and air pollution. Energy conservation efforts thus help at both ends of the cycle: they slow down the depletion of fuel reserves and, at the same time, clean up the environment.
The Politics of Energy
Because the conditions necessary for the creation of fossil fuels varied geographically throughout the earth's history, fossil fuels are not distributed evenly around the globe. Significant concentrations of oil occur in the Middle East, the North Sea, Russia, Texas, and Alaska, for example. Countries that control the world's access to oil have economic power over countries that need their oil, which can lead to political tensions. The "energy crisis" created by the OPEC (Organization of the Petroleum Exporting Countries) nations in the 1970s, when they artificially reduced the supply of oil available on the world market, was a display of this political and economic power. Iraq's invasion of Kuwait in 1990, an attempt to take over Kuwaiti oil fields, led to the Persian Gulf War of 1991. As long as there is uneven access to energy sources throughout the world, political tensions over the availability and cost of energy will continue.
see also Air Pollution; Alternative Energy; Carbon Dioxide; Coal; Disasters: Nuclear Accidents; Disasters: Oil Spills; Electric Power; Fossil Fuels; Nuclear Energy; Nuclear Wastes; Petroleum; Renewable Energy; Thermal Pollution.
Brain, Marshall. (2002). "How Car Engines Work." HowStuffWorks. Available from http://www.howstuffworks.com/steam.htm.
Brain, Marshall. (2002). "How Steam Engines Work." HowStuffWorks. Available from http://www.howstuffworks.com/steam.htm.
Energy Information Administration of the United States Department of Energy. (2003). "World Crude Oil and Natural Gas Reserves, Most Recent Estimates." Available from http://www.eia.gov/emeu/international/reserves.html.
Greenpeace. (1997). "Carbon Dioxide Emissions and Fossil Fuel Resources." Available from http://archive.greenpeace.org/~climate/science/reports/carbon/clfull-3.html.
Lawrence Livermore National Laboratory, Energy & Environment Directorate. "U.S. Energy Flow 2000." Available from http://en-env.llnl.gov/flow.
Lawrence Livermore National Laboratory, Energy & Environment Directorate. "U.S. 2000 Carbon Emissions from Energy Consumption." Available from http://en-env.llnl.gov/flow.
Mabro, Robert, ed. (1980). World Energy Issues and Policies: Proceedings of the First Oxford Energy Seminar (September 1979). Oxford: Oxford University Press.
Myhr, Franklin. (1998). "Overview of Fossil Fuel Energy Resources." Corporation for Public Access to Science and Technology (CPAST). Available from http://www.cpast.org/articles/fetch.adp?artnum=14.
Tipler, Paul A. (1982). Physics, 2nd ed. New York: Worth Publishers.
The tiny town of Cheshire, Ohio, lives in the shadow of American Electric Power's giant coal-burning Gen. James M. Gavin generating plant. Each summer, blue clouds of sulfuric acid rain down on the town, an unintended and ironic by-product of AEP's efforts to curb other emissions at the plant. Residents sued and in 2002, AEP agreed to buy the town rather than fight the pollution suit. All but a handful of Cheshire's 221 residents have agreed to sell and move. The cost: $20 million.
Palucka, Tim. "Energy." Pollution A to Z. 2004. Encyclopedia.com. (July 29, 2016). http://www.encyclopedia.com/doc/1G2-3408100086.html
Palucka, Tim. "Energy." Pollution A to Z. 2004. Retrieved July 29, 2016 from Encyclopedia.com: http://www.encyclopedia.com/doc/1G2-3408100086.html
Energy is the capacity to do work. In science, the term work has a very special meaning. It means that an object has been moved through a distance. Thus, pushing a brick across the top of a table is an example of doing work. By applying this definition of work, then, energy can also be defined as the ability to move an object through a distance. Imagine that a bar magnet is placed next to a pile of iron filings (thin slivers of iron metal). The iron filings begin to move toward the iron bar. We say that magnetic energy pulls on the iron filings and causes them to move.
Energy can be a difficult concept to understand. Unlike matter, energy cannot be held or placed on a laboratory bench for study. We know about energy best because of the effect it has on objects around it, as in the case of the bar magnet and iron filings mentioned above.
Energy can exist in many forms, including mechanical, heat, electrical, magnetic, sound, chemical, and nuclear. Although these forms appear to be very different from each other, they often have much in common and can generally be transformed from one to another.
Over time, a number of different units have been used to measure energy. In the British system, for example, the fundamental unit of energy is the foot-pound. One foot-pound is the amount of energy that can move a weight of one pound a distance of one foot. In the metric system, the fundamental unit of energy is the joule (abbreviation: J), named after English scientist James Prescott Joule (1818–1889). A joule is the amount of energy needed to move an object a distance of one meter against an opposing force of one newton.
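Because both units measure force times distance, converting between them takes one multiplication. The conversion factor (1 foot-pound ≈ 1.3558 joules) is a standard physical constant; the function names below are my own.

```python
FT_LB_TO_J = 1.3558  # joules per foot-pound (standard conversion factor)

def foot_pounds_to_joules(ft_lb):
    """Convert an energy in foot-pounds to joules."""
    return ft_lb * FT_LB_TO_J

def joules_to_foot_pounds(joules):
    """Convert an energy in joules to foot-pounds."""
    return joules / FT_LB_TO_J

print(foot_pounds_to_joules(10))  # about 13.56 J
```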
Potential and kinetic energy
Objects possess energy for one of two reasons: because of their position or because of their motion. The first type of energy is defined as potential energy; the second type of energy is defined as kinetic energy. Think of a baseball sitting on a railing at the top of the Empire State Building. That ball has potential energy because of its ability to fall off the railing and come crashing down onto the street. The potential energy of the baseball—as well as that of any other object—is dependent on two factors: its mass and its height above the ground. The baseball has a relatively small mass, but in this example it still has a large potential energy because of its distance above the ground.
Words to Know
Conservation of energy: A law of physics that says that energy can be transformed from one form to another, but can be neither created nor destroyed.
Joule: The unit of measurement for energy in the metric system.
Kinetic energy: The energy possessed by a body as a result of its motion.
Mass: Measure of the total amount of matter in an object.
Potential energy: The energy possessed by a body as a result of its position.
Velocity: The rate at which the position of an object changes with time, including both the speed and the direction.
The second type of energy, kinetic energy, is a result of an object's motion. The amount of kinetic energy possessed by an object is a function of two variables, its mass and velocity. The formula for kinetic energy is E = ½mv², where m is the mass of the object and v is its velocity. This formula shows that an object can have a lot of kinetic energy for two reasons: it can either be very heavy (large m) or it can be moving very fast (large v).
Imagine that the baseball mentioned previously falls off the Empire State Building. The ball can do a great deal of damage because it has a great deal of kinetic energy. The kinetic energy comes from the very high speed with which the ball is traveling by the time it hits the ground. The baseball may not weigh very much, but its high speed still gives it a great deal of kinetic energy.
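The trade between the two forms of energy can be checked numerically. The sketch below assumes a 0.145-kilogram baseball and a 381-meter drop (illustrative figures; the text gives neither) and ignores air resistance: the potential energy at the top equals the kinetic energy at the bottom.

```python
import math

g = 9.81   # acceleration due to gravity, m/s^2
m = 0.145  # assumed mass of a baseball, kg
h = 381.0  # assumed height of the Empire State Building roof, m

potential = m * g * h     # potential energy at the top: E = mgh
v = math.sqrt(2 * g * h)  # impact speed from free fall, no air resistance
kinetic = 0.5 * m * v**2  # kinetic energy at the bottom: E = ½mv²

print(round(potential, 1), round(kinetic, 1))  # the two values match
```

This equality is exactly the conservation of energy discussed in the next section: the energy changes form, but the total stays the same.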
Conservation of energy
In science, the term conservation means that the amount of some property is not altered during a chemical or physical change. At one time, physicists believed in the law of conservation of energy. That law states that the amount of energy present at the end of any physical or chemical change is exactly the same as the amount present at the beginning of the change. The form in which the energy appears may be different, but the total amount is constant. Another way to state the law of conservation of energy is that energy is neither created nor destroyed in a chemical or physical change.
As an example, suppose that you turn on an electric heater. A certain amount of electrical energy travels into the heater and is converted to heat. If you measure the amount of electricity entering the heater and the amount of heat given off, the amounts will be the same.
The law of conservation of energy is valid for the vast majority of situations that we encounter in our everyday lives. In the early 1900s, however, German-born American physicist Albert Einstein (1879–1955) made a fascinating discovery. Under certain circumstances, Einstein said, energy can be transformed into matter, and matter can be transformed into energy. Those circumstances are seldom encountered in daily life. When they are, a modified form of the law of conservation of energy applies. That modified form is known as the law of conservation of energy and matter. It says that the total amount of matter and energy is always conserved in any kind of change.
Forms of energy
We know of the existence of energy because of the various forms in which it occurs. When an explosion occurs, air is heated up to very high temperatures. The hot air expands quickly, knocking down objects in its path. Heat is a form of energy also known as thermal energy. Temperature is a measure of the amount of heat energy contained in an object.

Energy can be converted from one form to another, but the process is often very wasteful. An incandescent lightbulb is an example. When a lightbulb is turned on, electrical current flows into the wire filament in the bulb. The filament begins to glow, giving off light. That's what the bulb is designed to do. But most of the electrical energy entering the bulb is used to heat the wire first. That electrical energy is "wasted" since it is lost as heat; the lightbulb is not designed to be a source of heat.

The amount of useful energy obtained from some machine or some process compared to the amount of energy provided to the machine or process is called the energy efficiency of the machine or process. For example, a typical incandescent lightbulb converts about 90 percent of the electrical energy it receives to heat and 10 percent to light. Therefore, the energy efficiency of the lightbulb is said to be 10 percent.

Energy efficiency has come to have a new meaning in recent decades. The term also refers to any method by which the amount of useful energy can be increased in any machine or process. For example, some automobiles can travel 40 miles by burning a single gallon of gasoline, while others can travel only 20 miles per gallon. The energy efficiency achieved by the first car is twice that achieved by the second car.

Until the middle of the twentieth century, most developed nations did not worry very much about energy efficiency. Coal, oil, and natural gas—the fuels from which we get most of our energy—were cheap. It didn't make much difference to Americans and other people around the world if a lot of energy was wasted. We just dug up more coal or found more oil and gas to make more energy.

By the third quarter of the twentieth century, though, that attitude was much less common as people realized that natural resources won't last forever. Architects, automobile and airplane designers, plant managers, and the average home owner were all looking for ways to use energy more efficiently.
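The efficiency arithmetic described above is just a ratio of useful energy out to total energy in. This sketch (the function name is my own) reproduces the lightbulb and automobile comparisons:

```python
def efficiency(useful_energy_out, total_energy_in):
    """Fraction of the input energy converted into the desired form."""
    return useful_energy_out / total_energy_in

# Incandescent bulb: of every 100 J of electricity, about 10 J emerges as light.
print(efficiency(10, 100))  # 0.1, i.e., 10 percent

# Relative efficiency of two cars on the same fuel: 40 mpg vs. 20 mpg.
print(40 / 20)  # 2.0 -- the first car is twice as efficient
```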
Other forms of energy include electrical energy, magnetism, sound, chemical, and nuclear energy. Although these forms of energy appear to be very different from each other, they are all closely related: one form of energy can be changed into another, different form of energy.
An example of this principle is an electric power generating plant. In such a plant, coal or oil may be burned to boil water. Chemical energy stored in the coal or oil is converted to heat energy in steam. The steam can then be used to operate a turbine, a large fan mounted on a central rod. The steam strikes the fan and causes the rod to turn. Heat energy from the steam is converted to the kinetic energy of the rotating fan. Finally, the turbine runs an electric generator. In the generator, the kinetic energy of the rotating turbine is converted into electrical energy.
[See also Conservation laws; Electricity; Heat; Magnetism ]
"Energy." UXL Encyclopedia of Science. 2002. Encyclopedia.com. (July 29, 2016). http://www.encyclopedia.com/doc/1G2-3438100275.html
"Energy." UXL Encyclopedia of Science. 2002. Retrieved July 29, 2016 from Encyclopedia.com: http://www.encyclopedia.com/doc/1G2-3438100275.html
In the discussion of energy, the fundamental concept is that of work, which is motion against an opposing force. Energy is the capacity to do work. An object traveling at high speed and impacting on another object can do more work—can drive the object farther against an opposing force—than the same object moving slowly. This contribution to energy, the energy ascribed to motion, is called kinetic energy. The kinetic energy of an object of mass m traveling at a speed v is ½mv². An object may also have energy by virtue of its position. An object high above the surface of Earth has more energy (can do more work) than one at its surface. This contribution to the total energy, the energy due to position, is called potential energy. The relation between the object's position and potential energy depends on the nature of the force field it experiences. The potential energy of a body of mass m at a height h above the surface of Earth is mgh, where g is the acceleration of free fall at the location. More important for chemistry is the potential energy of one charge near another charge. The Coulomb potential energy of a charge q₁ at a distance r from a charge q₂ is given by q₁q₂/4πε₀r, where ε₀ is a fundamental constant called the vacuum permittivity. Energy is also stored in the electromagnetic field in the form of photons. The energy of a photon of radiation of frequency ν is hν, where h is Planck's constant.
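These expressions can be evaluated directly. As an illustration not taken from the text, the Coulomb potential energy of an electron and a proton separated by one Bohr radius works out to roughly −4.36 × 10⁻¹⁸ J; the physical constants below are standard approximate values.

```python
import math

EPS0 = 8.854e-12      # vacuum permittivity, C^2 / (J * m)
E_CHARGE = 1.602e-19  # elementary charge, C
BOHR = 5.292e-11      # Bohr radius, m (illustrative separation)

def coulomb_energy(q1, q2, r):
    """Coulomb potential energy V = q1*q2 / (4*pi*eps0*r), in joules."""
    return q1 * q2 / (4 * math.pi * EPS0 * r)

# Electron (charge -e) and proton (charge +e) one Bohr radius apart:
u = coulomb_energy(-E_CHARGE, E_CHARGE, BOHR)
print(u)  # roughly -4.36e-18 J (negative because the interaction is attractive)
```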
Energy is conserved. That is, the sum of the kinetic and potential energies of a single body remains constant provided it is free of external influences (forces). Thus, a falling weight accelerates: The fall implies a reduction of potential energy and the acceleration implies an increase in kinetic energy; the sum, though, is constant. A generalization (which can be interpreted as an implication) of the conservation of energy is the first law of thermodynamics, which focuses on a property of a many-body system called the internal energy. The internal energy can be interpreted as the sum of all the kinetic and potential energies of all the particles comprising the system. The first law of thermodynamics states that the internal energy of an isolated system is constant. The first law is closely related to the conservation of energy, but it acknowledges the possibility of the transfer of energy as heat, which is outside the reach of mechanics itself.
The special theory of relativity states that the mass of a body is a measure of its energy: E = mc 2, where c is the speed of light. That is, energy and mass are equivalent and interconvertible. Changes in mass are measurable only when changes in energy are considerable, which in practice commonly means for nuclear processes.
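The scale of the equivalence is easy to check. Converting just one gram of mass entirely into energy, with c ≈ 3.00 × 10⁸ m/s, yields on the order of 10¹⁴ joules; the one-gram figure is an illustrative choice, not from the text.

```python
C = 3.0e8  # speed of light, m/s (approximate)

def mass_to_energy(mass_kg):
    """Energy equivalent of a mass: E = m * c^2, in joules."""
    return mass_kg * C**2

print(mass_to_energy(0.001))  # about 9e13 J from a single gram of mass
```

This enormous factor of c² is why mass changes are measurable only in nuclear processes, where the energy changes are huge, and utterly negligible in ordinary chemistry.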
In chemistry we are often concerned with the transfer of energy from one location (e.g., a reaction vessel) to another (the surroundings of that vessel). One mode of transfer is by doing work. For example, work is performed when gases evolved in a reaction push back a movable wall (e.g., a piston) against an opposing force, such as that due to the external atmosphere or a weight to which the piston is attached. Another mode of transfer is as heat. Heat is the transfer of energy that occurs as a result of a temperature difference between a system and its surroundings when the two are separated by a diathermic wall (a wall that allows the passage of energy as heat). A metal wall is diathermic; a thermally insulated wall is not. Finally, energy may leave a system as electromagnetic radiation, for example in chemiluminescence (the emission of radiation from molecules in energetically excited states produced in the course of a chemical reaction) or as a result of spectroscopic transitions. We shall concentrate on the first two modes of transfer, work and heat.
At a molecular level, work is the transfer of energy that makes use of or drives the orderly motion of molecules in the surroundings. The uniform motion of the atoms in a piston driven back by expanding gas is an example of orderly molecular motion. In contrast, heat is the transfer of energy that makes use of or causes disorderly motion in the surroundings. When we say that a chemical reaction gives out heat, we mean that energy is leaving the reaction vessel and stimulating thermal motion (random molecular motion) in the surroundings.
The energy of a chemical system is stored in the potential and kinetic energies of the electrons and atomic nuclei. This stored energy is sometimes referred to as chemical energy; however, this is only a shorthand way of referring to the kinetic and potential energies of all the particles in an element or compound.
The internal energy of a system changes when a chemical reaction occurs because the electrons and nuclei settle into different arrangements, as in the change of partnerships of H and O atoms in the reaction 2 H2(g) + O2(g) → 2 H2O(g). The energy released in a chemical reaction can be transferred to the surroundings (and put to use) in a variety of ways regardless of the manner in which the energy accumulated in the first place. Thus, energy may escape as heat and be used to raise the temperature of the surroundings, including raising the temperature of water that is then employed in a turbine to do work. The energy may also escape as work. We have already discussed expansion work, using the example of a piston being driven. The work may be accomplished electrically, as when electrons are driven through an external circuit and used to drive an electric motor.
Atomic nuclei are also centers of energy storage as a result of their internal structures. This energy is released when the nucleons (protons and neutrons) undergo rearrangement and thereby change the strength of their interactions. The changes in energy are so great that they give rise to measurable changes of mass. For all chemical processes, the changes in mass accompanying acquisition or loss of energy are totally negligible.
see also Chemiluminescence; Chemistry and Energy; Electrochemistry; Heat; Physical Chemistry; Spectroscopy; Temperature; Thermodynamics.
Atkins, Peter, and de Paula, Julio (2002). Atkins' Physical Chemistry, 7th edition. New York: Oxford University Press.
Smith, Crosbie (1998). The Science of Energy: A Cultural History of Energy Physics in Victorian Britain. Chicago: University of Chicago Press.
Tipler, Paul Allen (1999). Physics for Scientists and Engineers, 4th edition. New York: W.H. Freeman and Worth Publishers.
Atkins, Peter. "Energy." Chemistry: Foundations and Applications. 2004. Encyclopedia.com. (July 29, 2016). http://www.encyclopedia.com/doc/1G2-3400900173.html
Atkins, Peter. "Energy." Chemistry: Foundations and Applications. 2004. Retrieved July 29, 2016 from Encyclopedia.com: http://www.encyclopedia.com/doc/1G2-3400900173.html
The broadest definition of energy is the ability to do work. Human societies tap into various forms of energy, including chemical energy in biomass, natural gas, coal, and petroleum; nuclear energy in uranium; gravitational energy captured in hydroelectric plants; wind energy; and solar energy. Energy is usually measured in British thermal units (BTUs). A BTU is defined as the amount of heat energy that will raise the temperature of one pound of water by one degree Fahrenheit. In 2005 the world economy obtained about 40 percent of its nonsolar energy from petroleum, about 23 percent each from natural gas and coal, 8 percent total from hydroelectric, wind, and geothermal sources, and about 6 percent from nuclear. Most of this energy is used in the industrialized world, although the most rapid growth in energy use is occurring in the industrializing world, especially China. The largest use of energy by far is for industrial production and transportation.
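The BTU definition can be verified from metric quantities. Using the specific heat of water (about 4,186 J per kilogram per degree Celsius), one pound of water (0.4536 kg) warmed by 1°F (5/9 of a degree Celsius) absorbs roughly 1,055 joules, which is the accepted value of one BTU. All constants below are standard approximate values.

```python
SPECIFIC_HEAT_WATER = 4186  # J per (kg * degree C), approximate
POUND_KG = 0.4536           # kilograms per pound
DEG_F_IN_C = 5 / 9          # one Fahrenheit degree expressed in Celsius degrees

# Energy to warm one pound of water by one degree Fahrenheit:
btu_in_joules = SPECIFIC_HEAT_WATER * POUND_KG * DEG_F_IN_C
print(round(btu_in_joules))  # 1055, matching the accepted value of one BTU
```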
Energy has been a crucial factor in human cultural evolution. The evolution of increasingly complex human societies was driven by the capacity to harness energy. Harnessing energy may have also played a key role in our biological evolution. The large human brain, unique even among primates, has enormous energy requirements. The human brain represents about 2.5 percent of body weight and accounts for about 22 percent of resting metabolic needs. This large energy requirement was met by a much higher proportion of protein in the diet of early humans and the use of fire to predigest meat. The use of fire played a role in the anatomical development of our species—larger brains and shorter guts—and paved the way for further advances in technological and cultural evolution.
Beginning about 10,000 years ago, early agricultural technology harnessed flows of solar energy in the forms of animal-muscle power, water, and wind. With the widespread use of wood for fuel, humans began to tap into stocks of solar energy rather than flows. The use of stocks of energy made it possible to capture ever larger amounts of energy per capita with smaller amounts of effort. Wood, wind, and water power fueled the industrial revolution, which began in the early eighteenth century. In the nineteenth century, ancient solar energy, fossil hydrocarbons in the form of coal, rapidly became the fuel of choice. During the twentieth century, petroleum and natural gas replaced coal as the dominant fuel. Each step in the history of energy use has been characterized by a dominant fuel type that is increasingly flexible and substitutable.
Since our industrial economy depends so heavily on fossil fuels, an obvious question is, “Are we running out of them?” Most economists answer this question with an emphatic “No!” As energy becomes scarce, its price will increase, calling forth substitutes, increasing conservation efforts, and encouraging more exploration for new supplies. Economists point out that past warnings of impending shortages have proved to be greatly exaggerated. Critics of the economic argument counter that the inverse relationship between energy prices and energy demand may be trivially true, but this does not mean that the increasing scarcity of an essential resource like petroleum can be easily accommodated. The economic argument also ignores the geopolitical consequences of the waning of the petroleum age.
A useful supplement to the price-based analysis of economists is the concept of energy return on investment (EROI). This is a measure of how many units of energy can be obtained from a unit of energy invested. If the EROI is less than one, it makes no sense to tap that energy source, no matter how high the price.
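EROI itself is a simple ratio, and the decision rule follows directly from it. The numbers below are hypothetical sources invented for illustration, not figures from the text.

```python
def eroi(energy_returned, energy_invested):
    """Energy return on investment: units of energy obtained per unit spent."""
    return energy_returned / energy_invested

# Hypothetical sources: a conventional oil field vs. a marginal deposit.
print(eroi(100, 5))  # 20.0 -> well worth extracting
print(eroi(9, 10))   # 0.9  -> below 1: extraction consumes more energy than it yields
```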
Although the world uses many types of energy, none of them have the flexibility and high EROI of petroleum. Of paramount concern is when world petroleum production will peak and start to decline. Most predictions of when worldwide oil production will peak are based on variations of a model developed by the geophysicist M. King Hubbert in the 1950s. He created a mathematical model of the pattern of petroleum exhaustion assuming that the total amount of petroleum extracted over time would follow a bell-shaped pattern called a logistic curve. Past experience for individual oil fields shows that once peak production is reached, production tends to fall quite rapidly. A number of petroleum experts argue that technological advances in the past decade or so have extended the peak of the Hubbert curve for specific oil fields, but this has made exhaustion more rapid after the peak occurs. Since oil is limited, policies promoting technology to make more energy available today mean that less will remain for the future.
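Hubbert's model can be sketched in a few lines. Cumulative production follows a logistic curve Q(t) = Q_max / (1 + e^(−k(t − t₀))), so the annual production rate (its derivative) is bell-shaped and peaks at t₀. All parameter values below are illustrative assumptions, not estimates from the literature.

```python
import math

Q_MAX = 2000.0  # assumed ultimately recoverable oil, billion barrels
K = 0.06        # assumed steepness of the logistic curve
T_PEAK = 2010   # assumed peak year (the t0 of the logistic curve)

def cumulative(t):
    """Logistic cumulative production Q(t), in billion barrels."""
    return Q_MAX / (1 + math.exp(-K * (t - T_PEAK)))

def rate(t):
    """Annual production, the derivative of Q(t): dQ/dt = k*Q*(1 - Q/Q_max).
    This is bell-shaped and peaks at T_PEAK, where Q = Q_max / 2."""
    q = cumulative(t)
    return K * q * (1 - q / Q_MAX)

# Production in the assumed peak year exceeds production 30 years on either side.
print(rate(T_PEAK) > rate(T_PEAK - 30) and rate(T_PEAK) > rate(T_PEAK + 30))  # True
```

Fitting the two free parameters (Q_max and k) to historical production data is what actual Hubbert-style forecasts do; the disagreement among the predictions cited below comes largely from disagreement about Q_max.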
Estimates of when world oil production will peak run from 2005 (production has already peaked) to 2030, with most predictions clustering around the years 2010–2012. Predicted consequences of declining oil production range from catastrophic scenarios as agricultural and industrial outputs plummet, to relatively mild scenarios as the world’s economies endure inflation and temporary economic hardships to adjust, to the rosy scenarios of free-market fundamentalists who claim that markets will quickly call forth substitutes and conservation that overcome the scarcity of any particular fuel type.
It is impossible to predict how the world’s economies will adjust to the end of the fossil-fuel age. So far energy policies in the developed and developing worlds have shown little concern for the limited amount of fossil fuels. What happens in the future depends on how much developing economies (especially China) grow and how energy-dependent they become. Also of concern is how the rest of the world will react to the growing concentration of petroleum reserves in politically volatile areas and to the increasingly ominous effects of global climate change.
SEE ALSO Energy Sector; Solar Energy
Hall, Charles, Pradeep Tharakan, John Hallock, et al. 2003. Hydrocarbons and the Evolution of Human Culture. Nature 426: 318–322.
Simmons, Matthew. Various speeches. http://www.simmonscointl.com/research.aspx?Type=msspeeches. A good overview of the evidence for and negative consequences of the oil peak.
Tainter, Joseph. 1988. The Collapse of Complex Societies. Cambridge, U.K.: Cambridge University Press.
John M. Gowdy
"Energy." International Encyclopedia of the Social Sciences. 2008. Encyclopedia.com. (July 29, 2016). http://www.encyclopedia.com/doc/1G2-3045300715.html
Laws and regulations concerning the production and distribution of energy have existed for over one hundred years in the United States. Energy law became recognized as a specialty following the energy crises of the 1970s. It focuses on the production, distribution, conservation, and development of energy resources like coal, oil, natural gas, nuclear power, and hydroelectric power.
In 1876, the U.S. Supreme Court, in Munn v. Illinois, 94 U.S. (Otto) 113, 24 L. Ed. 77, held that "natural monopolies" could be regulated by the government. Munn concerned grain elevators but stood more generally for the principle that the public must be allowed to control private property committed to a use in which the public has an interest. This legal recognition of natural monopolies provides the basis for much of the legal and regulatory control the government exercises over utility companies.
The regulation of energy in the late 1800s was on a local and regional level, and was primarily market driven. The transition from using wood as a primary source of energy to using coal was almost complete, and a second transition from coal to natural gas and oil was beginning.
In 1900, Standard Oil Company controlled 90 percent of the oil market; within a few years, antitrust litigation had reduced its market share to 64 percent. Aside from antitrust enforcement, the federal government was content to let the market control the energy industry. Oil, coal, and natural gas found their greatest structural impediment in the "bottleneck" of distribution—pipelines for oil and natural gas, and railways for coal. The dominant model of energy policy that emerged from this period and existed unchanged until the 1970s was one of support for conventional resources and regulation of industries whose natural monopolies required some government oversight to ensure that their public purpose served a public interest.
On October 17, 1973, the Organization of Petroleum Exporting Countries (OPEC) announced an embargo of oil exports to all countries, including the United States, that were supporting Israel in the Yom Kippur War. Only approximately 10 percent of the United States' oil imports were affected, but the perception of a major oil shortage motivated the next three presidential administrations to exert a strong federal influence over energy.
President Richard M. Nixon created the Federal Energy Office (Exec. Order No. 11,930, 41 Fed. Reg. 32,399) and appointed an "energy czar" to oversee oil supplies. President Gerald R. Ford's administration saw the passage of legislation establishing the Strategic Petroleum Reserve (42 U.S.C.A. § 6234) and the promulgation of minimum fuel-efficiency regulations for automobiles. In 1977, Jimmy Carter's administration created the Department of Energy (42 U.S.C.A. § 7101), which became the framework for the coordination, administration, and execution of a comprehensive national energy program.
The goal of a comprehensive national energy program was achieved with the passage of the National Energy Act of 1978, which consisted of five distinct pieces of legislation. The National Energy Conservation Policy Act (42 U.S.C.A. § 8201 et seq.) set standards and provided financing for conservation in buildings. The Powerplant and Industrial Fuel Use Act (42 U.S.C.A. § 8301 et seq.) encouraged the transition from oil and gas to coal in boilers. The Public Utility Regulatory Policies Act (15 U.S.C.A. § 2601) granted Congress authority over the interstate transmission of electric power. The Natural Gas Policy Act (15 U.S.C.A. § 3301) unified the gas market and promoted the deregulation of the natural gas industry. The Energy Tax Act (26 U.S.C.A. § 1 et seq.) approved tax credits to promote conservation.
The administration of Ronald Reagan marked a significant change in national energy policy, moving away from the Carter administration's centralized, government-regulated energy plan, which had set ambitious goals for market stabilization and energy conservation through government intervention. The Reagan administration favored a more market-driven approach to the same goals. Although unsuccessful in its effort to abolish the Department of Energy, it was able to deregulate the natural gas industry through administrative initiatives (under the Federal Energy Regulatory Commission) and the Natural Gas Wellhead Decontrol Act of 1989 (15 U.S.C.A. § 3301).
The administration of George H. W. Bush also favored a market-driven approach to the regulation of energy, but the Persian Gulf War against Iraq in 1991 required Congress to respond to volatile conditions in the oil-exporting Middle East. The Energy Policy Act of 1992 (42 U.S.C.A. § 13201) addressed issues such as competition among electric power generators and tax credits for wind and biomass energy production systems.
The National Energy Policy Plan, issued in 1995 during Bill Clinton's administration, continued the market-focused approach of the Reagan and Bush administrations. Citing as its primary goal a "sustainable energy policy," the plan states that the "administration's energy policy supports and reinforces the dominant role of the private sector" in achieving this goal.
The mid-1990s focus of market-driven, private sector regulation of energy development, conservation, and distribution may have to change in the years ahead. The energy needs of industrialized nations are intensifying, and the developing countries of the world are increasing their energy demands at a rate of 4.5 percent a year. Oil demand in Asia alone grew 50 percent from 1985 to 1995.
Energy policies in the future are likely to include emphasis on the development of more efficient, sustainable sources of energy. Many countries are already exploring the energy potential of biomass, wind, hydroelectric, and solar power.
Laitos, Jan G., and Joseph P. Tomain. 1992. Energy and Natural Resources Law. St. Paul, Minn.: West.
Miller, Alan S. 1995. "Energy Policy from Nixon to Clinton: From Grand Provider to Market Facilitator." Environmental Law 25.
Reilly, Kathleen C. 1995. "Global Benefits versus Local Concerns: The Need for a Bird's Eye View of Nuclear Energy." Indiana Law Journal 70.
Tomain, Joseph P. 1990. "The Dominant Model of United States Energy Policy." University of Colorado Law Review 61.
"Energy." West's Encyclopedia of American Law. 2005. Encyclopedia.com. (July 29, 2016). http://www.encyclopedia.com/doc/1G2-3437701608.html
energy, in physics, the ability or capacity to do work or to produce change. Forms of energy include heat, light, sound, electricity, and chemical energy. Energy and work are measured in the same units—foot-pounds, joules, ergs, or others, depending on the system of measurement being used. When a force acts on a body, the work performed (and the energy expended) is the product of the force and the distance over which it is exerted.
Potential and Kinetic Energy
Potential energy is the capacity for doing work that a body possesses because of its position or condition. For example, a stone resting on the edge of a cliff has potential energy due to its position in the earth's gravitational field. If it falls, the force of gravity (which is equal to the stone's weight; see gravitation) will act on it until it strikes the ground; the stone's potential energy is equal to its weight times the distance it can fall. A charge in an electric field also has potential energy because of its position; a stretched spring has potential energy because of its condition. Chemical energy is a special kind of potential energy; it is the form of energy involved in chemical reactions. The chemical energy of a substance is due to the condition of the atoms of which it is made; it resides in the chemical bonds that join the atoms in compound substances (see chemical bond).
Kinetic energy is energy a body possesses because it is in motion. The kinetic energy of a body with mass m moving at a velocity v is one half the product of the mass of the body and the square of its velocity, i.e., KE = ½mv². Even when a body appears to be at rest, its atoms and molecules are in constant motion and thus have kinetic energy. The average kinetic energy of the atoms or molecules is measured by the temperature of the body.
The difference between kinetic energy and potential energy, and the conversion of one to the other, is demonstrated by the falling of a rock from a cliff, when its energy of position is changed to energy of motion. Another example is provided in the movements of a simple pendulum (see harmonic motion). As the suspended body moves upward in its swing, its kinetic energy is continuously being changed into potential energy; the higher it goes the greater becomes the energy that it owes to its position. At the top of the swing the change from kinetic to potential energy is complete, and in the course of the downward motion that follows the potential energy is in turn converted to kinetic energy.
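The falling-rock example above is a direct application of the two formulas: the rock's potential energy mgh is converted entirely into kinetic energy ½mv² by the time it strikes the ground. A minimal sketch (the 2 kg mass and 10 m cliff are arbitrary illustrative figures):

```python
import math

g = 9.81  # gravitational acceleration at Earth's surface, m/s^2

def potential_energy(mass_kg, height_m):
    return mass_kg * g * height_m          # PE = mgh (weight times height)

def kinetic_energy(mass_kg, speed_ms):
    return 0.5 * mass_kg * speed_ms ** 2   # KE = 1/2 m v^2

# A 2 kg rock falling from a 10 m cliff: its energy of position
# is converted entirely to energy of motion just before impact.
pe = potential_energy(2.0, 10.0)           # 196.2 J at the cliff edge
v = math.sqrt(2 * g * 10.0)                # impact speed, from PE = KE
print(abs(pe - kinetic_energy(2.0, v)) < 1e-9)  # True: energy is conserved
```

Setting mgh equal to ½mv² and solving gives v = √(2gh), independent of the mass — the same cancellation that makes heavy and light rocks fall alike.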
Conversion and Conservation of Energy
It is common for energy to be converted from one form to another; however, the law of conservation of energy, a fundamental law of physics, states that although energy can be changed in form it can be neither created nor destroyed (see conservation laws). The theory of relativity shows, however, that mass and energy are equivalent and thus that one can be converted into the other. As a result, the law of conservation of energy includes both mass and energy.
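The mass–energy equivalence mentioned here can be made concrete with a one-line calculation; the one-gram figure below is an arbitrary example, not taken from the entry:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def mass_to_energy_joules(mass_kg):
    return mass_kg * C ** 2   # Einstein's E = mc^2

# One gram of matter, fully converted, yields roughly 9 x 10^13 joules:
print(f"{mass_to_energy_joules(0.001):.3e}")
```

The enormous size of c² is why even the partial conversion of matter in a nuclear explosion releases so much energy.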
Many transformations of energy are of practical importance. Combustion of fuels results in the conversion of chemical energy into heat and light. In the electric storage battery chemical energy is converted to electrical energy and conversely. In the photosynthesis of starch, green plants convert light energy from the sun into chemical energy. Hydroelectric facilities convert the kinetic energy of falling water into electrical energy, which can be conveniently carried by wires to its place of use (see power, electric). The force of a nuclear explosion results from the partial conversion of matter to energy (see nuclear energy).
"energy." The Columbia Encyclopedia, 6th ed.. 2016. Encyclopedia.com. (July 29, 2016). http://www.encyclopedia.com/doc/1E1-energy.html
The total chemical energy in a food, as released by complete combustion (in the bomb calorimeter) is gross energy. Allowing for the losses of unabsorbed food in the faeces gives digestible energy. Allowing for loss in the urine due to incomplete combustion in the body (e.g. urea from the incomplete combustion of proteins) gives metabolizable energy. Allowing for the loss due to diet‐induced thermogenesis gives net energy, i.e. the actual amount available for use in the body.
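The chain of deductions described in this entry — gross to digestible to metabolizable to net energy — is a simple sequence of subtractions. A sketch with hypothetical figures (all values in kJ, chosen for illustration only):

```python
def net_energy(gross, faecal_loss, urinary_loss, thermogenesis):
    digestible = gross - faecal_loss             # unabsorbed food lost in faeces
    metabolizable = digestible - urinary_loss    # incomplete combustion (e.g. urea)
    return metabolizable - thermogenesis         # diet-induced thermogenesis

# A hypothetical day's intake:
print(net_energy(10_000, 500, 300, 700))  # 8500 kJ actually available for use
```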
The following factors are used for the energy yields of foods, per gram: protein, 17 kJ (4 kcal); fat, 37 kJ (9 kcal); carbohydrate, 16 kJ (4 kcal); alcohol, 29 kJ (7 kcal); sugar alcohols, 10 kJ (2.4 kcal); organic acids, 13 kJ (3 kcal).
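Applying these per-gram factors to a food's composition is a weighted sum. A short sketch (the snack composition is a made-up example):

```python
# Energy conversion factors from the entry, in kJ per gram:
KJ_PER_GRAM = {
    "protein": 17, "fat": 37, "carbohydrate": 16,
    "alcohol": 29, "sugar_alcohols": 10, "organic_acids": 13,
}

def food_energy_kj(grams_by_component):
    """Total energy yield of a food from its composition in grams."""
    return sum(KJ_PER_GRAM[c] * g for c, g in grams_by_component.items())

# A hypothetical snack: 5 g protein, 10 g fat, 30 g carbohydrate.
print(food_energy_kj({"protein": 5, "fat": 10, "carbohydrate": 30}))  # 935
```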
DAVID A. BENDER. "energy." A Dictionary of Food and Nutrition. 2005. Encyclopedia.com. (July 29, 2016). http://www.encyclopedia.com/doc/1O39-energy.html
en·er·gy / ˈenərjē/ • n. (pl. -gies) 1. the strength and vitality required for sustained physical or mental activity: changes in the levels of vitamins can affect energy and well-being. ∎ a feeling of possessing such strength and vitality. ∎ force or vigor of expression. ∎ (energies) a person's physical and mental powers, typically as applied to a particular task or activity. 2. power derived from the utilization of physical or chemical resources, esp. to provide light and heat or to work machines. 3. Physics the property of matter and radiation that is manifest as a capacity to perform work (such as causing motion or the interaction of molecules): a collision in which no energy is transferred. ∎ a degree or level of this capacity possessed by something or required by a process.
"energy." The Oxford Pocket Dictionary of Current English. 2009. Encyclopedia.com. (July 29, 2016). http://www.encyclopedia.com/doc/1O999-energy.html
Energy means work. It refers to the effort required to move a weight for some distance. The heavier the weight or the longer the distance, the more energy is required. Energy is measured in units called "joules," or sometimes as the heat equivalent to these joules, called "calories." In nutrition, both terms are used. A calorie is the amount of heat needed to warm one gram of water by one degree centigrade. A more convenient unit is the kilocalorie (kcal), which equals one thousand calories. In physical terms, energy has several forms, all of which can be converted into heat. These include potential energy, kinetic energy, chemical energy, and heat energy.
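The unit relationships above convert mechanically. A minimal sketch — the 4.184 kJ per kcal figure is the standard thermochemical conversion, not stated in this entry:

```python
CALORIES_PER_KCAL = 1000   # a kilocalorie is one thousand calories
KJ_PER_KCAL = 4.184        # standard thermochemical conversion factor

def kcal_to_kj(kcal):
    return kcal * KJ_PER_KCAL

def kcal_to_calories(kcal):
    return kcal * CALORIES_PER_KCAL

print(kcal_to_kj(100))        # about 418.4 kJ
print(kcal_to_calories(2.5))  # 2500.0
```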
George A. Bray
(see also: Fats; Krebs Cycle; Nutrition)
Bray, George A.. "Energy." Encyclopedia of Public Health. 2002. Encyclopedia.com. (July 29, 2016). http://www.encyclopedia.com/doc/1G2-3404000301.html
So energetic(al) †powerfully operative; full of energy. XVII. — Gr. energētikós active. energize XVIII.
T. F. HOAD. "energy." The Concise Oxford Dictionary of English Etymology. 1996. Encyclopedia.com. (July 29, 2016). http://www.encyclopedia.com/doc/1O27-energy.html