All modern science relies on measurements of observable properties of specific events or objects. Typically, a single measurement yields a number associated with a unit: for example, the number-unit pair “10 meters” could be the result of a distance measurement. A number-unit pair produced by making a measurement is called a datum; when there is more than one datum, they are called data. The numerical part of a datum records unique information, while the unit tells what sort of phenomenon the information refers to: in the datum “10 meters,” the “10” is information, while “meters” tells us how to interpret that information. (“10 feet” is interpreted differently from “10 meters”; the units tell us that it describes a shorter distance.)
Data produced by measurements usually have some degree of uncertainty. For example, if a ruler that is only accurate to the nearest centimeter is used to measure a distance, then it is not enough to report “10 meters”; a scientist must record that the distance is 10 meters give or take 1 centimeter.
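The pairing of a number, a unit, and an uncertainty can be sketched as a tiny data structure. The `Measurement` class below is an illustrative assumption, not a standard library type:

```python
# A minimal sketch of a datum as a value, an uncertainty, and a unit.
# The Measurement class and its field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Measurement:
    value: float        # the numerical part of the datum
    uncertainty: float  # half-width of the error band, in the same unit
    unit: str           # tells us how to interpret the number

    def __str__(self) -> str:
        return f"{self.value} {self.unit} ± {self.uncertainty} {self.unit}"

# A distance read from a ruler accurate to the nearest centimeter:
d = Measurement(10.0, 0.01, "m")
print(d)  # 10.0 m ± 0.01 m
```

Recording the uncertainty alongside the value, rather than the bare number, is what lets a later reader judge how much weight the datum can bear.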
Without precise measurements whose uncertainties are well-understood, science could not disentangle the web of cause and effect that is the physical world. And although science is far more than the mere accumulation of measurements, without measurement science could not exist. Millions of phenomena are measured throughout all scientific fields: the weight of a mouse, the response time of a human subject, the brightness of a star, the temperature of a breeze.
Historical Background and Scientific Foundations
The development of measurement systems is one of the oldest scientific concerns, but the earliest standardized units were developed primarily for trade and architecture, not for the sake of scientific measurement. As early as 3000 BC, early civilizations had developed certain units as common reference points to provide clarity where exchange of goods or coordination of activities was desired. Three of the most fundamental areas that people began to measure were time, length, and weight.
Most early measurement systems grew out of pre-existing patterns in nature, such as the length of the day or the size of a human foot or arm as a measure for length. These standards of measure were inexact and varied both within and between cultures. Since the 1700s, measurements have become increasingly uniform and exact, with all nations across the globe agreeing (for scientific purposes) on a single measurement system, the metric system. As scientific instrumentation became more complex, allowing for measurements on subatomic and galactic scales, scientists began to make measurements with greater precision than thinkers of earlier ages could have foreseen and of phenomena they could not have imagined.
Ancient Astronomical Calendars
Time was the first quantity for which ancient cultures invented units of measure: all human peoples experience the seasons, lunar cycles, rising and setting sun, and wheeling stars, and are motivated to keep track of them.
Archaeological evidence suggests that even before developing written languages, prehistoric people tracked the passage of time by making marks on pieces of wood, stone, or bone to record the appearance of each full moon. The Sumerians, who lived in the area of the Near East known to archaeologists as Mesopotamia (now part of Iraq), were probably the first civilization to use calendars. Noticing that the cycle of seasons repeated every 12 full moons or so, the Sumerians counted 12 lunar months as a year. Other ancient civilizations developed lunar calendars as well, identifying anywhere from two to five seasons depending on regional climate.
Although the moon's visibility and predictable orbit around Earth made it a strong choice as a basis for marking time, lunar calendars were problematic. Because the full moon appears about every 29.5 days, 12 lunar months contain only 354 days. As a result, lunar calendars fall increasingly out of sync with the seasons as the years pass. Different cultures have found different ways to accommodate the accumulating inconsistency.
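The drift described above is simple arithmetic, and a short sketch (using the figures from the text) shows how quickly a 12-month lunar year falls out of step with the seasons:

```python
# How far a 12-month lunar calendar drifts from the seasonal (solar) year.
SYNODIC_MONTH = 29.5   # days between full moons (figure from the text)
SOLAR_YEAR = 365.25    # approximate days in a seasonal year

lunar_year = 12 * SYNODIC_MONTH           # 354.0 days
drift_per_year = SOLAR_YEAR - lunar_year  # ~11.25 days lost each year

for years in (1, 3, 16):
    lag = years * drift_per_year
    print(f"after {years:2d} years the calendar lags {lag:.1f} days")
```

After roughly 16 years the lag reaches about half a year, so a purely lunar calendar would place midsummer festivals in midwinter: hence the corrective schemes different cultures devised.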
Noticing that certain stars or constellations appeared in the sky at certain times of the year, some ancient astronomers began to use star patterns to measure the passage of time. Notably, the ancient Egyptians replaced their lunar calendar with the world's first solar calendar around the year 2700 BC. The Egyptians began each year with the rising of the star Sirius (now commonly called the Dog Star), which always occurred soon before the annual flooding of the Nile River. The Egyptians divided the year into 12 months of 30 days each, then added five days at the end to bring the calendar year to 365 days.
Ancient Devices for Measuring Time
The measurement of time intervals is an essential aspect of many scientific measurements, especially those involving events or processes. As we have seen, the notion of dividing time into intervals was almost inevitable, given the regular round of sky and seasonal phenomena that all cultures experience and their practical importance; however, such large, approximate units were not themselves useful for most scientific purposes, nor for many practical purposes, especially in cultures with complex economic affairs to coordinate. Devices for objectively marking the passage of smaller units of time—timepieces—were necessary. Around 3500 BC, the Egyptians began constructing obelisks that used the motion of the sun to tell time. An obelisk was a large pillar that cast a shadow during the day. Someone could estimate the time of day by looking at the base of the structure, which was marked off in hours; the position of the obelisk's shadow among the markings indicated the time of day.
The hour was not yet an absolute unit of time, as it is today, but varied from day to day: each hour measured 1/12 the time from sunrise to sunset on that particular day. Because the amount of daylight varies from season to season, these shadow clocks actually measured “temporal hours,” with longer hours in the summer and shorter hours during colder months. However, because Egypt lies near the equator, its sundials were fairly consistent year-round. Other civilizations in the Near East, Asia, and Europe later used sundials to tell time.
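Since a temporal hour is simply one-twelfth of that day's daylight, its length in modern minutes is easy to compute. The daylight figures below are illustrative assumptions for a mid-latitude site, not values from the text:

```python
# A temporal hour is 1/12 of the sunrise-to-sunset interval on a given day.
def temporal_hour_minutes(daylight_hours: float) -> float:
    """Length of one temporal hour, expressed in modern minutes."""
    return daylight_hours * 60 / 12

# Illustrative daylight lengths (assumed values, not from the text):
print(temporal_hour_minutes(14))  # long summer day: 70.0 minutes
print(temporal_hour_minutes(10))  # short winter day: 50.0 minutes
print(temporal_hour_minutes(12))  # equinox: 60.0 minutes
```

The closer a site is to the equator, the less daylight length varies across the year, which is why Egyptian temporal hours stayed fairly uniform.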
To tell time at night or during cloudy weather, the Egyptians invented the water clock around 1400 BC; the Chinese also used this type of clock. A water clock is a bucketlike device with a small hole near the base that allows water to escape. Markings on the inside of the bucket indicated the passing of units of time (such as hours). Because temperature changes cause the water to flow at different rates, water clocks could be imprecise, but they were independent of the seasons, clouds, and geographical location.
Water clocks were the first time-measuring devices that used an internal phenomenon (in this case, water flow) as a standard against which to mark the passage of time—the principle of all true clocks—and were used for many centuries. Italian physicist Galileo Galilei (1564–1642), widely considered the founder of modern physical science, used a type of water clock to measure time intervals while observing the motions of rolling balls and other objects in his laboratory. The equations of motion that he derived from these measurements are basic to modern physics. This illustrates the importance of measurement in the development of new theories in modern science, and the surprisingly important gains that can be made using even simple tools. In contemporary science, however, such simple devices are no longer adequate: extremely accurate measurements of hard-to-detect quantities are often required to distinguish between rival theories, and mechanical measuring devices of great subtlety are needed. The mechanical clock, developed relatively recently, is the direct technological descendant of such devices.
IN CONTEXT: THE PRIZE-WINNING CHRONOMETER
Early weight-driven and pendulum clocks were very inaccurate at sea due to temperature changes and the motion of ships. This was problematic because navigators relied on accurate timekeeping to locate their position at sea; the less accurate the clock, the farther off course a ship could go. In 1714, the British government offered a financial prize to whoever could develop an accurate method to determine a ship's longitude within 30 nautical miles at the end of a six-week voyage. To win the prize, the time-keeping device had to be accurate within three seconds per day, which was more accurate than any pendulum had been on shore. Over the next few years, numerous sailors tried to develop time-keeping devices, but none met the stiff criteria to win the competition.
During the 1720s, after four unsuccessful attempts, carpenter and self-taught instrument-maker John Harrison (1693–1776) built a clock stable enough to withstand rough seas. Called a chronometer, Harrison's prize-winning invention lost only a few seconds in six weeks, earning him 20,000 British pounds (worth about $12 million today). Mechanically speaking, the device was essentially a well-made watch suspended in gimbals (rings connected by bearings) that remain horizontal even when a ship turns. Although advances in chronometry (time measurement) were motivated by economics and navigation, once available they quickly found use in the sciences, especially in astronomy.
Mechanical Clocks and Time Measurement
Time measurement advanced greatly during the 1300s, with the invention of mechanical clocks. The earliest known mechanical clocks were in Milan, Italy, by 1309. Large and made of iron, these earliest clocks had no faces or hands, but relied on an attendant to strike a bell for the hour. In 1335, Milan had the first clocks that struck automatically. Clocks became fixtures in the church bell towers of European cities by the end of the Middle Ages, keeping everyone within earshot on the same time. These early clocks relied on a heavy weight attached to a cord tightly wound on a spool. As gravity caused the spool to unwind, it turned sets of gears that moved hands on the face of the clock. A counterweight at the other end of the cord triggered a hammer that struck a bell when the clock reached the start of a new hour. Early clocks didn't keep very accurate time, losing as much as an hour a day and requiring regular adjustments, but the technology kept improving. The first household clocks appeared by 1400, and timepieces small and sturdy enough to carry easily on the person—pocket-watches—became possible a century later.
Increasingly accurate chronometry made fundamental scientific discoveries possible: for example, by the late 1600s clocks were accurate enough for astronomers to note that the movements of Jupiter's moon Io seemed to shift in time depending on where Earth was in its orbit. Danish astronomer Olaf Roemer (1644–1710) realized that the time-shifts could be explained by the extra time it took light, moving at a finite speed, to reach Earth from Jupiter and its moons when Earth's orbit took it farthest from Jupiter. By using the improved mechanical clocks of his day, Roemer was able to collect data on the time-shift of Io's eclipses that later astronomers could use to make a reasonable estimate of the speed of light.
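Roemer's reasoning can be reproduced with a rough modern calculation. The 22-minute delay for light to cross the diameter of Earth's orbit is the figure traditionally attributed to Roemer, and the orbital diameter is a modern value, so treat this as an illustrative sketch rather than his actual computation:

```python
# Rough Roemer-style estimate of the speed of light: the extra delay in
# Io's eclipse timings corresponds to light crossing Earth's orbit.
ORBIT_DIAMETER_M = 2 * 1.496e11  # two astronomical units, modern value (m)
DELAY_S = 22 * 60                # ~22 minutes, the delay attributed to Roemer

c_estimate = ORBIT_DIAMETER_M / DELAY_S
print(f"estimated speed of light: {c_estimate:.2e} m/s")
```

The result is roughly 2.3 × 10⁸ m/s, within about 25% of the modern value of 2.998 × 10⁸ m/s: remarkable agreement for seventeenth-century clocks and telescopes.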
Following the Babylonian mathematical tradition of dividing a circle into 360 degrees, early round-faced clocks divided hours into 60 minutes and minutes into 60 seconds, establishing a standard still followed today. This made for 3,600 seconds every hour, and 86,400 seconds in a 24-hour day. That method seemed to work until 1820, when a committee of French scientists pointed out that because of Earth's elliptical orbit, the length of one day can vary by a few thousandths of a second from one day to another, depending on the planet's position around the sun. Scientists therefore averaged the length of the day over an entire year, then defined a second as a specific fraction of that average day. This calculation turned the second into an exact scientific measurement in itself rather than a small slice of a larger measurement. The decision seemed to work until 1956, when scientists discovered that Earth's rotation is slowing down slightly, by about 7.3 milliseconds each year. Meanwhile, the solar year is getting shorter by about 5.3 milliseconds each year.
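The sexagesimal arithmetic of the clock face can be checked in a few lines:

```python
# Babylonian-style subdivision of the day, as used on clock faces.
SECONDS_PER_MINUTE = 60
MINUTES_PER_HOUR = 60
HOURS_PER_DAY = 24

seconds_per_hour = SECONDS_PER_MINUTE * MINUTES_PER_HOUR  # 3,600
seconds_per_day = seconds_per_hour * HOURS_PER_DAY        # 86,400

print(seconds_per_hour, seconds_per_day)  # 3600 86400
```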
The invention of the atomic clock during the mid-1900s greatly refined the measurement of time. The atomic clock depends on the vibrational frequency of certain atoms. With the heightened accuracy of such clocks, scientists can measure time by fractions of a second far too small for people to perceive in daily life. Once again, improvements in the ability to measure time gave scientists new power to test scientific theories: for example, several predictions of the theory of relativity about the nature of space and time have been verified using atomic clocks. These include, among many others, the prediction that a clock at the bottom of a tower will run slower than one at the top (because it is in a slightly more intense gravitational field) and that a clock that is moved about relative to a clock stationed on the ground will run more slowly than the stationary clock.
Origins of Distance and Weight Measurement
The measurement of distance and weight, along with the measurement of time, is one of the most fundamental tools of science. Around 2000 BC, the Sumerians began using a cubit as a standard measurement for length. A Sumerian cubit was the distance from a person's middle finger to the elbow, and a “foot” equaled two-thirds of a cubit. In the ancient Near Eastern city of Nippur, a copper bar was a standard measurement tool around 1900 BC. At 3.62 ft (110.35 cm) long (converting to today's measurements), the bar was divided into 4 “feet,” each with 16 “inches”.
Ancient standards for weight were based on small objects light enough for someone to carry. By 2400 BC, the Sumerian civilization was using a base unit of weight called the shekel, about 0.3 oz (8.36 grams). A larger unit, the mina, weighed 60 shekels. During this period, the Egyptians used balancing scales to develop standard measurements for weight. By hanging two pans from opposite ends of a suspended rod, the Egyptians could see whether objects in the two pans weighed the same amount. The Egyptians placed seeds in these simple balances to weigh small items, such as precious gems and metals. The weight of one seed became the official “grain” (a unit still used in the measurement of gunpowder in North America).
The Metric System
Many systems of units for weight, distance, and volume evolved in Europe and elsewhere. Over the centuries, a slow trend toward standardization was driven by the economic benefits of having a common system; this trend also, as we have seen in the case of time measurement, benefited science by making it possible for different researchers to compare results and to replicate each other's experiments based on written descriptions of what was done. The concept of the measurement unit slowly evolved from an improvised marketplace convenience to a system of universally acknowledged quantities allowing rigorous comparisons between observations widely separated in time and space. The peak of this process was reached with the construction of the metric system.
From the time of Charlemagne (Charles the Great; c. 742–814), who ruled France from 771 to 814, numerous European leaders issued decrees that modified regional weights and measures. This legacy made work difficult for travelers and traders, who constantly had to convert back and forth between different systems. Commercial trade, as well as scientific investigation, required more uniformity. As a solution, mathematician Gabriel Mouton (1618–1694) proposed a standard unit of length based on the length of the meridian (an imaginary curved line on Earth's surface running through the poles), which astronomer Jean Picard (1620–1682) had determined to reasonable accuracy in 1670. In the late 1700s, as the Industrial Revolution increased the need for standard-sized parts, Mouton's idea caught on. In 1791, a commission of scientists led by astronomer Joseph Lagrange (1736–1813) tackled the issue. The result was the metric system. (The word derives from “metron,” Greek for “measure.”) Following Mouton's advice, the commission divided the length of the meridian from the North Pole to the equator by 10,000,000 and called that distance a meter. The standard unit for weight, meanwhile, became the gram.
Based on multiples of 10, metric measures allowed for great ease in moving between larger and smaller units. For example, 1,000 times the weight of a gram is a kilogram; 1/100 the length of a meter is a centimeter. France began using the metric system in 1795, and several other European nations switched to metric in the decades that followed.
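Because metric prefixes are pure powers of ten, any conversion reduces to a single multiplication. A minimal sketch (the `convert` helper and its prefix table are illustrative, not a standard library):

```python
# Metric prefix conversion: every prefix is a power of ten relative
# to the base unit, so conversion is one multiply and one divide.
PREFIX_FACTORS = {"kilo": 1e3, "": 1.0, "centi": 1e-2, "milli": 1e-3}

def convert(value: float, from_prefix: str, to_prefix: str) -> float:
    """Convert a value between prefixed forms of the same base unit."""
    return value * PREFIX_FACTORS[from_prefix] / PREFIX_FACTORS[to_prefix]

print(convert(1.0, "kilo", ""))    # 1 kilogram  -> 1000.0 grams
print(convert(1.0, "centi", ""))   # 1 centimeter -> 0.01 meters
print(convert(2.5, "", "milli"))   # 2.5 meters  -> 2500.0 millimeters
```

Contrast this with pre-metric systems (12 inches to a foot, 16 ounces to a pound), where each conversion required its own memorized factor.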
Over time, in response to scientific and industrial needs for ever more precise measurements, the metric system has been altered to make it more exact. For example, in the mid-twentieth century, officials at the International Bureau of Weights and Measures decided that defining the meter in terms of the size of Earth was too imprecise. The meter was temporarily redefined on the basis of the wavelength of light emitted by a particular isotope of the element krypton. In 1983, scientists devised a new basis for the meter: the speed of light in a vacuum. The length of a meter was redefined as the distance that light travels in a vacuum in 1/299,792,458 of a second, based on the 1967 definition of the second. This decision brought the measurement of time into the metric system (almost). Suggestions to develop a metric clock (with 10 hours in a day, 100 minutes in an hour, and 100 seconds in a minute) have never caught on: the Babylonian habit of using factors and multiples of 60 remains embedded in our most advanced chronometers.
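The 1983 definition makes the speed of light exact by fiat, so the meter follows by pure arithmetic, as this short check illustrates:

```python
# Under the 1983 definition the speed of light is exactly 299,792,458 m/s,
# and a meter is the distance light travels in 1/299,792,458 of a second.
C = 299_792_458  # m/s, exact by definition

travel_time = 1 / C           # seconds for light to cross one meter
one_meter = C * travel_time   # recovers one meter (up to rounding)

print(round(one_meter, 9))  # 1.0
```

Note the inversion of roles: the meter is no longer measured against a physical artifact; instead, length is derived from a fixed constant and the atomic definition of the second.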
Modern Cultural Connections
Measurement of fine variations in forces, radiation intensities, and time intervals continues to drive progress in some of the most fundamental questions facing modern physical science. For example, as of 2007 efforts were underway to measure the phenomenon of frame dragging predicted by the theory of general relativity. Frame dragging is a shift in the inertial properties of objects that are near large, spinning masses (such as planets or stars), where inertia (the tendency of an object to remain in its state of motion unless acted upon by a force) is a fundamental property of all matter. Like scores of earlier tests of relativity, these measurement studies (which use satellites orbiting Earth) seek to answer questions about the nature of space, time, and the properties of matter.
On a larger scale, cosmology (the study of the structure of the universe as a whole) is enlivened today by debates about the existence and nature of “dark matter” and “dark energy.” In the late 1990s, measurements of light from very distant exploding stars showed that the expansion of the universe is, contrary to scientists' expectations, accelerating: the energy driving this acceleration is termed “dark energy” because its nature remains mysterious. A number of measurements by satellite-based instruments are being planned that will resolve or at least constrain the nature of dark energy: these include measurements of the matter density of the universe (how much matter it contains, on average, in each unit of its volume) and fine variations in the cosmic microwave background radiation.
Nor are the questions being illuminated by measurement restricted to cosmic physical problems that most people cannot understand. Millions of separate measurements of temperature, together with the traces of past climate preserved in ancient ice layers and other materials, have shown in recent decades that the climate of Earth is changing in response to greenhouse gases released by human activities. Although for some years the correctness of this view was disputed, it has been firmly established by an intense, international program of climate measurement, paleoclimatic proxy measurement, advancing knowledge of climate physics, and computer simulation, with effects on world political views and trends that are still only beginning to be felt.
So important is measurement to the conduct of modern science that a specialized field termed measurement theory has developed. Measurement theory systematically examines the ways in which numbers are assigned to phenomena during the making of scientific measurements. The effects of error and uncertainty are of particular interest in measurement theory, given that imperfection is inescapable in the making of measurements and that entire physical theories may stand or fall on fine variations in measured data.