Measurement seems like a simple subject, on the surface at least; indeed, all measurements can be reduced to just two components: number and unit. Yet one might easily ask, "What numbers, and what units?"—a question that helps bring into focus the complexities involved in designating measurements. As it turns out, some forms of numbers are more useful for rendering values than others; hence the importance of significant figures and scientific notation in measurements. The same goes for units. First, one has to determine what is being measured: mass, length, or some other property (such as volume) that is ultimately derived from mass and length. Indeed, the process of learning how to measure reveals not only a fundamental component of chemistry, but an underlying—if arbitrary and manmade—order in the quantifiable world.
HOW IT WORKS
In modern life, people take for granted the existence of the base-10, or decimal, numeration system—a name derived from the Latin word decem, meaning "ten." Yet there is nothing obvious about this system, which has its roots in the ten fingers used for basic counting. At other times in history, societies have taken the two hands or arms of a person as their numerical frame of reference, and from this developed a base-2 system. There have also been base-5 systems relating to the fingers on one hand, and base-20 systems that took as their reference point the combined number of fingers and toes.
Obviously, there is an arbitrary quality underlying the modern numerical system, yet it works extremely well. The use of decimal fractions (for example, 0.01 or 0.235) is particularly helpful for rendering figures other than whole numbers. Yet decimal fractions are a relatively recent innovation in Western mathematics, dating only to the sixteenth century. In order to be workable, decimal fractions rely on an even more fundamental concept that was not always part of Western mathematics: place-value.
Place-Value and Notation Systems
Place-value is the location of a number relative to others in a sequence, a location that makes it possible to determine the number's value. For instance, in the number 347, the 3 is in the hundreds place, which immediately establishes a value for the number in units of 100. Similarly, a person can tell at a glance that there are 4 units of 10, and 7 units of 1.
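The decomposition of 347 into hundreds, tens, and ones can be sketched in a few lines of Python. This is purely an illustration; the helper name is invented here.

```python
def place_values(n):
    """Decompose a positive integer into (digit, place) pairs,
    e.g. 347 -> 3 hundreds, 4 tens, 7 ones."""
    digits = str(n)
    return [(int(d), 10 ** (len(digits) - i - 1)) for i, d in enumerate(digits)]

print(place_values(347))  # [(3, 100), (4, 10), (7, 1)]
```

Note how each digit's value depends entirely on its position: the same symbol 3 is worth 300 in the hundreds place but only 3 in the ones place.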
Of course, today this information appears to be self-evident—so much so that an explanation of it seems tedious and perfunctory—to almost anyone who has completed elementary-school arithmetic. In fact, however, as with almost everything about numbers and units, there is nothing obvious at all about place-value; otherwise, it would not have taken Western mathematicians thousands of years to adopt a place-value numerical system. And though they did eventually make use of such a system, Westerners did not develop it themselves, as we shall see.
Numeration systems of various kinds have existed since at least 3000 b.c., but the most important number system in the history of Western civilization prior to the late Middle Ages was the one used by the Romans. Rome ruled much of the known world in the period from about 200 b.c. to about a.d. 200, and continued to have an influence on Europe long after the fall of the Western Roman Empire in a.d. 476—an influence felt even today. Though the Roman Empire is long gone and Latin a dead language, the impact of Rome continues: thus, for instance, Latin terms are used to designate species in biology. It is therefore easy to understand how Europeans continued to use the Roman numeral system up until the thirteenth century a.d.—despite the fact that Roman numerals were enormously cumbersome.
The Roman notation system has no means of representing place-value: thus a relatively large number such as 3,000 is shown as MMM, whereas a much smaller number might use many more "places": 438, for instance, is rendered as CDXXXVIII. Performing any sort of calculations with these numbers is a nightmare. Imagine, for instance, trying to multiply these two. With the number system in use today, it is not difficult to multiply 3,000 by 438 in one's head. The problem can be reduced to a few simple steps: multiply 3 by 400, 3 by 30, and 3 by 8; add these products together; then multiply the total by 1,000—a step that requires the placement of three zeroes at the end of the number obtained in the earlier steps.
But try doing this with Roman numerals: it is essentially impossible to perform this calculation without resorting to the much more practical place-value system to which we're accustomed. No wonder, then, that Roman numerals have been relegated to the sidelines, used in modern life for very specific purposes: in outlines, for instance; in ordinal titles (for example, Henry VIII); or in designating the year of a motion picture's release.
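How mechanical the translation between the two systems is can be seen in a minimal Python sketch of a Roman-to-decimal converter (illustrative only; it assumes a well-formed numeral):

```python
# Values of the seven Roman symbols.
ROMAN = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}

def roman_to_int(s):
    """Convert a Roman numeral to an integer, applying the
    subtractive rule (e.g. the C in CD counts as -100)."""
    total = 0
    for i, ch in enumerate(s):
        value = ROMAN[ch]
        if i + 1 < len(s) and value < ROMAN[s[i + 1]]:
            total -= value
        else:
            total += value
    return total

print(roman_to_int('MMM'))        # 3000
print(roman_to_int('CDXXXVIII'))  # 438
```

The converter works symbol by symbol precisely because Roman notation has no place-value: each symbol carries its value wherever it appears.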
The system of counting used throughout much of the world—1, 2, 3, and so on—is the Hindu-Arabic notation system. Sometimes mistakenly referred to as "Arabic numerals," these are most accurately designated as Hindu or Indian numerals. They came from India, but because Europeans discovered them in the Near East during the Crusades (1095-1291), they assumed the Arabs had invented the notation system, and hence began referring to them as Arabic numerals.
Developed in India during the first millennium b.c., Hindu notation represented a vast improvement over any method in use up to or indeed since that time. Of particular importance was a number invented by Indian mathematicians: zero. Until then, no one had considered zero worth representing since it was, after all, nothing. But clearly the zeroes in a number such as 2,000,002 stand for something. They perform a place-holding function: otherwise, it would be impossible to differentiate between 2,000,002 and 22.
Uses of Numbers in Science
Chemists and other scientists often deal in very large or very small numbers, and if they had to write out these numbers every time they discussed them, their work would soon be encumbered by lengthy numerical expressions. For this purpose, they use scientific notation, a method for writing extremely large or small numbers by representing them as a number between 1 and 10 multiplied by a power of 10.
Instead of writing 75,120,000, for instance, the preferred scientific notation is 7.512 · 10⁷. To interpret the value of large multiples of 10, it is helpful to remember that 10 raised to any power n is the same as 1 followed by n zeroes. Hence 10²⁵, for instance, is simply 1 followed by 25 zeroes.
Scientific notation is just as useful—to chemists in particular—for rendering very small numbers. Suppose a sample of a chemical compound weighed 0.0007713 grams. The preferred scientific notation, then, is 7.713 · 10⁻⁴. Note that for numbers less than 1, the power of 10 is a negative number: 10⁻¹ is 0.1, 10⁻² is 0.01, and so on.
Again, there is an easy rule of thumb for quickly assessing the number of decimal places where scientific notation is used for numbers less than 1. Where 10 is raised to any power −n, the decimal point is followed by n places. If 10 is raised to the power of −8, for instance, we know at a glance that the decimal is followed by 7 zeroes and a 1.
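These rules of thumb can be sketched in Python. The helper below is a simple illustration, assuming a nonzero input (exact powers of 10 can occasionally trip up the floating-point logarithm):

```python
import math

def to_scientific(x):
    """Express x as (mantissa, exponent) with 1 <= |mantissa| < 10."""
    exponent = math.floor(math.log10(abs(x)))
    mantissa = x / 10 ** exponent
    return mantissa, exponent

m, e = to_scientific(75120000)
print(f"{m:g} x 10^{e}")   # 7.512 x 10^7

m, e = to_scientific(0.0007713)
print(f"{m:g} x 10^{e}")   # 7.713 x 10^-4
```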
In making measurements, there will always be a degree of uncertainty. Of course, when the standards of calibration (discussed below) are very high, and the measuring instrument has been properly calibrated, the degree of uncertainty will be very small. Yet there is bound to be uncertainty to some degree, and for this reason, scientists use significant figures—numbers included in a measurement, using all certain numbers along with the first uncertain number.
Suppose the mass of a chemical sample is measured on a scale known to be accurate to 10⁻⁵ kg. This is equal to 1/100,000 of a kilogram, or 1/100 of a gram; or, to put it in terms of place-value, the scale is accurate to the fifth place in a decimal fraction. Suppose, then, that an item is placed on the scale, and a reading of 2.13283697 kg is obtained. All the numbers prior to the 6 are certain, and the 6 itself is the first uncertain digit, which by the definition above is retained as the last significant figure. The numbers that follow the 6, however, are not significant, because the scale is not known to be accurate beyond 10⁻⁵ kg.
Thus the measure above should be rendered with 7 significant figures: the whole number 2, and the first 6 decimal places. But if the value is given as 2.132836, this might lead to inaccuracies at some point when the measurement is factored into other equations. The 6, in fact, should be "rounded off" to a 7. Simple rules apply to the rounding off of significant figures: if the digit following the first uncertain number is less than 5, there is no need to round off. Thus, if the measurement had been 2.13283627 kg (note that the 9 was changed to a 2), there is no need to round off, and in this case, the figure of 2.132836 is correct. But since the number following the 6 is in fact a 9, the correct significant figure is 7; thus the total would be 2.132837.
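The rounding rule just described can be sketched with Python's decimal module. This is a minimal illustration, assuming (as in the example) that the scale's accuracy calls for keeping six decimal places:

```python
from decimal import Decimal, ROUND_HALF_UP

def round_measurement(value, places):
    """Round a measurement to a fixed number of decimal places,
    rounding the last kept digit up when the next digit is 5 or more."""
    quantum = Decimal(10) ** -places   # e.g. 0.000001 for six decimal places
    return Decimal(str(value)).quantize(quantum, rounding=ROUND_HALF_UP)

print(round_measurement(2.13283697, 6))  # 2.132837 (the 9 rounds the 6 up to 7)
print(round_measurement(2.13283627, 6))  # 2.132836 (the 2 means no rounding)
```

Using Decimal rather than the built-in float arithmetic avoids binary-representation surprises when rounding decimal fractions.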
Fundamental Standards of Measure
So much for numbers; now to the subject of units. But before addressing systems of measurement, what are the properties being measured? All forms of scientific measurement, in fact, can be reduced to expressions of four fundamental properties: length, mass, time, and electric current. Everything can be expressed in terms of these properties: even the speed of an electron spinning around the nucleus of an atom can be shown as "length" (though in this case, the measurement of space is in the form of a circle or even more complex shapes) divided by time.
Of particular interest to the chemist are length and mass: length is a component of volume, and both length and mass are elements of density. For this reason, a separate essay in this book is devoted to the subject of Mass, Density, and Volume. Note that "length," as used in this most basic sense, can refer to distance along any plane, or in any of the three dimensions—commonly known as length, width, and height—of the observable world. (Time is the fourth dimension.) In addition, as noted above, "length" measurements can be circular, in which case the formula for measuring space requires use of the coefficient π, roughly equal to 3.14.
Standardized Units of Measure: Who Needs Them?
People use units of measure so frequently in daily life that they hardly think about what they are doing. A motorist goes to the gas station and pumps 13 gallons (a measure of volume) into an automobile. To pay for the gas, the motorist uses dollars—another unit of measure, economic rather than scientific—in the form of paper money, a debit card, or a credit card.
This is simple enough. But what if the motorist did not know how much gas was in a gallon, or if the motorist had some idea of a gallon that differed from what the gas station management determined it to be? And what if the value of a dollar were not established, such that the motorist and the gas station attendant had to haggle over the cost of the gasoline just purchased? The result would be a horribly confused situation: the motorist might run out of gas, or money, or both, and if such confusion were multiplied by millions of motorists and millions of gas stations, society would be on the verge of breakdown.
THE VALUE OF STANDARDIZATION TO A SOCIETY.
Actually, there have been times when the value of currency was highly unstable, and the result was near anarchy. In Germany during the early 1920s, for instance, rampant inflation had so badly depleted the value of the mark, Germany's currency, that employees demanded to be paid every day so that they could cash their paychecks before the value went down even further. People made jokes about the situation: it was said, for instance, that when a woman went into a store and left a basket containing several million marks out front, thieves ran by and stole the basket—but left the money. Yet there was nothing funny about this situation, and it paved the way for the nightmarish dictatorship of Adolf Hitler and the Nazi Party.
It is understandable, then, that standardization of weights and measures has always been an important function of government. When Ch'in Shih-huang-ti (259-210 b.c.) united China for the first time, becoming its first emperor, he set about standardizing units of measure as a means of providing greater unity to the country—thus making it easier to rule. On the other hand, the Russian Empire of the late nineteenth century failed to adopt standardized systems that would have tied it more closely to the industrialized nations of Western Europe. The width of railroad tracks in Russia was different from that in Western Europe, and Russia used the old Julian calendar, as opposed to the Gregorian calendar adopted throughout much of Western Europe after 1582. These and other factors made economic exchanges between Russia and Western Europe extremely difficult, and the Russian Empire remained cut off from the rapid progress of the West. Like Germany a few decades later, it became ripe for the establishment of a dictatorship—in this case under the Communists led by V. I. Lenin.
Aware of the important role that standardization of weights and measures plays in the governing of a society, the U.S. Congress in 1901 established the Bureau of Standards. Today it is known as the National Institute of Standards and Technology (NIST), a nonregulatory agency within the Commerce Department. As will be discussed at the conclusion of this essay, the NIST maintains a wide variety of standard definitions regarding mass, length, temperature and so forth, against which other devices can be calibrated.
THE VALUE OF STANDARDIZATION TO SCIENCE.
What if a nurse, rather than carefully measuring a quantity of medicine before administering it to a patient, simply gave the patient an amount that "looked right"? Or what if a pilot, instead of calculating fuel, distance, and other factors carefully before taking off from the runway, merely used a "best estimate"? Obviously, in either case, disastrous results would be likely to follow. Though neither nurses nor pilots are considered scientists, both use science in their professions, and those potentially disastrous results serve to highlight the crucial importance of standardized measurements in science.
Standardized measurements are necessary to a chemist or any scientist because, in order for an experiment to be useful, it must be possible to duplicate the experiment. If the chemist does not know exactly how much of a certain element he or she mixed with another to form a given compound, the results of the experiment are useless. In order to share information and communicate the results of experiments, then, scientists need a standardized "vocabulary" of measures.
This "vocabulary" is the International System of Units, known as SI for its French name, Système International d'Unités. By international agreement, the worldwide scientific community adopted what came to be known as SI at the 9th General Conference on Weights and Measures in 1948. The system was refined at the 11th General Conference in 1960, and given its present name; but in fact most components of SI belong to a much older system of weights and measures developed in France during the late eighteenth century.
SI vs. the English System
The United States, as almost everyone knows, is the wealthiest and most powerful nation on Earth. On the other hand, Brunei—a tiny nation-state on the island of Borneo in the Malay Archipelago—enjoys considerable oil wealth, but is hardly what anyone would describe as a superpower. Yemen, though it is located on the Arabian peninsula, does not even possess significant oil wealth, and is a poor, economically developing nation. Finally, Burma in Southeast Asia can hardly be described even as a "developing" nation: ruled by an extremely repressive military regime, it is one of the poorest nations in the world.
So what do these four have in common? They are the only nations on the planet that have failed to adopt the metric system of weights and measures. The system used in the United States is called the English system, though it should more properly be called the American system, since England itself has joined the rest of the world in "going metric." Meanwhile, Americans continue to think in terms of gallons, miles, and pounds; yet American scientists use the much more convenient metric units that are part of SI.
HOW THE ENGLISH SYSTEM WORKS (OR DOES NOT WORK).
Like methods of counting described above, most systems of measurement in premodern times were modeled on parts of the human body. The foot is an obvious example of this, while the inch originated from the measure of a king's first thumb joint. At one point, the yard was defined as the distance from the nose of England's King Henry I to the tip of his outstretched middle finger.
Obviously, these are capricious, downright absurd standards on which to base a system of measure. They involve things that change, depending for instance on whose foot is being used as a standard. Yet the English system developed in this willy-nilly fashion over the centuries; today, there are literally hundreds of units—including three types of miles, four kinds of ounces, and five kinds of tons, each with a different value.
What makes the English system particularly cumbersome, however, is its lack of convenient conversion factors. For length, there are 12 inches in a foot, but 3 feet in a yard, and 1,760 yards in a mile. Where weight is concerned, there are 16 ounces in a pound (assuming one is talking about an avoirdupois ounce), but 2,000 pounds in a ton. And, to further complicate matters, there are all sorts of other units of measure developed to address a particular property: horsepower, for instance, or the British thermal unit (Btu).
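The irregularity becomes obvious when the factors are chained in code. This minimal sketch uses only the length factors quoted above:

```python
# English-system length factors: none of them a power of 10.
INCHES_PER_FOOT = 12
FEET_PER_YARD = 3
YARDS_PER_MILE = 1760

def miles_to_inches(miles):
    """Chain the irregular conversion factors to reach inches."""
    return miles * YARDS_PER_MILE * FEET_PER_YARD * INCHES_PER_FOOT

print(miles_to_inches(1))  # 63360
```

Converting kilometers to millimeters, by contrast, requires only shifting a decimal point six places.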
THE CONVENIENCE OF THE METRIC SYSTEM.
Great Britain, though it has long since adopted the metric system, in 1824 established the British Imperial System, aspects of which are reflected in the system still used in America. This is ironic, given the desire of early Americans to distance themselves psychologically from the empire to which their nation had once belonged. In any case, England's great worldwide influence during the nineteenth century brought about widespread adoption of the English or British system in colonies such as Australia and Canada. This acceptance had everything to do with British power and tradition, and nothing to do with convenience. A much more usable standard had actually been embraced 25 years before in a land that was then among England's greatest enemies: France.
During the period leading up to and following the French Revolution of 1789, French intellectuals believed that every aspect of existence could and should be treated in highly rational, scientific terms. Out of these ideas arose much folly, particularly during the Reign of Terror in 1793, but one of the more positive outcomes was the metric system. This system is decimal—that is, based entirely on the number 10 and powers of 10, making it easy to relate one figure to another. For instance, there are 100 centimeters in a meter and 1,000 meters in a kilometer.
PREFIXES FOR SIZES IN THE METRIC SYSTEM.
For designating smaller values of a given measure, the metric system uses principles much simpler than those of the English system, with its irregular divisions of (for instance) gallons, quarts, pints, and cups. In the metric system, one need only use a simple Greek or Latin prefix to designate that the value is multiplied by a given power of 10. In general, the prefixes for values greater than 1 are Greek, while Latin is used for those less than 1. These prefixes, along with their abbreviations and respective values, are as follows. (The symbol μ for "micro" is the Greek letter mu.)
The Most Commonly Used Prefixes in the Metric System
- giga (G) = 10⁹ (1,000,000,000)
- mega (M) = 10⁶ (1,000,000)
- kilo (k) = 10³ (1,000)
- deci (d) = 10⁻¹ (0.1)
- centi (c) = 10⁻² (0.01)
- milli (m) = 10⁻³ (0.001)
- micro (μ) = 10⁻⁶ (0.000001)
- nano (n) = 10⁻⁹ (0.000000001)
The use of these prefixes can be illustrated by reference to the basic metric unit of length, the meter. For long distances, a kilometer (1,000 m) is used; on the other hand, very short distances may require a centimeter (0.01 m) or a millimeter (0.001 m) and so on, down to a nanometer (0.000000001 m). Measurements of length also provide a good example of why SI includes units that are not part of the metric system, though they are convertible to metric units. Hard as it may be to believe, scientists often measure lengths even smaller than a nanometer—the width of an atom, for instance, or the wavelength of a light ray. For this purpose, they use the angstrom (Å or A), equal to 0.1 nanometers.
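Because every prefix is a power of 10, converting a prefixed unit to the base unit is a single multiplication. A minimal Python sketch (with 'u' standing in for the Greek letter mu):

```python
# Metric prefixes as powers of 10.
PREFIXES = {
    'G': 1e9, 'M': 1e6, 'k': 1e3,
    'd': 1e-1, 'c': 1e-2, 'm': 1e-3,
    'u': 1e-6, 'n': 1e-9,
}

def to_meters(value, prefix=''):
    """Convert a prefixed length (e.g. 5 km) to meters."""
    return value * PREFIXES.get(prefix, 1.0)

print(to_meters(5, 'k'))   # 5000.0
print(to_meters(25, 'c'))  # 0.25
print(to_meters(1, 'n'))   # 1e-09
```

The same table serves every metric quantity, which is exactly what the English system, with its separate factors for every unit, cannot offer.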
Calibration and SI Units
THE SEVEN BASIC SI UNITS.
The SI uses seven basic units, representing length, mass, time, temperature, amount of substance, electric current, and luminous intensity. The first four parameters are a part of everyday life, whereas the last three are of importance only to scientists. "Amount of substance" is the number of elementary particles in matter. This is measured by the mole, a unit discussed in the essay on Mass, Density, and Volume. Luminous intensity, or the brightness of a light source, is measured in candelas, while the SI unit of electric current is the ampere.
The other four basic units are the meter for length, the kilogram for mass, the second for time, and the kelvin for temperature. The last of these is discussed in the essay on Temperature; as for meters, kilograms, and seconds, they will be examined below in terms of the means used to define each.
Calibration is the process of checking and correcting the performance of a measuring instrument or device against the accepted standard. America's preeminent standard for the exact time of day, for instance, is the United States Naval Observatory in Washington, D.C. Thanks to the Internet, people all over the country can easily check the exact time, and calibrate their clocks accordingly—though, of course, the resulting accuracy is subject to factors such as the speed of the Internet connection.
There are independent scientific laboratories responsible for the calibration of certain instruments ranging from clocks to torque wrenches, and from thermometers to laser-beam power analyzers. In the United States, instruments or devices with high-precision applications—that is, those used in scientific studies, or by high-tech industries—are calibrated according to standards established by the NIST.
Rather than keeping physical models such as a meter stick on hand, the NIST maintains its standards in the form of definitions. This accords with the methods of calibration accepted by scientists today: rather than use a standard that might vary—a meter stick, for instance, could be bent imperceptibly—they rely on unvarying standards based on specific behaviors in nature.
METERS AND KILOGRAMS.
A meter, equal to 3.281 feet, was at one time defined in terms of Earth's size. Using an imaginary line drawn from the Equator to the North Pole through Paris, this distance was divided into 10 million meters. Later, however, scientists came to the realization that Earth is subject to geological changes, and hence any measurement calibrated to the planet's size could not ultimately be reliable. Today the length of a meter is calibrated according to the amount of time it takes light to travel through that distance in a vacuum (an area of space devoid of air or other matter). The official definition of a meter, then, is the distance traveled by light in the interval of 1/299,792,458 of a second.
One kilogram is, on Earth at least, equal to 2.21 pounds; but whereas the kilogram is a unit of mass, the pound is a unit of weight, so the correspondence between the units varies depending on the gravitational field in which a pound is measured. Yet the kilogram, though it represents a much more fundamental property of the physical world than a pound, is still a somewhat arbitrary form of measure in comparison to the meter as it is defined today.
Given the desire for an unvarying standard against which to calibrate measurements, it would be helpful to find some usable but unchanging standard of mass; unfortunately, scientists have yet to locate such a standard. Therefore, the value of a kilogram is calibrated much as it was two centuries ago. The standard is a cylinder of platinum-iridium alloy, known as the International Prototype Kilogram, housed at Sèvres, near Paris, in France.
A second, of course, is a unit of time as familiar to non-scientifically trained Americans as it is to scientists and people schooled in the metric system. Yet although the second is an SI base unit, its origin has nothing to do with the metric system. The means of measuring time on Earth are not "metric": Earth revolves around the Sun approximately every 365.25 days, and there is no way to turn this into a multiple of 10 without creating a situation even more cumbersome than the English units of measure.
The week and the month are units based on cycles of the Moon, though they are no longer tied to lunar cycles, because a lunar year would soon fall out of phase with a year based on Earth's revolution around the Sun. The continuing use of weeks and months as units of time is based on tradition—as well as the essential need of a society to divide up a year in some way.
A day, of course, is based on Earth's rotation, but the units into which the day is divided—hours, minutes, and seconds—are purely arbitrary, and likewise based on traditions of long standing. Yet scientists must have some unit of time to use as a standard, and, for this purpose, the second was chosen as the most practical. The SI definition of a second, however, is not simply one-sixtieth of a minute or anything else so strongly influenced by the variation of Earth's movement.
Instead, the scientific community chose as its standard the atomic vibration of a particular isotope of the metal cesium, cesium-133. The vibration of this atom is presumed to be unvarying, because the properties of elements—unlike the size of Earth or its movement—do not change. Today, a second is defined as the amount of time it takes for a cesium-133 atom to vibrate 9,192,631,770 times. Expressed in scientific notation, with significant figures, this is 9.19263177 · 10⁹.
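As a small worked example, the duration of a single cesium-133 vibration follows directly from this definition, and scientific notation is the natural way to express it:

```python
CESIUM_HZ = 9_192_631_770  # cesium-133 vibrations per second, by SI definition

period = 1 / CESIUM_HZ     # duration of one vibration, in seconds
print(f"{period:.4e}")     # 1.0878e-10
```

A single vibration lasts roughly a tenth of a nanosecond, which is why such a standard can define the second so precisely.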
KEY TERMS
CALIBRATION: The process of checking and correcting the performance of a measuring instrument or device against a commonly accepted standard.
SCIENTIFIC NOTATION: A method used by scientists for writing extremely large or small numbers by representing them as a number between 1 and 10 multiplied by a power of 10. Instead of writing 0.0007713, the preferred scientific notation is 7.713 · 10⁻⁴.
SI: An abbreviation of the French term Système International d'Unités, or International System of Units. Based on the metric system, SI is the system of measurement units in use by scientists worldwide.
SIGNIFICANT FIGURES: Numbers included in a measurement, using all certain numbers along with the first uncertain number.
British mathematician and physicist William Thomson (1824–1907), otherwise known as Lord Kelvin, indicated the importance of measurement to science:
When you can measure what you are speaking about and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely in your thoughts advanced to the state of science, whatever the matter may be.
Possibly the most striking application of Kelvin's words is to the explanation of combustion by the French chemist Antoine Lavoisier (1743–1794). Combustion was confusing to scientists of the time because some materials, such as wood, seemed to decrease in mass on burning: Ashes weigh less than wood. In contrast, others, including iron, increased in mass: Rust weighs more than iron. Lavoisier was able to explain that combustion results when oxygen in the air unites with the material being burned, after careful measurement of the masses of the reactants—air and the material to be burned—and those of the products. Because Lavoisier was careful to capture all products of combustion, it was clear that the reason wood seemed to lose mass on burning was because one of its combustion products is a gas, carbon dioxide, which had been allowed to escape.
Lavoisier's experiments and his explanations of them and of the experiments of others are often regarded as the beginning of modern chemistry. It is not an exaggeration to say that modern chemistry is the result of careful measurement.
Most people think of measurement as a simple process. One simply finds a measuring device, uses it on the object to be measured, and records the result. Careful scientific measurement is more involved than this and must be thought of as consisting of four steps, each one of which is discussed here: choosing a measuring device, selecting a sample to be measured, making a measurement, and interpreting the results.
Choosing a Measuring Device
The measuring device one chooses may be determined by the devices available and by the object to be measured. For example, if it were necessary to determine the mass of a coin, obviously inappropriate measuring devices would include a truck scale (reading in units of 20 pounds, with a 10-ton capacity), bathroom scale (in units of 1 pound, with a 300-pound capacity), and baby scale (in units of 0.1 ounce, with a 30-pound capacity). None of these is capable of determining the mass of so small an object. Possibly useful devices include a centigram balance (reading in units of 0.01 gram, with a 500-gram capacity), milligram balance (in units of 0.001 gram, with a 300-gram capacity), and analytical balance (in units of 0.00001 gram, with a 100-gram capacity). Even within this limited group of six instruments, those that are suitable differ if the object to be measured is an approximately one-kilogram book instead of a coin. Then only the bathroom scale and baby scale will suffice.
In addition, it is essential that the measuring device provide reproducible results. A milligram balance that yields successive measurements of 3.012, 1.246, 8.937, and 6.008 grams for the mass of the same coin is clearly faulty. One can check the reliability of a measuring device by measuring a standard object, in part to make sure that measurements are reproducible. A common measuring practice is to intersperse samples of known value within a group of many samples to be measured. When the final results are tallied, incorrect values for the known samples indicate some fault, which may be that of the measuring device, or that of the experimenter. In the example of measuring the masses of different coins, one would include several "standard" coins, the mass of each being very well known.
Selecting a Sample
There may be no choice of sample because the task at hand may be simply that of measuring one object, such as determining the mass of a specific coin. If the goal is to determine the mass of a specific kind of coin, such as a U.S. penny, there are several questions to be addressed, including the following: Are uncirculated or worn coins to be measured? Worn coins may have less mass because copper has worn off, or more mass because copper oxide weighs more than copper and dirt also adds mass. Are the coins of just one year to be measured? Coin mass may differ from year to year. How many coins should be measured to obtain a representative sample? It is likely that there is slight variation in mass among coins, and a large enough number of coins should be measured to encompass that variation. How many sources (banks or stores) should be visited to obtain samples? Different batches of new coins may be sent to different banks; circulated coins may be used mostly in vending machines and show more wear as a result.
The questions asked depend on the type of sample to be measured. If the calorie content of breakfast cereal is to be determined, the sampling questions include how many factories to visit for samples, whether to sample unopened or opened boxes of cereal, and the date when the cereal was manufactured, asked for much the same reasons that similar questions were advanced about coins. In addition, other questions come to mind. How many samples should be taken from each box? From where in the box should samples be taken? Might samples of small flakes have a different calorie content than samples of large flakes?
These sampling questions are often the most difficult to formulate but they are also the most important to consider in making a measurement. The purpose of asking them is to obtain a sample that is as representative as possible of the object being measured, without repeating the measurement unnecessarily. Obviously, a very exact average mass of the U.S. penny can be obtained by measuring every penny in circulation. This procedure would be so time-consuming that it is impractical, in addition to being expensive.
Making a Measurement
As mentioned above, making a measurement includes verifying that the measuring device yields reproducible results, typically by measuring standard samples. Another reason for measuring standard samples is to calibrate the measuring instrument. For example, a common method to determine the viscosity of a liquid—its resistance to flow—requires knowing the density of that liquid and the time that it takes for a definite volume of liquid to flow through a thin tube, within a device called a viscometer. It is very difficult to construct duplicate viscometers that have exactly the same length and diameter of that tube. To overcome this natural variation, a viscometer is calibrated by timing the flow of a pure liquid whose viscosity is known—such as water—through it. Careful calibration involves timing the flow of a standard volume of more than one pure liquid.
Calibration not only accounts for variations in the dimensions of the viscometer. It also compensates for small variations in the composition of the glass of which the viscometer is made, small differences in temperatures, and even differences in the gravitational acceleration due to different positions on Earth. Finally, calibration can compensate for small variations in technique from one experimenter to another.
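The relative-calibration idea can be made concrete. In the sketch below, the working equation for an Ostwald-type viscometer is standard (the tube geometry cancels when flow times are compared against a reference liquid of known viscosity), but every numeric value, including the water reference data and both flow times, is illustrative.

```python
# Relative calibration of an Ostwald-type viscometer: the tube geometry
# cancels when flow times are compared with a reference liquid of known
# viscosity. The working equation is standard; every number below
# (reference data for water at 20 C, both flow times) is illustrative.

def viscosity(rho, t, rho_ref, t_ref, eta_ref):
    """eta = eta_ref * (rho * t) / (rho_ref * t_ref)."""
    return eta_ref * (rho * t) / (rho_ref * t_ref)

# Calibrate with water (density 0.9982 g/mL, viscosity 1.002 mPa s at 20 C),
# which takes 100.0 s to drain; the unknown liquid takes 60.0 s.
eta = viscosity(rho=0.789, t=60.0, rho_ref=0.9982, t_ref=100.0, eta_ref=1.002)
print(round(eta, 3))  # viscosity of the unknown liquid, in mPa s
```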
These variations between experimenters are of special concern. Different experimenters can obtain very different values when measuring the same sample. The careful experimenter takes care to prevent bias or difference in technique from being reflected in the final result. Methods of prevention include attempting to measure different samples without knowing the identity of each sample. For instance, if the viscosities of two colorless liquids are to be measured, several different aliquots of each liquid will be prepared, the aliquots will be shuffled, and each aliquot will be measured in order. As much of the measurement as possible will be made mechanically. Rather than timing flow with a stopwatch, it is timed with an electronic device that starts and stops as liquid passes definite points.
Finally, the experimenter makes certain to observe the measurement the same way for each trial. When a length is measured with a meter stick or a volume is measured with a graduated cylinder, the eye of the experimenter is in line with or at the same level as the object being measured to avoid parallax. When using a graduated device, such as a thermometer, meter stick, or graduated cylinder, the measurement is estimated one digit more finely than the finest graduation. For instance, if a thermometer is graduated in degrees, 25.4°C (77.7°F) would be a reasonable measurement made with it, with the ".4" estimated by the experimenter.
Each measurement is recorded as it is made. It is important not to trust one's memory. In addition, it is important to write down the measurements made, not the results derived from them. For instance, if the mass of a sample of sodium chloride is determined on a balance, one will first obtain the mass of a container, such as 24.789 grams, and then the mass of the container with the sodium chloride present, such as 32.012 grams. It is important to record both of these masses and not just their difference, the mass of sodium chloride, 7.223 grams.
Typically, the results of a measurement involve many values, the observations of many trials. It is tempting to discard values that seem quite different from the others. This is an acceptable course of action if there is good reason to believe that the errant value was improperly measured. If the experimenter kept good records while measuring, notations made during one or more trials may indicate that an individual value was poorly obtained—for instance, by not zeroing or leveling a balance, neglecting to read the starting volume in a buret before titration, or failing to cool a dried sample before obtaining its mass.
Simply discarding a value based on its deviation from other values, without sound experimental reasons for doing so, may lead to misleading results, besides being unjustified. Consider the masses of several pennies determined with a milligram balance to be: 3.107, 3.078, 3.112, 2.911, 3.012, 3.091, 3.055, and 2.508 grams. Discarding the last mass because of its deviation would obscure the fact that post-1982 pennies have a zinc core with copper cladding (a total of about 2.4 percent copper), whereas pre-1982 pennies are composed of an alloy that is 95 percent copper. There are statistical tests that help in deciding whether to reject a specific value.
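One such statistical test is Dixon's Q test, sketched below for the suspect low value among the penny masses quoted above; the 95% critical value for eight observations, 0.526, is taken from standard tables.

```python
# Dixon's Q test for the suspect low value among the penny masses quoted
# above. Q = (gap to nearest neighbor) / (total range); the 95% critical
# value for n = 8 observations, 0.526, is taken from standard tables.

def dixon_q_low(values):
    """Q statistic for the smallest value in the data set."""
    s = sorted(values)
    return (s[1] - s[0]) / (s[-1] - s[0])

masses = [3.107, 3.078, 3.112, 2.911, 3.012, 3.091, 3.055, 2.508]

q = dixon_q_low(masses)
print(round(q, 3))   # 0.667
print(q > 0.526)     # True: statistically, 2.508 g qualifies as an outlier
```

Note that the test flags 2.508 g as an outlier even though it is a genuine pre-1982 penny, which is exactly the article's caution: a statistical test can justify rejection only in combination with sound experimental reasoning.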
It is cumbersome, however, to report all the values that have been measured. Reporting solely the average or mean value gives no indication of how carefully the measurement has been made or how reproducible the repeated measurements are. Care in measurement is implied by the number of significant figures reported; this corresponds to the number of digits to
which one can read the measuring devices, with one digit beyond the finest graduation, as indicated earlier.
The reproducibility of measurements is a manifestation of their precision. Precision is easily expressed by citing the range of the results; a narrow range indicates high precision. Other methods of expressing precision include relative average deviation and standard deviation. Again, a small value of either deviation indicates high precision; repeated measurements are apt to replicate the values of previous ones.
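The three precision measures just named are easy to compute; a minimal sketch using illustrative penny-mass data (the relative average deviation is quoted here in parts per thousand, a common but not universal convention):

```python
import statistics

# Range, relative average deviation, and sample standard deviation for a
# set of repeated measurements (the masses here are illustrative).
data = [3.107, 3.078, 3.112, 3.091, 3.055]

value_range = max(data) - min(data)
mean = statistics.fmean(data)
avg_dev = sum(abs(x - mean) for x in data) / len(data)
rel_avg_dev = avg_dev / mean * 1000   # in parts per thousand (one convention)
std_dev = statistics.stdev(data)      # sample standard deviation

print(round(value_range, 3))   # 0.057
print(round(rel_avg_dev, 1))   # 5.7
print(round(std_dev, 4))       # 0.0231
```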
When several different quantities are combined to obtain a final value—such as combining flow time and liquid density to determine viscosity—standard propagation-of-error techniques are employed to calculate the deviation in the final value from the deviations in the different quantities.
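For products and quotients, such as a viscosity computed from density and flow time, the standard propagation rule is that relative uncertainties of independent quantities add in quadrature; the numeric values below are illustrative.

```python
import math

# Propagation of error for a product (e.g., viscosity ~ density x flow time):
# relative uncertainties of independent quantities add in quadrature.
# All numeric values are illustrative.

def product_rel_uncertainty(*rel_errs):
    """Relative uncertainty of a product or quotient of independent quantities."""
    return math.sqrt(sum(r * r for r in rel_errs))

rho, d_rho = 0.998, 0.001   # density (g/mL) and its uncertainty
t, d_t = 100.0, 0.4         # flow time (s) and its uncertainty

rel = product_rel_uncertainty(d_rho / rho, d_t / t)
print(round(rel * 100, 2))  # percent uncertainty in the product
```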
Both errors and deviations combine in the same way when several quantities are combined, even though error and deviation are quite different concepts. As mentioned above, deviation indicates how reproducible successive measurements are. Error is a measure of how close an individual value—or an average—is to an accepted value of a quantity. A measurement with small error is said to be accurate. Often, an experimenter will believe that high precision indicates low error. This frequently is true, but very precise measurements may have a uniform error, known as a systematic error. An example would be a balance that is not zeroed, resulting in masses that are uniformly high or low.
The goal of careful measurement ultimately is to determine an accepted value. Careful measurement technique—including choosing the correct measuring device, selecting a sample to be measured, making a measurement, and interpreting the results—helps to realize that goal.
see also International System of Units; Lavoisier, Antoine.
Robert K. Wismer
Youden, W. J. (1991). Experimentation and Measurement. NIST Special Publication 672. Washington, DC: National Institute of Standards and Technology.
"Measurement." Chemistry: Foundations and Applications. Encyclopedia.com. (August 17, 2017). http://www.encyclopedia.com/science/news-wires-white-papers-and-books/measurement-0
Measurement is the evaluation or estimation of degree, extent, dimension, or capacity in relation to certain standards (i.e., units of measurement). As one of the most important inventions in human history, the process of measuring involves every aspect of our lives, such as time, mass, length, and space. The Greeks first developed the “foot” as their fundamental unit of length during the fourth millennium BCE. The ancient peoples of Mesopotamia, Egypt, and the Indus Valley seem to have all created systems of weight around the same period. Zero, the crucial number in the history of measurement, was first regarded as a true number by Aristotle.
The ancient Egyptians first developed a systematic method of measuring objects, which they used in the construction of pyramids. During the long period during which their civilization thrived in northeastern Africa, they also cultivated the earliest thoughts on earth measurement: geometry. Euclid of Alexandria (c. 330–c. 275 BCE), a Greek mathematician who lived in Egypt and who is regarded as the “father of geometry,” provided the proofs of geometric rules that Egyptians had devised in building their monuments. His most famous work, Elements, covers the basic definitions, postulates, propositions, and proofs of mathematical and geometric theorems. Euclid’s Elements has proven instrumental in the development of modern science and measurement.
Another great mathematician who contributed to modern measurement was Karl Friedrich Gauss. He was born on April 30, 1777, in Brunswick, Germany. At age twenty-four, Gauss published a brilliant work, Disquisitiones Arithmeticae, in which he established basic concepts and methods of number theory. In 1801, Gauss developed the method of least squares in calculating the orbital component of the motion of celestial bodies with high accuracy. Since that time the method of least squares has been the most widely used method in all of science to estimate the impact of measurement error. He also showed that the method is closely tied to the assumption of a bell-shaped, normally distributed error curve, a result now associated with the Gauss-Markov theorem.
Among all science and social science disciplines, there are three broad measurement theories (Michell 1986, 1990). The first and most commonly used is the classical theory of measurement. Measurement is defined by the magnitudes of the quantity and expressed as real numbers. An object’s quantitative properties are estimated in relation to one another, and ratios of quantities can be determined by the unit of measurement. The classical concept of measurement can be traced back to early theorists and mathematicians, including Isaac Newton and Euclid. The classical approach assumes that the underlying reality exists, but that only quantitative attributes are measurable; the meaningfulness of scientific theories can be supported only by the empirical relationships among various measurements.
The second theory, the representational approach, defines measurement as “the correlation of numbers and entities that are not numbers” (Nagel 1960, p. 121). For example, IQ scores can be used to measure intelligence, and the Likert scale measures personal attitudes based on a set of statements. The representational approach assumes that a reality exists and can be measured, and the goal of science is to understand this reality. However, the representational approach does not insist that only quantitative properties are measurable. Instead, measurements can be used to reflect differences at multiple levels.
Unlike the classical and representational approaches, the third approach, called the operational approach, avoids the assumption of objective reality. Instead, it emphasizes only the precisely specified operational process, such as the measurement of reliability and validity. The main concern of scientific theories is only the relationships indicated by the measurements rather than the distance between the reality and measures.
According to the different properties and relationships of the numbers, there are four different levels of measurement: nominal, ordinal, interval, and ratio. In nominal (also called categorical) measurement, names or symbols are assigned to objects, and this assignment is determined by the similarity of the to-be-measured values or attributes. The categories of assignment in most situations are defined arbitrarily, for instance, numbers assigned to individual marital status: single = 1, married = 2, separated = 3, divorced = 4, and so on, or to religious preference: Christian = 1, Jewish = 2, Muslim = 3, Buddhist = 4, and so on.
In ordinal measurement, the number assigned to the objects based on their attributes reflects an order relation among them. Examples include grades for academic performance (A, B, C …), the results of sporting events and the awarding of gold, silver, and bronze medals, and many measurements in psychology and other social science disciplines.
Interval measurements have all the features of ordinal measurements, but in addition the difference between the numbers reflects the equivalent interval of the attributes being measured. This property makes comparison among different measures of an attribute or characteristic meaningful and operations such as addition and subtraction possible. Temperature in Fahrenheit or Celsius degrees, calendar dates, and standardized intelligence tests (IQ) are a few examples of interval measurements.
In ratio measurement, objects are assigned numbers that have all the features of interval measurements, but in addition there are meaningful ratios between the numbers. In other words, the zero value is a meaningful point on the measurement scale, and operations of multiplication and division are therefore also meaningful. Examples include income in dollars, length or distance in meters or feet, age, and duration in seconds or hours.
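The four-level hierarchy above can be summarized as a cumulative table of permitted comparisons. The mapping below is the standard Stevens hierarchy as described in the text; the function name and the examples in the comments are illustrative.

```python
# The four levels as a cumulative set of permitted comparisons
# (Stevens's hierarchy as described in the text; examples illustrative).
LEVELS = {
    "nominal":  {"equality"},
    "ordinal":  {"equality", "order"},
    "interval": {"equality", "order", "difference"},
    "ratio":    {"equality", "order", "difference", "ratio"},
}

def supports(level, operation):
    """True if a scale at this level makes the given comparison meaningful."""
    return operation in LEVELS[level]

print(supports("interval", "difference"))  # True: 30 C minus 20 C is meaningful
print(supports("interval", "ratio"))       # False: 30 C is not 1.5 times "as hot" as 20 C
print(supports("ratio", "ratio"))          # True: $30 is 1.5 times $20
```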
Because measurement can be arbitrarily defined by the government, researchers, or cultural norms, it is socially constructed. The social construction of measurement is frequently encountered in social science disciplines. For instance, the U.S. Census Bureau has redefined the measure of race several times. Before the 1980 census, census forms contained questions about racial categories, but the categories included only white, black, American Indian, and specified Asian categories. The census was based on the Office of Management and Budget’s (OMB) 1977 Statistical Policy Directive Number 15, Race and Ethnic Standards for Federal Statistics and Administrative Reporting, defining four mutually exclusive single-race categories: white, black, American Indian or Alaska Native, and Asian or Pacific Islander. In addition, the standards also provided two ethnicity categories: Hispanic origin and Not of Hispanic origin. The 1980 and 1990 censuses were collected according to these standards.
By 1997, OMB modified the race/ethnicity measurement again by splitting the Asian or Pacific Islander category into two groups, creating five race categories: white, African American, American Indian or Alaska Native, Asian, and Native Hawaiian or Other Pacific Islander. In addition, the 2000 census allowed people to identify themselves as belonging to two or more races. It also created six single races and fifty-seven multiple race categories. The ethnicity measure for Hispanic doubled the total number of the race/ethnicity categories to 126. However, such an extensive number of categories causes problems of its own. Many Hispanics consider their ethnic origin a racial category and therefore choose “some other race” on the census form, leading to over 40 percent of the Texas population being reported as “some other race.” The misconstruction of the categories of race and ethnicity in the U.S. census illustrates the fluid and subjective nature of measurement.
SEE ALSO Econometrics; Ethnicity; Gender; Likert Scale; Mathematics in the Social Sciences; Measurement Error; Methods, Quantitative; Racial Classification; Regression Analysis; Sampling; Scales; Survey
Michell, J. 1986. Measurement Scales and Statistics: A Clash of Paradigms. Psychological Bulletin 100: 398–407.
Michell, J. 1990. An Introduction to the Logic of Psychological Measurement. Hillsdale, NJ: Erlbaum.
Nagel, E.  1960. Measurement. In Philosophy of Sciences, ed. A. Danto and S. Morgenbesser, 121–140. New York: Meridian.
"Measurement." International Encyclopedia of the Social Sciences. Encyclopedia.com. (August 17, 2017). http://www.encyclopedia.com/social-sciences/applied-and-social-sciences-magazines/measurement
See also 226. INSTRUMENTS.
- the measurement of the relative amount of acetic acid in a given substance. —acetimetrical , adj.
- Chemistry. the determination of the amount of free acid in a liquid. —acidimeter , n. —acidimetrical , adj.
- measurement of pain by means of an algometer.
- the measurement of evaporation in the air. —atmidometer , n.
- 1. the measurement of oneself.
- 2. the measurement of a part of a figure as a fraction of the total figure’s height. —autometric , adj.
- the measurement of distance or lines by means of a stave or staff.
- the science of land surveying.
- accurate measurement of short intervals of time by means of a chronoscope. —chronoscopic , adj.
- the science of measuring the universe.
- the measurement of extremely low temperatures, by means of a cryometer.
- the measurement of circles.
- the measurement by a dosimeter of the dosage of radiation a person has received. See also 130. DRUGS . —dosimetrist , n. —dosimetric, dosimetrical , adj.
- measurement of the red blood cells in the blood, by use of an erythrocytometer.
- the science of measuring and analyzing gases by means of a eudiometer.
- the measurement of fluorescence, or visible radiation, by means of a fluorometer. —fluorometric , adj.
- the measurement of the strength of electric currents, by means of a galvanometer. —galvanometric, galvanometrical , adj.
- the measurement of the amounts of the gases in a mixture. —gasometer , n. —gasometric, gasometrical , adj.
- the practice or theory of measuring angles, especially by means of a goniometer.
- the measurement of the dimensions and angles of the planes of salt crystals. —halometer , n.
- the practice of measuring the angular distance between stars by means of a heliometer. —heliometric, heliometrical , adj.
- the art or science of measuring time. —horometrical , adj.
- the measurement of altitude and heights, especially with reference to sea level. —hypsometric, hypsometrical , adj.
- the practice and art of determining the strength and coloring power of an indigo solution.
- equality of measure. —isometric, isometrical , adj.
- the measurement of impurities in the air by means of a konimeter. —konimetric , adj.
- kymography, cymography
- 1. the measuring and recording of variations in fluid pressure, as blood pressure.
- 2. the measuring and recording of the angular oscillations of an aircraft in flight, with respect to an axis or axes fixed in space. —kymograph , n. —kymographic , adj.
- Rare. an instrument for measuring large objects. See also 178. GEOGRAPHY .
- 1. the act, process, or science of measurement.
- 2. the branch of geometry dealing with measurement of length, area, or volume. —mensurate, mensurational , adj.
- the study and science of measures and weights. —metrologist , n. —metrological , adj.
- the measurement of osmotic pressure, or the force a dissolved substance exerts on a semipermeable membrane through which it cannot pass when separated by it from a pure solvent. —osmometric , adj.
- the measurement of bones.
- the determination or estimation of the quantity of oxide formed on a substance. —oxidimetric , adj.
- Obsolete. the realm of geometrical measurements, taken as a whole. —pantometer , n. —pantometric, pantometrical , adj.
- the measurement of pressure or compressibility, as with a piezometer. —piezometric , adj.
- the measurement of the plasticity of materials, as with a plastometer. —plastometric , adj.
- the measurement of the capacity of the lungs. —pulmometer , n.
- the measurement of temperatures greater than 1500 degrees Celsius. —pyrometer , n. —pyrometric, pyrometrical , adj.
- the measurement of radiant energy by means of a radiometer. —radiometric , adj.
- the measurement of electric current, usually with a galvanometer. —rheometric , adj.
- a means of surveying in which distances are measured by reading intervals on a graduated rod intercepted by two parallel cross hairs in the telescope of a surveying instrument. —stadia , adj.
- 1. the process of determining the volume and dimensions of a solid.
- 2. the process of determining the specific gravity of a liquid. —stereometric , adj.
- the measurement of distance, height, elevation, etc., with a tachymeter.
- the science or use of the telemeter; long-distance measurement.
- the measurement of the turbidity of water or other fluids, as with a turbidimeter. —turbidimetric , adj.
- measurement of the specific gravity of urine, by means of a urinometer.
- the measurement of the volume of a solid body by means of a volumenometer.
- the measurement of the volume of solids, gases, or liquids; volumetric analysis. —volumetric, volumetrical , adj.
- the measurement and comparison of the sizes of animals and their parts. —zoometric , adj.
"Measurement." -Ologies and -Isms. Encyclopedia.com. (August 17, 2017). http://www.encyclopedia.com/education/dictionaries-thesauruses-pictures-and-press-releases/measurement
Measurement itself is concerned with the exact relationship between the ‘Empirical Relational System’ and the ‘Formal (or Numerical) Relational System’ chosen to represent it. Thus, a strict status relationship between individuals or positions can be shown to have the same properties as the operators ‘>, <’ (greater than, less than) in the set of numbers, and may be thus represented. Most social and psychological attributes do not strictly have numerical properties, and so are often termed ‘qualitative’ or ‘non-metric’ variables, whereas properties such as wealth or (arguably) measured intelligence or cardinal utility are termed quantitative or metric.
For any given domain of interest, a measurement representation or model states how empirical data are to be interpreted formally; for example, a judgement that x is preferred to y may be interpreted as saying that x is less distant from my ideal point than is y. Once represented numerically, uniqueness issues arise. S. S. Stevens (among others) postulates a hierarchy of levels of measurement of increasing complexity, defined in terms of what transformations can be made to the original measurement numbers, whilst keeping the properties they represent (see, for example, his essay ‘On the Theory of Scales of Measurement’, Science, 1946). The simplest version distinguishes four such levels. At the nominal level, things are categorized and labelled (or numbered), so that each belongs to one and only one category (for example, male = 0, female = 1). Any one-to-one re-assignment of numbers preserves information about a categorization. At the level of ordinal measurement, the categories also have a (strict) order (such as a perfect Guttman scale), and any order-preserving transformation is legitimate. Interval-level measurement requires that equal differences between the objects correspond to equal intervals on the scale (as in temperature) and that any linear transformation preserves the differences. At the ratio level, the ratio of one distance to another is preserved, as in moving from (say) miles to kilometres.
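The interval-level case can be demonstrated directly: converting Celsius to Fahrenheit is a linear transformation, so comparisons of differences survive the change of scale while ratios do not. The temperatures chosen below are arbitrary.

```python
# Celsius -> Fahrenheit is a linear transformation (any a*x + b with a > 0
# is permissible at the interval level). Temperatures here are arbitrary.

def c_to_f(c):
    return 9 / 5 * c + 32

a, b, c, d = 10.0, 20.0, 40.0, 50.0

# Equal Celsius differences remain equal Fahrenheit differences.
print(c_to_f(b) - c_to_f(a), c_to_f(d) - c_to_f(c))       # 18.0 18.0

# Ratios are not preserved: 40 C is not "twice as hot" as 20 C.
print(round(c / b, 2), round(c_to_f(c) / c_to_f(b), 2))   # 2.0 1.53
```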
Clyde H. Coombs (‘A Theory of Psychological Scaling’, Engineering Research Institute Bulletin, 1946) has shown that there are many other such scales (such as partial orderings) which are useful in the social sciences, and urges keeping to lower levels, rather than quantifying by fiat. Procedures for transforming data into higher levels of measurement are known as ‘scaling’ or ‘quantification’. If the representation can be made on a straight line it is unidimensional scaling (as in Guttman and Likert scales), but if it needs two or more dimensions it is multidimensional scaling.
Most textbooks on survey research explain the different levels of measurement, with examples, and describe the statistics and techniques that are appropriate to the different levels (see, for example, D. A. de Vaus, Surveys in Social Research, 1985, 1991).
"measurement." A Dictionary of Sociology. Encyclopedia.com. (August 17, 2017). http://www.encyclopedia.com/social-sciences/dictionaries-thesauruses-pictures-and-press-releases/measurement
measurement, determination of the magnitude of a quantity by comparison with a standard for that quantity. Quantities frequently measured include time, length, area, volume, pressure, mass, force, and energy. To express a measurement, there must be a basic unit of the quantity involved, e.g., the inch or second, and a standard of measurement (instrument) calibrated in such units, e.g., a ruler or clock. For convenience, such a standard is usually marked off both in multiples and in fractions of the basic unit. Although various systems of units exist for measuring different quantities (see weights and measures), the most important and widely used are the metric system and the English units of measurement. Certain units have been defined for special applications, e.g., the light-year and parsec in astronomy and the angstrom in physics. Measurement is one of the fundamental processes of science. It provides the data on which new theories are based and by which older theories are tested and retested. A good measurement should be both accurate and precise. Accuracy is determined by the care taken by the person making the measurement and the condition of the instrument; a worn or broken instrument or one carelessly used may give an inaccurate result. Precision, on the other hand, is determined by the design of the instrument; the finer the graduations on the instrument's scale and the greater the ease with which they can be read, the more precise the measurement. The choice of the instrument used should be appropriate to the desired precision of the results. The human foot may be a suitable instrument for pacing off short distances if precision is not important; at the other extreme, the interferometer (see interference) is used for extremely precise measurements of distance in science. There is a basic distinction between measurement and counting. The result of counting is exact because it involves discrete entities that are not subdivided into fractions. 
Measurement, on the other hand, involves entities that may be subdivided into smaller and smaller fractions and is thus always an estimate. This distinction between measurement and counting seems, on the surface, to break down at the atomic level, where the quantum theory reveals that not only mass (in the form of elementary particles and atoms) but also many other quantities occur only in discrete units, or quanta. It would seem, therefore, that one could, in theory, reduce measurement to counting at this level. However, the quantum theory also places limitations on the possibility of counting, stressing such concepts as the wavelike nature and indistinguishability of particles and proposing the uncertainty principle as an absolute limitation on certain pairs of related measurements.
"measurement." The Columbia Encyclopedia, 6th ed. Encyclopedia.com. (August 17, 2017). http://www.encyclopedia.com/reference/encyclopedias-almanacs-transcripts-and-maps/measurement
The assessment of a trait or feature against a standard scale.
Psychologists rely heavily on measurements for very different purposes, ranging from clinical diagnoses based on test scores to the effects of an independent variable on a dependent variable in an experiment. Several different issues arise when considering measurement. One consideration is whether the measurement shows reliability and validity. Reliability refers to consistency: if the results of a test or measurement are reliable, a person should receive a similar score if tested on different occasions. Validity refers to whether the measurement will be useful for the purposes for which it is intended.
The Scholastic Assessment Test (SAT) is reasonably reliable, for example, because many students obtain nearly the same score if they take the test more than once. If the test score is valid, it should be useful for predicting how well a student will perform in college. Research suggests that the SAT is a good but not perfect predictor of how well students will perform in their first year in college; thus, it shows some validity. However, a test can be reliable without being valid. If a person wanted to make a prediction about an individual's personality based on an SAT score, they would not succeed very well, because the SAT is not a valid test for that purpose, even though it would still be reliable.
Another dimension of measurement involves what is called the scale of measurement. There are four different scales of measurement: nominal, ordinal, interval, and ratio. Nominal scales involve simple categorization but do not make use of comparisons like larger, bigger, and better. Ordinal scales involve ranking different elements along some dimension. Interval scales can assess by how much two measurements differ, and ratio scales, which have a true zero point, additionally make ratios between measurements meaningful. One advantage of more complex scales of measurement is that they can be applied to more sophisticated research. More complex scales also lend themselves to more useful statistical tests that give researchers more confidence in the results of their work.
"Measurement." Gale Encyclopedia of Psychology. Encyclopedia.com. (August 17, 2017). http://www.encyclopedia.com/medicine/encyclopedias-almanacs-transcripts-and-maps/measurement
meas·ure·ment / ˈmezhərmənt/ • n. the action of measuring something: accurate measurement is essential | a telescope with which precise measurements can be made. ∎ the size, length, or amount of something, as established by measuring: his inseam measurement. ∎ a unit or system of measuring: a hand is a measurement used for measuring horses.
"measurement." The Oxford Pocket Dictionary of Current English. Encyclopedia.com. (August 17, 2017). http://www.encyclopedia.com/humanities/dictionaries-thesauruses-pictures-and-press-releases/measurement