Physics: Articulation of Classical Physical Law


Introduction

A physical law is a description in words or mathematical symbols of a measurable, universally recurrent pattern in nature. For example, Newton's law of gravitation, F = Gm₁m₂/r², first published by English physicist Isaac Newton (1642–1727) in 1687, describes the force of gravity between any two objects. This physical law was one of the earliest to be discovered and is still one of the most useful. Though simple in form, it explains a great deal of what happens in the whole universe: why stars and planets and moons orbit each other as they do, why a free-flying ball or bullet travels in a parabolic path, and much more. Scientists have devised—and revised—scores of such laws over the last 400 years, and these now articulate modern science.
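
To make the formula concrete, here is a minimal sketch in Python (modern notation and illustrative values, not part of the historical record) that evaluates the law for Earth and the moon:

```python
# Newton's law of gravitation: F = G * m1 * m2 / r**2
# Values below are approximate modern figures for Earth and the moon.
G = 6.674e-11        # universal gravitational constant, N·m²/kg²
m_earth = 5.972e24   # mass of Earth, kg
m_moon = 7.348e22    # mass of the moon, kg
r = 3.844e8          # mean Earth-moon distance, m

F = G * m_earth * m_moon / r**2
print(f"Force between Earth and moon: {F:.2e} N")  # roughly 2e20 newtons
```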

The creation of the first great system of physical laws, today called classical physics, began in seventeenth-century Europe and lasted until the end of the nineteenth century, when contradictions and inconsistencies in classical physics came to light. In the early years of the twentieth century a new physics was developed, one that extended the old body of physical laws with the advancement of relativity theory and quantum mechanics. During this period of radical revision, the very idea of a physical law changed; formerly, physicists had argued that the laws they discovered described reality perfectly, but today scientists view many laws as only approximately true. In fact, they spend much of their time trying to discover what revisions are necessary.

Historical Background and Scientific Foundations

The physics that developed from the sixteenth to the early twentieth century is known as classical physics. This article restricts its attention to the articulation of the laws of classical physics.

Milieu: The State of Science and Society

People have always observed regularities in nature and put their knowledge into words. For example, the phrase “What goes up, must come down” is a sort of common sense law of gravitation, an experience-based generalization about an important physical fact. Although it is not true under all conditions—an object that “goes up” fast enough need never come down again, but can go into orbit or leave the vicinity of Earth forever—such general understandings sufficed human society for tens of thousands of years. Such vague statements are sometimes called folk physics, naive physics, or intuitive physics. They do not predict any quantity that can be measured, such as time, velocity, or acceleration. They describe observation only in a general, and very localized, way.

In the European classical period and in the Middle Ages, intuitive physics was mingled with metaphysical thought—speculation about the nature of reality—to produce a scientific tradition that mixed religion, philosophy, mathematics, and trust in reason. Modern physics arose when this mixed medieval science began, in the late 1500s, to be systematically replaced by a new kind of science, one in which mathematical physical laws were tested by observation and experiment.

A scientific physical law is a statement that predicts how specific measurable quantities such as force, mass, speed, electric-field intensity, number of particles, time, or the like will behave in relation to each other. A law is usually stated as an equation, which is a mathematical expression with two terms or groups of terms separated by an equals sign—for example, Newton's law of gravitation as given above, or F = ma (Newton's second law of motion, force equals mass times acceleration). Once Newton's three laws of motion and the law of gravitation were formulated, any person trained to apply them could tell not only that a cannonball would come down after going up—which had always been obvious—but when it would come down and where, how fast it would be going, and at exactly what angle it would strike (allowing for some complication from air resistance). Moreover, the motions of the planets could be predicted with better accuracy than before. Once the study of moving objects and forces (mechanics) was brought under the sway of mathematical law in Newton's day, laws were also articulated in optics, chemistry, and other fields. New, powerful technologies were devised with the help of these precise laws, transforming industry, trade, and daily life. By providing new instruments for measurement and experimentation, the new technology in turn transformed science itself. This cycle continues today.
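
As an illustration of this predictive power, the following sketch (illustrative values; air resistance ignored, as the text allows) applies Newton's laws to a cannonball fired over level ground:

```python
import math

# Predicting a cannonball's flight from Newton's laws, ignoring air
# resistance: horizontal speed stays constant, vertical speed changes as g*t.
g = 9.81                       # acceleration due to gravity, m/s²
v0 = 300.0                     # launch speed, m/s (illustrative)
angle = math.radians(40.0)     # launch angle (illustrative)

vx = v0 * math.cos(angle)      # horizontal velocity component
vy = v0 * math.sin(angle)      # initial vertical velocity component

t_flight = 2 * vy / g          # when it comes down (level ground)
x_range = vx * t_flight        # where it lands
impact_angle = math.degrees(math.atan2(vy, vx))  # angle at which it strikes

print(f"lands after {t_flight:.1f} s, {x_range/1000:.2f} km downrange,")
print(f"striking at {impact_angle:.1f} degrees below the horizontal")
```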

The notion of mathematical physical law tested against observations may seem obvious today, but it arrived only after centuries of painstaking effort. Until the late 1500s, scientific thought was organized around ideas inherited from antiquity. The Greek philosopher Aristotle (384–322 BC) taught that knowledge of the natural world should arise from experience, but did not wed mathematics to experience. The Greek mathematical physicist Archimedes (287–212 BC) elucidated mathematical laws for the simple machines (lever, pulley, etc.), but Greece was conquered by Rome—Archimedes himself was reportedly killed by a Roman soldier. Roman culture was, by and large, less interested in philosophy and mathematics than the Greek culture that preceded it, placing more emphasis on military and economic application.

Many of the writings of Aristotle and Archimedes were essentially lost to Europe after the fall of the Roman Empire, and the Arab civilization of the Middle East, which preserved copies, still did not develop a systematic, mathematical, experimental form of science based upon those writings.

One reason that the flourishing of science was delayed for so many centuries is that the way people looked at the world, their worldview, was not congenial to its invention. During the Middle Ages, which lasted roughly from the breakup of the Roman Empire in the AD 400s to the Protestant Reformation of the 1500s, the universe was seen primarily not as a machine to be described by mechanical laws but as a harmonious, meaningful whole, less like a clock than a living creature. All earthly physical events were said to be driven ultimately by supernatural forces (a god or group of gods), also variously described as the First Cause or Prime (i.e., first) Mover. Following Aristotle, most medieval thinkers taught that objects moved as they did because it was more “natural” to them to be in some places rather than others. Flame and smoke, for example, rose because they sought their natural place higher up; stones fell because their natural place was lower down. Some philosophers, such as Jean Buridan of Paris (1300–1358), criticized this idea in the later Middle Ages, preparing the way for the new physics of the sixteenth and seventeenth centuries. However, knowledge of medicine, chemistry, physics, and the like was typically derived from revered books rather than from direct experimentation. Mathematics was carried on as a separate, abstract pursuit.

Medieval science was not entirely stagnant—new knowledge, especially mathematical and technological, continued to be gained—but by modern standards the rate of progress was very slow. Astronomy was one of the first areas in which breakthroughs were made. Long before the Scientific Revolution of the sixteenth and seventeenth centuries, the motions of the heavenly bodies had been observed carefully, and explanatory models had been created to account for them.

The first physical model to describe the heavenly motions with good accuracy was the Ptolemaic (pronounced tole-eh-MAY-ik) model. This model was the work of the Egyptian-Greek astronomer Claudius Ptolemy (TOLE-eh-mee, c. AD 90–c. 168). According to Ptolemy's book Almagest, which remained the standard astronomical text of Europe for about 1,400 years, Earth is a sphere residing at the center of the universe. Around it are nested eight rotating, concentric, transparent (“crystal”) spheres. From smallest to largest these are the spheres of the moon, Mercury, Venus, the sun, Mars, Jupiter, Saturn, and the fixed stars. No other planets were known until the discovery of Uranus by English astronomer Sir William Herschel (1738–1832) in 1781. The moon, sun, planets, and stars were supposed to be attached to these larger, invisible spheres much as a tack might be stuck into the surface of a soccer ball. The spheres would rotate independently around Earth, which is stationary. The rising and setting of the sun, for example, was supposedly due to the rotation of the sun's crystal sphere around Earth. Motions too complex to be accounted for in this way could be calculated using epicycles, which are hypothetical circular motions executed by a heavenly body on the surface of its Earth-centered crystal sphere. Circular motions were used because it was standard doctrine that perfection resides in the heavens, and that the circle is the most perfect geometrical shape—therefore, heavenly bodies must move in circles.

The Ptolemaic model did an excellent job of explaining the motions of the stars, planets, sun, and moon. Yet it suggested no physical reason why the spheres should move as they did, no mechanical explanation; the most popular motive power was angels. Moreover, the astronomical success of the Ptolemaic model did not help explain any non-astronomical phenomena, such as the way objects behave on Earth. It did not contain or imply any physical law: it was purely descriptive, and it could not be generalized. It was an intellectual triumph, but an isolated and scientifically barren one.

In 1543, Polish astronomer Nicolaus Copernicus (1473–1543) published De revolutionibus orbium coelestium (On the Revolution of the Celestial Spheres). In it, he proposed the revolutionary idea that the sun, not Earth, was at the center of the universe. Scientists now know that the sun is also not at the center of the universe. The universe has no center. By displacing Earth from its central position, however, Copernicus began a process of remodeling that would eventually cast doubt on the centrality of human beings in the story of the cosmos. Perhaps, the new changes seemed to suggest, mechanism, not meaning, was the key to reality. Despite resistance from religious authorities—Italian astronomer and physicist Galileo Galilei (1564–1642) was threatened with torture in 1633 for affirming that Copernicanism was literally true—Copernicus's ideas gradually prevailed, helping set the stage for a new physics.

Tycho Brahe (1546–1601), court astronomer and astrologer of Denmark, was born only a few years after Copernicus published his controversial book. Tycho made precise measurements of the heavenly bodies' motions and tried to create a description of the cosmos that blended the Copernican system with the Ptolemaic. Soon, using Tycho's data, the best collected to that date, German astronomer and mathematician Johannes Kepler (1571–1630) made a discovery: The planets move not in circular orbits, as had been taught for thousands of years, but in elliptical orbits. (Elliptical orbits are ellipse-shaped. If the top is sliced off a circular cone at a slanting angle, the oval outline of the slice is an ellipse.) Moreover, Kepler described his discoveries in terms of three mathematical laws. These were some of the first physical laws of the new scientific age.
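
One of those laws, Kepler's third, states that the square of a planet's orbital period is proportional to the cube of its mean distance from the sun. A quick numerical check (using well-known modern orbital data, not figures given in this article) shows that the ratio T²/a³ comes out essentially the same for every planet:

```python
# Kepler's third law: T² ∝ a³, so T²/a³ should be the same for every planet.
# Periods T in years, semi-major axes a in astronomical units (AU).
planets = {
    "Mercury": (0.241, 0.387),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.881, 1.524),
    "Jupiter": (11.862, 5.203),
}

for name, (T, a) in planets.items():
    print(f"{name:8s} T²/a³ = {T**2 / a**3:.3f}")  # all come out near 1.0
```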

While Kepler was studying the planets, Galileo was studying objects on Earth. In particular, he measured the speeds of objects rolling down inclined planes and formulated mathematical expressions of laws describing his results. Galileo's laws are still used today in calculating the results of constant accelerations (changes in velocity), such as are experienced by objects falling freely under the influence of gravity.
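
In modern units, Galileo's result for free fall says that, starting from rest, distance grows as the square of elapsed time: d = ½gt², with speed v = gt. A minimal sketch (illustrative, using the modern value of g):

```python
# Galileo's law of free fall: starting from rest, d = ½·g·t² and v = g·t.
g = 9.81  # acceleration due to gravity, m/s² (modern value)

for t in (1.0, 2.0, 3.0):
    d = 0.5 * g * t**2   # distance fallen grows as the square of time
    v = g * t            # speed grows in direct proportion to time
    print(f"after {t:.0f} s: fallen {d:5.1f} m, moving at {v:4.1f} m/s")
```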

As described later in this article, Newton managed to bridge the motion laws of Kepler, which applied only to the planets, and of Galileo, which applied only to everyday objects. With this unification, modern science came into being.

Cultural Changes

Two great social changes occurred just before modern science—essentially the project of describing all of nature in terms of mathematical physical laws—got under way. These were the replacement of feudalism with capitalism (a change sometimes called the Commercial Revolution) and the Protestant Reformation.

In the social and economic system of Europe in the Middle Ages, trade was primarily local, and seagoing navigation hugged the coasts. Money was used, but there was little industrialism, and the economy was basically agricultural. Then, around 1520, all this began to change. Globe-girdling voyages of discovery and the invasion of North and South America by England, France, and Spain increased long-distance trade. A European commercial culture arose with fundamentally different attitudes toward the world than had prevailed during the Middle Ages. Some modern cultural historians, such as Georg Simmel (1858–1918) and Morris Berman (1944–), have argued that the new money economy increased the importance of exact numerical calculation, and that this predisposed people to expect exact numerical accounting in the physical cosmos also. Counting, weighing, and pragmatism (the philosophy of focusing on results rather than values or morality) were elevated to high status: nature began to be seen primarily as dead material to be manipulated rather than part of an organic chain of being stretching from the dust to the divine.

IN CONTEXT: NEWTON AND THE APPLE

There is a popular folk story that gravitation was discovered when English physicist Isaac Newton (1642–1727) was hit on the head by a falling apple. As with many such folk tales, although the anecdote is not exactly true, there is evidence that it has its origins in a real incident. A year before his death, Newton told William Stukeley a story in which he recalled his early thoughts about gravity and remembered pondering the fall of an apple. Newton claimed that at the time he wondered: “Why should it not go sideways, or upwards, but constantly to the earth's center? Assuredly, the reason is, that the earth draws it. There must be a drawing power….”

Other writers of Newton's day relate slightly different versions of the story, but the idea that Newton reasoned from apples to planets appears to be at least partially true. The story reminds us that science sometimes progresses by leaps of insight rather than by systematic reasoning and careful experimentation. However, it took Newton another 20 years to fully work out his theory of gravitation.

The new commercial economy also placed a new value on technical knowledge, which had hitherto been thought the proper concern of people in the trades, guilds, and working classes, not of philosophers and mathematicians. Early in the sixteenth century, the educated classes began to show new interest in the details of manufacture. Spanish scholar Joan Lluís Vives (1492–1540) published De Disciplinis Libri XX (“Twenty Books on the Disciplines”), in which he argued that a young nobleman's education should include some study of agriculture, textile manufacture, cookery, building, and navigation. German scholar Georg Agricola (1490–1555) visited mines and metallurgical workshops to study minerals and mining directly and published De Natura Fossilium in 1546, rejecting ancient authorities in favor of his own observations. His De Re Metallica (1556) described the practical processes of mining and metal making in great detail. Agricola is often characterized as the founder of the science of mineralogy. Many other treatises on the crafts of printing, papermaking, and the like were published during this period. Also, the economy provided financial rewards for improvements in water-pumping, clock-making, and other technologies. Such improvements would, in turn, allow the manufacture of instruments to verify and to apply the new, quantitative, mathematical science of the seventeenth century and beyond. The scientific revolution could not have succeeded on the strength of either abstract thought without practical know-how, or know-how without rigorous abstraction.

At about the time of the Commercial Revolution, another great upheaval of European society was occurring: the Protestant Reformation. Although its cultural and historical origins are far more complex, the landmark beginning of the Reformation is traditionally dated to the posting by Martin Luther (1483–1546) of his ninety-five theses or propositions to the door of Castle Church, Wittenberg, Germany, on October 31, 1517. At the time both economic and religious thought was changing across Europe. A new emphasis on the testing of ancient authorities against individual conscience and reason was abroad. In this setting it would seem increasingly natural, even necessary, to test theoretical ideas against experiments, and to check reports of experiments by repeating them oneself.

The Revolution in Mechanics

“In the beginning,” says historian of science Max von Laue (1879–1960), “was mechanics.” Why? Because mechanics is the science of objects, motion, and forces, and objects can be touched, motions seen, forces felt. Mechanical processes such as falling and pushing can be measured using simple equipment and compared to mathematical theories. It was natural that the articulation of physical law would begin with mechanics rather than, say, electromagnetics.

The success of the late sixteenth and seventeenth centuries in wedding mathematics to mechanics overturned the basically Aristotelian view of nature as organic and moved by propriety or “naturalness” rather than by forces. The success of mechanics seemed to prove that the universe is indeed a machine. The extension of mathematical law to other fields, including the study of heat, light, and electromagnetic forces, followed over the next two centuries.

The triumph of classical mechanics in the late 1500s and the 1600s came from applying the scientific method to long-pondered problems. The Greeks invented the beginnings of mathematical physics, and many philosophers of the Middle Ages, most famously English friar Roger Bacon (c.1214–c.1292), advised that theory be tested by experiment. Some medieval thinkers, such as William of Ockham (c.1288–1347), also an English Franciscan friar, questioned Aristotle's theories of motion and groped toward an understanding of acceleration (changing velocity) and inertia (the tendency of an object to maintain its state of motion unless acted upon by a force). Yet most medieval writers addressing physical science were content to quote alleged experimental results from ancient books rather than carrying out the experiments themselves, and thinkers such as Ockham did not succeed in producing an accurate science of mechanics. They were hampered by, among other things, their Aristotelian belief that a force of some kind, whether produced by surrounding air or by some power in the object, is needed to keep an object in motion. Medieval scientists taught that when a gun fired a projectile it endowed the projectile with a certain force called “impetus” that would push the projectile upward at an angle in a straight line until the impetus was exhausted, whereupon the projectile dropped straight down upon the target. This is a commonsense view based on everyday experience: If you want to raise a brick, you have to apply force the whole time you are lifting it. If you let go, it falls straight to the ground.

Galileo rejected the impetus theory of motion. He understood that a force is not needed to keep an object in motion: rather, an object maintains its state of motion (that is, it stays at rest or keeps moving at a constant speed in a straight line) unless a force acts upon it. Forces do not maintain states of motion; they change states of motion.

Galileo was the first scientist to systematically and thoroughly apply an experimental and mathematical working philosophy similar to that of modern science. Galileo, like many thinkers of the fourteenth and fifteenth centuries, was influenced by the Greek philosopher Plato (428–348 BC), who taught that the visible world is secondary to a higher realm of invisible, eternal forms. Galileo's version was that mathematical law is the ultimate reality behind the miscellaneous happenings of the physical world. Historian of science A.C. Crombie (1915–1996) described Galileo's view this way in 1959: “The object of science for Galileo was to explain the particular facts of observation by showing them to be consequences of … general ‘mathematical’ laws, and to build up a whole system of such laws in which the more particular were consequences of the more general.” This closely describes the whole project of modern science. “Facts of observation” include explanations both of naturally occurring events, such as the motions of the heavenly bodies—Galileo was a supporter of the new Copernican system—and of artificially arranged events or experiments, as Roger Bacon had advocated centuries earlier.

The ingredients of Galilean science had been present for many generations, but until the late sixteenth century, philosophical speculations about experimental method and actual physical investigations were rarely in the hands of the same people. Technicians, coinmakers, and the like worked out practical methods based on the observed behaviors of materials, while the book-learned handled mathematics and speculated on (but did not test) the behaviors of objects. For the two centuries prior to Galileo there is no record of anyone testing actual motions of earthly objects or heavenly bodies against philosophical speculations about the nature of motion. Only when the three practices considered essential by Galileo—testing of theory against actual experiments, use of mathematics to state physical laws precisely, and the weaving together of laws into a unified system—were unified into a single method of interrogating nature did the basic practice of modern science come into being.

IN CONTEXT: THE LAW OF GRAVITATION

Can a few rules, simple enough to be printed in large type on a credit card, describe how objects fall, why projectiles fly in parabolic arcs, the ocean tides, and the path of every object in the universe that moves freely through space, from atoms to galaxies? The answer is yes: all these events can be predicted to high accuracy using Isaac Newton's three laws of motion plus his law of gravitation, which describes the force of gravity pulling any two objects together. If the masses of the objects are symbolized as m₁ and m₂, then Newton's law of gravitation can be written as follows: F = Gm₁m₂/r².

Here F is the force felt by either object (both objects feel the same force, pointing toward the other object). G is a fixed number called the universal gravitational constant, and r is the distance between the two objects. Since the masses are on top of the fraction, adding mass to either object makes the force of gravity bigger. And since distance is on the bottom of the fraction, the larger it gets—that is, the farther away the two objects are from each other—the smaller the force of gravity gets. Newton's laws are still used today in all calculations of motion and gravity that do not involve speeds close to that of light or extreme gravity conditions such as those found near black holes. First published by Newton in 1687, the Law of Gravitation is one of the triumphs of classical physical law.

Newton completed the revolution that Galileo began. While he did important work in optics (the science of light), invented the first practical reflecting telescope in 1671, and published his major work on light, Opticks, in 1704, his truly revolutionary work was in mechanics. This culminated in 1687 with the publication of Philosophiae Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy). This book, usually referred to today simply as the Principia, is one of the most important single works in the history of physics.

Focusing on Forces as Real Rather than on Their Nature

Newton completed the Galilean revolution in mechanics partly by putting aside the vexing question of what, exactly, forces are. Instead, he followed the principle that it was sufficient to treat them as if they were real. As long as forces obeyed the equations written down for them—or rather, as long as equations could be devised that described what the forces did—it did not matter what caused forces: that question could be dealt with separately. Newton did express the opinion that some form of mechanical explanation for gravity would be found, particles of some sort bumping against other particles, as opposed to “action at a distance,” which he condemned in a letter to scholar Richard Bentley (1662–1742) in 1692: “That gravity should be innate inherent & essential to matter so that one body may act upon another at a distance through a vacuum without the mediation of anything else by & through which their action or force may be conveyed from one to another is to me so great an absurdity that I believe no man who has in philosophical matters any competent faculty of thinking can ever fall into it.”

The mathematics available to Newton at the beginning of his career could not calculate the effects of the new physical laws he sought, so he had to invent calculus, the mathematics of continuously varying and accumulating quantities. Calculus was invented independently at about the same time by German mathematician Gottfried Wilhelm von Leibniz (1646–1716). Calculus immediately began to be applied by mathematicians to all sorts of scientific questions, not only mechanical ones. Today, it is the universal language of science and technology. Other forms of higher mathematics are applied as needed in specific fields, but calculus is applied in virtually all fields of study concerned with the physical world. A version of Leibniz's notation (way of writing down calculus) is used today.
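
The spirit of the calculus—accumulating continuously varying quantities—can be suggested numerically. The sketch below (an illustrative finite-step approximation, not Newton's or Leibniz's own procedure) recovers Galileo's free-fall distance from nothing but constant acceleration:

```python
# The flavor of the calculus, numerically: accumulate many small changes.
# With constant acceleration g, this recovers Galileo's d = ½·g·t².
g = 9.81        # m/s²
dt = 0.0001     # a small step of time, s
t = v = d = 0.0

while t < 2.0:
    v += g * dt   # each instant, velocity accumulates a little acceleration
    d += v * dt   # and position accumulates a little velocity
    t += dt

print(f"numerical: {d:.3f} m   exact ½gt²: {0.5 * g * 2.0**2:.3f} m")
```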

Newton's mechanics boil down to four elegant mathematical laws. The first three are the three laws of motion:

  1. An object retains its state of motion unless acted upon by a force. (The unit of force used today is called the newton, in honor of Newton's clarification of the concept of force.)
  2. The total force acting on a body causes it to accelerate (change its velocity) to a degree that is proportional to the body's mass. Stated as an equation, with F standing for force, m for mass, and a for acceleration, F = ma. Alternatively, a = F/m; that is, a heavier object (larger m) is accelerated less by a given force, and a larger force accelerates an object of given mass more strongly.
  3. Forces always occur in pairs that point in opposite directions; also stated as To every action there is an equal and opposite reaction. For example, if a rocket motor pushes gases away from itself with a certain force, the gases also push the rocket motor away in the opposite direction with equal force. This is the principle of rocket propulsion.
  4. The fourth basic law of Newtonian mechanics is Newton's law describing the gravitational attraction between two objects: F = Gm₁m₂/r². Here F is the force due to gravity, G is a fixed number called the universal gravitational constant, m₁ is the mass of one of the two objects, m₂ is the mass of the other, and r is the distance between them. By Newton's third law of motion, gravitation between two objects produces two equal and opposite forces of strength F, one acting on each object.

The law of gravitation has several interesting consequences. Since the distance r is in the denominator, the force F gets smaller as r gets bigger; since r is squared, the force F decreases as the square of the distance. This means that if the distance is doubled, the gravitational force is cut to one fourth its original strength; if the distance is quadrupled, the force is cut to one sixteenth.
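
A few lines of Python make this inverse-square behavior explicit (the masses and base distance are arbitrary illustrative values):

```python
# Inverse-square scaling: F ∝ 1/r², so doubling r cuts F to one fourth.
G = 6.674e-11      # universal gravitational constant, N·m²/kg²
m1 = m2 = 1000.0   # arbitrary masses, kg
r0 = 10.0          # arbitrary base distance, m

F0 = G * m1 * m2 / r0**2
for factor in (1, 2, 4):
    F = G * m1 * m2 / (factor * r0)**2
    print(f"distance {factor} x r0: F/F0 = {F/F0:.4f}")  # 1.0000, 0.2500, 0.0625
```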

Newton was able to show mathematically that Kepler's laws of planetary motion are consequences of the same handful of mechanical laws that describe the motions of all physical objects, including falling apples. He thus forwarded the Galilean program of producing a system of mathematical physical laws in which the more specific laws were consequences of the more general laws. In this case, the Keplerian laws were specific to planetary motion; Newton showed that they were merely an application of the more general Newtonian laws to planetary motion.

Beyond Mechanics: Physical Law in the Eighteenth and Nineteenth Centuries

Over the next two centuries, the Newtonian revolution in mechanics was extended to other fields. The body of physical laws that has been created by this process is much too large to review here, but a few highlights can be noted.

The first century and a half after the revolution in mechanics saw little progress in our understanding of electricity and magnetism. Early on, English physician William Gilbert (1544–1603) made fundamental observations about these matters, arguing correctly that electricity and magnetism are not the same thing and proving by means of a physical model that Earth itself is a giant magnet. Gilbert coined the word “electricity.” However, he was not able to formulate any mathematical, quantitative laws describing the properties of electricity or magnetism. The first law to accurately describe an electrical phenomenon was discovered in 1785 by French scientist Charles Augustin de Coulomb (1736–1806). Coulomb's law for the force between two charged objects was remarkably similar to Newton's law of gravitation: F = Kc₁c₂/r².

Here F is the force of electrical attraction or repulsion; K is a fixed number called the electrostatic constant; c₁ is the charge on one of the objects; c₂ is the charge on the other object; and r is the distance between them. Coulomb showed that the force arising from electrical charge behaves much like the gravitational force arising from mass, with a few important differences: first, the electrical force can be either attractive or repulsive. Like charges (two positives or two negatives) repel, and unlike charges (one positive and one negative) attract. Second, an electric field can be blocked by a conductor, while gravitational fields cannot be screened or blocked by any barrier.
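
A minimal sketch evaluating Coulomb's law (written here with q for charge, as is now conventional; the charges and distance are illustrative):

```python
# Coulomb's law: F = k * q1 * q2 / r**2. A negative result signals
# attraction (unlike charges); a positive result signals repulsion.
k = 8.988e9    # electrostatic constant, N·m²/C²
q1 = 1e-6      # +1 microcoulomb
q2 = -1e-6     # -1 microcoulomb
r = 0.05       # separation, m (5 cm)

F = k * q1 * q2 / r**2
kind = "attractive" if F < 0 else "repulsive"
print(f"force: {abs(F):.2f} N, {kind}")  # about 3.6 N, attractive
```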

Coulomb's law was only the beginning of the discovery of the laws governing electricity and magnetism. Electricity and magnetism have a complex relationship to each other: electric charges produce electric fields, much as masses produce gravitational fields, but a changing or moving electric charge also produces a magnetic field, and a changing or moving magnetic field produces an electric field. A pair of electric and magnetic fields that are changing together, each generating the other, propagate through space as a wave—an electromagnetic wave—without any charge being present at all. Light, radio waves, and X rays are all electromagnetic waves.

Electromagnetism—the mutual production of electric and magnetic fields—was first publicized in 1820 by Danish physicist Hans Christian Oersted (1777–1851). A mathematical law describing the attractive or repulsive force between two current-carrying conductors (e.g., wires) was discovered by French physicist André-Marie Ampère (1775–1836) a few years later. The physical law that describes the generation of an electric field by a changing magnetic field was described by English physicist Michael Faraday (1791–1867) in 1831.

The unification of all the miscellaneous laws describing the relationships of electric and magnetic fields was accomplished by Scottish physicist James Clerk Maxwell (1831–1879) in 1864, with his publication of the group of four equations known in his honor today as Maxwell's Equations. These laws are as basic a contribution to the body of physical law as those made by Newton and, later, German-American physicist Albert Einstein (1879–1955). They form the working basis of all technology that relies on magnetic and electric fields.
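
One celebrated consequence of Maxwell's equations is that the speed of electromagnetic waves is fixed by two measurable constants of electricity and magnetism: c = 1/√(μ₀ε₀). A quick check with modern values:

```python
import math

# Maxwell's equations predict electromagnetic waves traveling at
# c = 1 / sqrt(mu0 * eps0); plugging in the measured constants of
# electricity and magnetism yields the measured speed of light.
mu0 = 4 * math.pi * 1e-7   # vacuum permeability, N/A²
eps0 = 8.854e-12           # vacuum permittivity, F/m

c = 1.0 / math.sqrt(mu0 * eps0)
print(f"c = {c:.3e} m/s")  # about 2.998e8 m/s, the speed of light
```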

Meanwhile, laws were also being articulated in other fields. In the 1660s, Henry Power (1623–1668) and Richard Towneley (1629–1707), both English scientists, discovered a mathematical law relating the pressure and volume of a gas: pV = constant (at a fixed temperature). This law is known today as Boyle's law. The study of heat (thermodynamics) had been given new urgency by the invention of the steam engine, first built in 1698 but not analyzed mathematically until the early 1800s. Almost all English and French physicists concerned themselves with the question of the steam engine and its principles in the late 1700s and early 1800s; most of the basic laws of thermodynamics were developed by studying the steam engine.
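
Boyle's law lends itself to a simple worked example: since pV stays constant at fixed temperature, compressing a gas to one quarter of its volume quadruples its pressure. A minimal sketch (illustrative values):

```python
# Boyle's law: p·V = constant at fixed temperature, so p2 = p1 * V1 / V2.
p1 = 1.0    # initial pressure, atm
V1 = 2.0    # initial volume, liters
V2 = 0.5    # volume after compression, liters

p2 = p1 * V1 / V2
print(f"pressure after compression: {p2:.1f} atm")  # 4.0 atm
```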

The new laws in various fields were built on the Galilean model—that is, they were mathematical and (usually) explainable in terms of more fundamental laws. For example, the gas laws were explained in terms of large numbers of particles (gas molecules) flying about in obedience to Newton's laws of motion. Summary laws or surface laws such as Boyle's gas law remain useful, however, even when they are explained in terms of more fundamental laws, because they are much easier to apply to practical problems.

Also developed in the nineteenth century were laws of acoustics (the science of sound), optics, energy conservation, the kinetic theory of gases, and statistical mechanics. All these laws remain essential in modern physics and technology. So complete did the picture of physics painted by the new sciences and their laws seem, so great was the success of science during this period, that some thinkers declared that science itself was almost finished—that scientists were on the verge of having answered all questions. For example, Albert Michelson (1852–1931) said in 1894 (and again in 1898) that “While it is never safe to affirm that the future of Physical Science has no marvels in store even more astonishing than those of the past, it seems probable that most of the grand underlying principles have been firmly established and that further advances are to be sought chiefly in the rigorous application of these principles to all the phenomena which come under our notice.”

Yet contradictions were arising, contradictions that forced the birth of a new physics in the early twentieth century.

The Limitations of Classical Physical Law

What the scientists of the seventeenth through the nineteenth centuries did not realize was that all the laws they were discovering were approximate. Newton's law of gravitation, F = Gm₁m₂/r², is usually so close to true as to appear perfectly exact; yet there are conditions, real-world conditions, under which it is inaccurate. New, more general, laws had to be discovered when the approximate nature of classical law was realized. It proved to be impossible to adjust the classical laws in such a way as to explain the constancy of the speed of light (which had been demonstrated by the 1887 experiment of Michelson and Edward Morley (1838–1923)), the quantization of energy in photons, light's mixture of wave and particle properties, and other phenomena. Resolving these difficulties required the laws of relativity and quantum mechanics, which were developed in the early twentieth century. These laws, too, are approximate, but what scientists call their domain of application—the range of conditions under which they are correct within the limits of our ability to make measurements—is much broader than that of classical physics. The old laws are still good for most conditions and are mathematically simpler, so they are still used today throughout science and technology—but they are used with the knowledge that they are imperfect.

Two assumptions that had been made throughout all of classical physics finally had to be abandoned: absolute space and time, and determinism.

Absolute space and time are independent of each other and of the objects they contain, and look the same to all observers, regardless of the locations or states of motion of those observers. If time were absolute, one could, in principle, distribute accurate clocks set to a single, agreed-upon time throughout the universe, and they would continue to agree forever. This is our commonsense experience of time: When we speak of the accuracy of a clock, for example, we never feel obliged to condition our description on where the clock is located or how it is moving. Time passes at the same rate everywhere—or does it? At the beginning of the twentieth century, it was discovered that time is not absolute. Neither is space. Moreover, they are not independent of each other, but are intimately related. Yet they are not interchangeable, two forms of the same thing: as Einstein, discoverer of relativity, put it in 1921, time is equivalent to space “in respect to its role in the equations of physics, [but] not with respect to its physical significance.”

Determinism was the second basic assumption of classical physics to be called into question. Since Newton, scientists have assumed that all appearances of randomness in the universe, such as the unpredictability of rolled dice, are illusions: on this view, all outcomes are actually the result of cause-and-effect chains that could not have turned out any other way. Some events seemed random to us only because we did not have the knowledge and computational power to predict how conditions were bound to work themselves out according to rigid physical law. The idea of universal, absolute determinism was famously expressed by French mathematician Pierre Simon de Laplace (1749–1827), who argued that a sufficiently vast intelligence, knowing the positions of all objects and the forces acting on them, could “embrace in the same formula the movements of the greatest bodies of the universe and those of the lightest atom; for it, nothing would be uncertain and the future, as the past, would be present to its eyes.”

Since the early twentieth century, the interpretation of quantum mechanics accepted by the great majority of physicists, the Copenhagen interpretation, has been that at the scale of the very small, the subatomic scale, events are truly random. For example, the time when a given unstable or radioactive atom breaks down—which is experimentally unpredictable—is not determined by tiny, hidden differences between that particular atom and others of its kind; it is truly random. Whether this interpretation will be replaced some day by a new, more subtle form of determinism has been debated for decades by physicists, but as of the early 2000s the Copenhagen interpretation still prevailed among the majority of specialists studying the question.

Modern Cultural Connections

Classical physical law greatly multiplied the ability to predict and control physical events, whether for profit, pleasure, war, or knowledge. It transformed the world by making possible the Industrial Revolution, and continues to transform it through that ongoing flood of new technologies which we now take for granted—at least, those of us who live in the industrialized parts of the world. Almost all modern technology is made possible by applying the classical laws of mechanics, thermodynamics, optics, electromagnetics, chemistry, and a few other disciplines. Without succinct, accurate, mathematical laws, whatever information science managed to accumulate would be useless—a heap of unconnected facts.

It should be noted that sciences such as geology, biology, and astronomy, which primarily work to produce factual explanations and histories—what makes the continents move, how does a new species of finch appear, how did the galaxies form, and so on—are just as valid, important, and scientific as physics or the other elemental physical sciences. Without the sciences that describe the world, physics would be crippled in its search for universal laws; without the laws provided by physics, the other sciences would be a jumble of disjointed facts. Today, for example, the existence of a form of gravitating yet invisible matter clustered around the galaxies, called dark matter, has been proved by applying the laws of Newtonian physics to data obtained by telescopes. Dark matter implies what scientists call “new physics”—that is, it is not predicted by the laws of quantum physics as they now exist. Some modification of those laws will be needed to account for the observations that prove the existence of dark matter. Without known physical laws, the observations could not be made; once they are made, the observations require revision of the known physical laws.

The proliferation of technology, enabled by scientific knowledge of physical laws, has affected every aspect of modern life in industrialized societies. The rise of the automobile has affected work and courtship patterns; telecommunications have changed the way we socialize; nuclear and other weapons have enabled destruction on a scale not even imagined by earlier centuries; and the unintended side-effects of applied science are causing soil loss and changing the climate, both of which ultimately threaten human survival by threatening the agricultural basis of all human life. Yet even this list does not capture all the ways in which scientific law has impacted the modern world.

Social and Philosophical Implications

Soon after what even its contemporaries referred to as the “revolution” in science achieved by Newton and the other physicists, the new ideas were popularized to a wide public by writers such as English philosophers John Locke (1632–1704) and David Hume (1711–1776) and French philosophers Bernard le Bovier de Fontenelle (1657–1757) and Voltaire (François-Marie Arouet, 1694–1778). The result was a shift in the way people in European societies saw the nature of the world—a shift more pronounced in the more educated (especially the more scientifically educated), but affecting all levels of society.

Locke taught that all ideas arise from experience and that ideas cannot be innate in our minds or a priori (i.e., obvious on their own merits—from the Latin for “what is before”). Locke's philosophy, especially as expressed in his Essay Concerning Human Understanding (1690), was boosted by the prestige of Newton and in turn boosted the prestige of mathematical, experimental science. Locke claimed—what Newton himself, ironically, as a dedicated Puritan Christian, would have denied—that human beings have no special place in creation. Locke's impact was great both in England and in continental Europe. His emphasis on experience, limited knowledge, and tentative conclusions has become largely habitual in modern thought. Influenced by Locke and the new science, Hume argued for the total rejection of all beliefs that are not modeled on Newtonian physics: “If we take in our hand any volume; of divinity [theology] or school metaphysics, for instance; let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames: for it can contain nothing but sophistry and illusion” (An Enquiry Concerning Human Understanding, 1748).

IN CONTEXT: SCIENCE VS. RELIGION

Since the early days of the new science of mathematical law and universal determinism, some thinkers have protested against its spiritual effects. English poet William Blake (1757–1827) wrote in 1802, “May God us keep / From Single vision & Newton's sleep!” By this he meant, more or less, that the spiritual effect of the Newtonian worldview is deadly—that it empties the world, reducing beauty, meaning, and the like to the status of arbitrary experiences or illusions that are real only inside the individual human mind, which itself is reduced to irrelevance, a sort of bright shadow cast by the deterministic chemical-mechanical workings of the brain. Blake called this “Newton's sleep.”

In France, Fontenelle popularized the new science in works such as Conversations on the Plurality of Worlds (1686), a work of popular astronomy. Voltaire, writing somewhat later, touted the success of Newton in explaining the universe and argued for a semi-religious worldview in which reason reigned supreme and Christianity was rejected as superstition. Historian of science Herbert Butterfield (1900–1979) has said that Fontenelle, Voltaire, and the anti-religious, rationalistic writers that followed in their steps, the philosophes, not only spread the new scientific knowledge but performed a second function: “the translation of the scientific achievement into a new view of life and the universe.” Religion was not eliminated, of course, but its plausibility and authority were weakened.

The view of the world as a machine with no inherent properties except those that can be expressed in mathematical form—the view that arose and became commonplace in Europe in the seventeenth and eighteenth centuries—has had profound impact on all aspects of culture, including religion. Qualities such as beauty, meaning, holiness, and value, which were once assumed to be inherent in the physical world, have been relocated into the subjective realm of personal feeling: “Beauty in things exists merely in the mind which contemplates them,” wrote Hume. The position of religion as a source of knowledge about the world has been weakened by the all-embracing, illuminative powers of scientific laws: The universe has come to seem self-sufficient, self-explanatory.

Although the connection between science and diminished religious belief is not a logical one—science as such makes no statements, pro or con, about a God or gods, moral values, beauty, or anything else that cannot be measured—the rise of science as articulated in physical laws has made religious belief less plausible for some people. The more educated in science a person is, the less likely they are to have religious beliefs. In the United States, for example, the journal Scientific American reported in 1999 that belief in a prayer-answering supernatural god and personal immortality was affirmed by over 90% of the general public, only 50% of scientists with degrees at the B.S. level, and less than 10% of scientists in the elite National Academy of Sciences. The journal Skeptic conducted a poll in 1998 that found that 40% of scientists believe in a supernatural god, but that the rate was lowest (20%) for physicists—those scientists most directly concerned with discovering and testing fundamental, universal physical laws on the Galilean model.

There is no consensus among either experts or the general population on the significance of these changes—only on the fact that they have happened.

Primary Source Connection

The following essay by the American physicist Richard Feynman (1918–1988) was published in The Feynman Lectures on Physics. Feynman won the Nobel Prize in Physics in 1965 for his contributions to the advancement of quantum electrodynamics (QED). He also illustrated the mathematical laws of subatomic particles in what later became known as Feynman diagrams. He participated in teams that developed the atomic bomb and was instrumental in determining that leaking O-rings were the cause of the space shuttle Challenger disaster. Feynman popularized the study of physics by writing entertaining accounts such as Surely You're Joking, Mr. Feynman! and through his respected lectures. This essay, from one of Feynman's introductory lectures, explores teaching and learning physics through experimentation, approximation, and imagination.

ATOMS IN MOTION

This two-year course in physics is presented from the point of view that you, the reader, are going to be a physicist. This is not necessarily the case of course, but that is what every professor in every subject assumes! If you are going to be a physicist, you will have a lot to study: two hundred years of the most rapidly developing field of knowledge that there is. So much knowledge, in fact, that you might think that you cannot learn all of it in four years, and truly you cannot; you will have to go to graduate school too!

Surprisingly enough, in spite of the tremendous amount of work that has been done for all this time it is possible to condense the enormous mass of results to a large extent—that is, to find laws which summarize all our knowledge. Even so, the laws are so hard to grasp that it is unfair to you to start exploring this tremendous subject without some kind of map or outline of the relationship of one part of the subject of science to another. Following these preliminary remarks, the first three chapters will therefore outline the relation of physics to the rest of the sciences, the relations of the sciences to each other, and the meaning of science, to help us develop a “feel” for the subject.

You might ask why we cannot teach physics by just giving the basic laws on page one and then showing how they work in all possible circumstances, as we do in Euclidean geometry, where we state the axioms and then make all sorts of deductions. (So, not satisfied to learn physics in four years, you want to learn it in four minutes?) We cannot do it in this way for two reasons. First, we do not yet know all the basic laws: there is an expanding frontier of ignorance. Second, the correct statement of the laws of physics involves some very unfamiliar ideas which require advanced mathematics for their description. Therefore, one needs a considerable amount of preparatory training to learn what the words mean. No, it is not possible to do it that way. We can only do it piece by piece.

Each piece, or part, of the whole of nature is always merely an approximation to the complete truth, or the complete truth so far as we know it. In fact, everything we know is only some kind of approximation, because we know that we do not know all the laws as yet. Therefore, things must be learned only to be unlearned again, or more likely, to be corrected.

The principle of science, the definition, almost, is the following: The test of all knowledge is experiment. Experiment is the sole judge of scientific “truth.” But what is the source of knowledge? Where do the laws that are to be tested come from? Experiment, itself, helps to produce these laws, in the sense that it gives us hints. But also needed is imagination to create from these hints the great generalizations—to guess at the wonderful, simple, but very strange patterns beneath them all, and then to experiment to check again whether we have made the right guess. This imagining process is so difficult that there is a division of labor in physics: there are theoretical physicists who imagine, deduce, and guess at new laws, but do not experiment; and then there are experimental physicists who experiment, imagine, deduce, and guess.

We said that the laws of nature are approximate: that we first find the “wrong” ones, and then we find the “right” ones. Now, how can an experiment be “wrong?”

First, in a trivial way: if something is wrong with the apparatus that you did not notice. But these things are easily fixed, and checked back and forth. So without snatching at such minor things, how can the results of an experiment be wrong? Only by being inaccurate. For example, the mass of an object never seems to change: a spinning top has the same weight as a still one. So a “law” was invented: mass is constant, independent of speed. That “law” is now found to be incorrect. Mass is found to increase with velocity, but appreciable increases require velocities near that of light. A true law is: if an object moves with a speed of less than one hundred miles a second the mass is constant to within one part in a million. In some such approximate form this is a correct law. So in practice one might think that the new law makes no significant difference. Well, yes and no. For ordinary speeds we can certainly forget it and use the simple constant-mass law as a good approximation. But for high speeds we are wrong, and the higher the speed, the more wrong we are.

Finally, and most interesting, philosophically we are completely wrong with the approximate law. Our entire picture of the world has to be altered even though the mass changes only by a little bit. This is a very peculiar thing about the philosophy, or the ideas, behind the laws. Even a very small effect sometimes requires profound changes in our ideas.

Now, what should we teach first? Should we teach the correct but unfamiliar law with its strange and difficult conceptual ideas, for example the theory of relativity, four-dimensional space-time, and so on? Or should we first teach the simple “constant-mass” law, which is only approximate, but does not involve such difficult ideas? The first is more exciting, more wonderful, and more fun, but the second is easier to get at first, and is a first step to a real understanding of the second idea. This point arises again and again in teaching physics. At different times we shall have to resolve it in different ways, but at each stage it is worth learning what is now known, how accurate it is, how it fits into everything else, and how it may be changed when we learn more.

Richard Feynman

Feynman, Richard. “Atoms in Motion.” The Feynman Lectures on Physics. Boston: Addison-Wesley, 1963. Reprinted with permission of California Institute of Technology.

See Also Astronomy and Cosmology: A Mechanistic Universe; Maxwell's Equations: Light and the Electromagnetic Spectrum; Physics: Aristotelian Physics; Physics: Fundamental Forces and the Synthesis of Theory; Physics: Heisenberg Uncertainty Principle; Physics: Newtonian Physics; Physics: Special and General Relativity; Physics: The Quantum Hypothesis; Science Philosophy and Practice: Postmodernism and the “Science Wars”; Science Philosophy and Practice: Pseudoscience and Popular Misconceptions; Science Philosophy and Practice: The Scientific Method.

Bibliography

Books

Bell, Arthur. Newtonian Science. London: Edward Arnold Publishers Ltd., 1961.

Cohen, I. Bernard. The Newtonian Revolution: With Illustrations of the Transformation of Scientific Ideas. New York: Cambridge University Press, 1980.

Crombie, A.C. Medieval and Early Modern Science, Vol. II: Science in the Later Middle Ages and Early Modern Times: XIII–XVII Centuries. New York: Doubleday, 1959.

Deason, Gary B. “Reformation Theology and the Mechanistic Conception of Nature.” In God and Nature: Historical Essays on the Encounter between Christianity and Science. Edited by David C. Lindberg and Ronald L. Numbers. Berkeley: University of California Press, 1986.

Feynman, Richard. “Atoms in Motion.” The Feynman Lectures on Physics. Boston: Addison-Wesley, 1963.

Feynman, Richard. The Character of Physical Law. Cambridge, MA: The M.I.T. Press, 1967.

Kuhn, Thomas. The Essential Tension. Chicago: University of Chicago Press, 1977.

Purrington, Robert D. Physics in the Nineteenth Century. New Brunswick, NJ: Rutgers University Press, 1997.

Von Laue, Max. History of Physics. New York: Academic Press Inc., 1950.

Periodicals

Badash, Lawrence. “The Completeness of Nineteenth-Century Science.” Isis, Vol. 63, No. 1 (March 1972): 48–58.

Larson, Edward J., and Larry Witham. “Scientists and Religion in America.” Scientific American (September 1999): 88–93.

Larry Gilman

K. Lee Lerner
